Instruction: Intimate partner violence against women: do victims cost health plans more?
Abstracts:
abstract_id: PUBMED:37551097
Intimate Partner Violence Polyvictimization and Health Outcomes. This study examines how gender interacts with polyvictimization patterns in survivors' health problems using 8,587 survivors of intimate partner violence from the National Intimate Partner and Sexual Violence Survey, a nationally representative sample collected in 2010. Polyvictimization included six categories that were created in our previous work: sexual violence, physical and psychological violence, coercive control, multiple violence, stalking, and psychological aggression. Multiple violence was associated with chronic pain, headache, difficulty sleeping, and poor health perception. Females experiencing coercive control were more likely to have chronic pain than males. The appropriate assessment of gendered patterns of polyvictimization, and relevant subsequent services and support will better address health problems among survivors.
abstract_id: PUBMED:28551446
Intimate partner violence and pregnancy: epidemiology and impact. Intimate partner violence is a significant public health problem in our society, affecting women disproportionately. Intimate partner violence takes many forms, including physical violence, sexual violence, stalking, and psychological aggression. While the scope of intimate partner violence is not fully documented, nearly 40% of women in the United States are victims of sexual violence in their lifetimes and 20% are victims of physical intimate partner violence. Other forms of intimate partner violence are likely particularly underreported. Intimate partner violence has a substantial impact on a woman's physical and mental health. Physical disorders include the direct consequences of injuries sustained after physical violence, such as fractures, lacerations and head trauma, sexually transmitted infections and unintended pregnancies as a consequence of sexual violence, and various pain disorders. Mental health impacts include an increased risk of depression, anxiety, posttraumatic stress disorder, and suicide. These adverse health effects are amplified in pregnancy, with an increased risk of pregnancy outcomes such as preterm birth, low birthweight, and small for gestational age. In many US localities, suicide and homicide are leading causes of pregnancy-associated mortality. We herein review the issues noted previously in greater depth and introduce the basic principles of intimate partner violence prevention. We separately address current recommendations for intimate partner violence screening and the evidence surrounding effectiveness of intimate partner violence interventions.
abstract_id: PUBMED:29223207
Commentary on a Cochrane Review of Screening for Intimate Partner Violence in Health Care Settings. Intimate partner violence is a universal phenomenon that warrants awareness by all health care providers. This article summarizes a Cochrane Review on screening women for intimate partner violence in health care settings. The review authors identified 13 randomized controlled trials and quasi-randomized controlled trials that assessed the effectiveness of screening for intimate partner violence. The authors concluded that there was insufficient evidence to justify implementation of universal screening for intimate partner violence.
abstract_id: PUBMED:29211231
Intimate partner violence among health professionals: distribution by autonomous communities in Spain. OBJECTIVE To determine the prevalence of intimate partner violence among health care professionals who work in the Spanish National Health System, according to the autonomous communities of Spain. METHOD This was a descriptive cross-sectional multicenter study conducted with male and female health professionals (doctors, nurses, and nursing aides) in the different autonomous communities that are part of the Spanish National Health System. The following instruments were employed: among women, an intimate partner violence screening questionnaire; and among men, a questionnaire that screened for violence in the family environment. RESULTS A total of 1,039 health professionals participated in the study. Of these, 26% had suffered some type of abuse. Among the men, this prevalence was 2.7%, while among the women, it was 33.8%. There were differences in the prevalence of intimate partner violence among different autonomous communities, with the highest percentages in the Canary Islands. In terms of profession, 19.5% of the doctors had been exposed to intimate partner violence, while this percentage was 31% and 48.6% for nurses and nursing aides, respectively. CONCLUSION The results indicate the presence of intimate partner violence among healthcare personnel in most of the autonomous communities of Spain. The data demonstrate the need to implement action plans, both to support victims and to mitigate the problem.
abstract_id: PUBMED:33788212
Views of primary health care providers of the challenges to screening for intimate partner violence, Egypt. Background: Health care providers can play an important role in detection of intimate partner violence within health services but barriers exist.
Aims: This study aimed to determine the barriers that health care providers in Fayoum, Egypt, consider prevent them from screening for intimate partner violence.
Methods: This was a cross-sectional study between June 2018 and January 2019. The sample was health care providers (doctors, nurses, social workers and community workers) selected from government primary care centres in all seven districts of Fayoum. A validated Arabic version of the Domestic Violence Health Care Provider Survey was used to collect data.
Results: A total of 385 health care providers (92.7% women) agreed to participate (78.6% response rate). Just over half of the participants did not have access to social workers or community workers or strategies to help victims of intimate partner violence. None had received training on screening for domestic violence. More than half (59.7%) thought that investigating the cause of intimate partner violence was not part of medical practice. Sex was significantly associated with perceived self-efficacy, while age and occupation were significantly associated with referral management and health providers' attitude.
Conclusion: Primary health care providers perceived many barriers to screening for intimate partner violence. Training on screening for and managing intimate partner violence should be part of the professional development for all health care providers. An effective referral system is needed that ensures comprehensive services for victims.
abstract_id: PUBMED:30335574
Intimate Partner Sexual Violence: An Often Overlooked Problem. Background: Intimate partner sexual violence (IPSV) is a common but often overlooked form of intimate partner violence (IPV) that may have unique consequences for those who experience it. We aimed to explore how outcomes associated with IPSV differ from outcomes associated with other forms of intimate partner and sexual violence.
Methods: We conducted a narrative review of the English-language literature, including original research studies and reports that focused on outcomes associated with IPSV. We aimed to quantify the risk for health outcomes associated with exposure to IPSV in comparison with exposure to other forms of interpersonal violence or nonexposure to interpersonal violence.
Results: Twenty-eight publications were reviewed; most were small observational studies focused on women exposed to IPSV. Reported outcomes were related to mental health (n = 20 studies), physical and sexual health (n = 19 studies), and health of children with a parent exposed to IPSV (n = 1 study). Compared with other forms of interpersonal violence, exposure to IPSV was associated with greater risk for posttraumatic stress disorder and depressive symptoms, problematic substance use, suicidality, pain and other somatic symptoms, adverse sexual health problems, specific physical injuries including strangulation, and death by homicide. Children with an exposed parent were at higher risk for internalizing symptoms such as depression, anxiety, and somatization.
Conclusions: Sexual violence in intimate partner relationships is common and has distinct consequences compared with other forms of interpersonal violence including elevated risks for suicidality and death by homicide. It should be given special consideration within the assessment and management of interpersonal violence.
abstract_id: PUBMED:36266994
Concurrent Intimate Partner Violence: Survivors' Health and Help-Seeking. This study examined intimate partner violence patterns using the National Intimate Partner and Sexual Violence Survey, a nationally representative sample collected in 2010. The latent class analysis detected six distinctive patterns: Sexual Violence, Psychological Aggression, Multiple Violence, Coercive Control, Physical and Psychological Violence, and Stalking. Multiple Violence was the most common among males, while Coercive Control was the most common among females. Multiple Violence and Physical and Psychological Violence perpetrators inflicted more negative health consequences than the other types. Intervention and prevention approaches that consider perpetrator types as a part of survivor need assessments will improve services.
abstract_id: PUBMED:34344223
Intimate Partner Violence Stories of Appalachian Women. The purpose of this study was to explore past intimate partner violence as it occurred in Appalachian women residing in rural and non-urbanized areas. The methodology was qualitative description. Twelve former intimate partner violence female victims shared their experiences through the socioculturally appropriate tradition of story-telling. The meta-theme, Turning Points, reflecting the perceived non-linearity of intimate partner violence, was supported by three themes: (1) The Process of Abuse; (2) Learn from my Story; Don't Let it be Your Story; and (3) Does Where I Live Make a Difference? All participants experienced patterns of abuse that cycled in frequency and severity, similar injuries and health problems, fear, helplessness, and, at times, hopelessness. Application of knowledge gained through naturalistic methods can advance our understanding of intimate partner violence as it occurs in vulnerable populations and the depth and breadth of sociocultural influences which may affect the public health threat of this type of violence.
abstract_id: PUBMED:30796829
Health of Women Surviving Intimate Partner Violence: Impact of Injury and Fear. This study examined associations between experiencing physical violence and stalking perpetrated by an intimate partner, and the number of health problems women reported. The sample of 2,626 women who survived intimate partner violence (IPV) was extracted from the National Intimate Partner and Sexual Violence Survey 2010. Results showed associations between greater number of health problems and the following seven factors: injury, fear, poor mental health, advanced age, less education, lower income, and marriage; no association was found between number of health problems and number of incidents of physical violence, stalking, or both (in preceding year). The results imply that IPV has long-term adverse impacts on women's physical health.
abstract_id: PUBMED:37024170
Intimate Partner Violence and Sexual Violence. Intimate partner violence and sexual violence represent significant public health challenges that carry many individual and societal costs. More than 1 in 3 women (35.6%) and more than 1 in 4 men (28.5%) in the United States have experienced rape, physical violence, and/or stalking by an intimate partner in their lifetime. Clinicians play an integral role in the screening, identification, and management of these sensitive issues.
Answer: The abstracts provided do not directly address the specific question of whether victims of intimate partner violence (IPV) against women cost health plans more. However, they do provide evidence that IPV has significant health consequences for survivors, which could imply higher healthcare costs.
For instance, the study summarized in PUBMED:37551097 indicates that survivors of IPV experience a range of health problems, including chronic pain, headache, difficulty sleeping, and poor health perception. Similarly, PUBMED:28551446 highlights that IPV has substantial impacts on a woman's physical and mental health, including injuries, sexually transmitted infections, unintended pregnancies, and various pain disorders, as well as an increased risk of depression, anxiety, posttraumatic stress disorder, and suicide. These adverse health effects are amplified in pregnancy, with increased risks of poor pregnancy outcomes.
The study in PUBMED:30796829 found associations between a greater number of health problems and factors such as injury and fear in women who survived IPV. PUBMED:30335574 discusses the unique consequences of intimate partner sexual violence (IPSV), which is associated with greater risks for mental health issues, problematic substance use, pain, adverse sexual health problems, and death by homicide.
Given the range of health issues associated with IPV, it is reasonable to infer that victims may require more medical attention and support, potentially leading to higher healthcare costs for health plans. However, without specific data on healthcare utilization and costs associated with IPV victims, it is not possible to conclusively answer the question based on the provided abstracts.
Instruction: Can therapy dogs evoke awareness of one's past and present life in persons with Alzheimer's disease?
Abstracts:
abstract_id: PUBMED:24814254
Can therapy dogs evoke awareness of one's past and present life in persons with Alzheimer's disease? Background: Persons with Alzheimer's disease (AD) sometimes express themselves through behaviours that are difficult to manage for themselves and their caregivers, and to minimise these symptoms alternative methods are recommended. For some time now, animals have been introduced in different ways into the environment of persons with dementia. Animal-Assisted Therapy (AAT) includes prescribed therapy dogs visiting the person with dementia for a specific purpose.
Aim: This study aims to illuminate the meaning of the lived experience of encounters with a therapy dog for persons with Alzheimer's disease.
Method: Video-recorded sessions were conducted for each visit of the dog and its handler to a person with AD (10 times/person). The observations had a life-world approach and were transcribed and analysed using a phenomenological hermeneutical approach.
Results: The result shows a main theme 'Being aware of one's past and present existence', meaning to connect with one's senses and memories and to reflect upon these with the dog. The time spent with the dog shows the person recounting memories and feelings, and enables an opportunity to reach the person on a cognitive level.
Conclusions: The present study may contribute to health care research and provide knowledge about the use of trained therapy dogs in the care of older persons with AD in a way that might increase quality of life and well-being in persons with dementia.
Implications For Practice: The study might be useful for caregivers and dog handlers in the care of older persons with dementia.
abstract_id: PUBMED:35648383
Awareness of Diagnosis in Persons with Early-Stage Alzheimer's Disease: An Observational Study in Spain. Introduction: Limited information is available on people's experiences of living with Alzheimer's disease (AD) at earlier stages. This study assessed awareness of diagnosis among people with early-stage AD and its impact on different person-centered outcome measures.
Methods: We conducted an observational, cross-sectional study in 21 memory clinics in Spain. Persons aged 50-90 years, diagnosed with prodromal or mild AD (NIA/AA criteria), a Mini Mental State Examination (MMSE) score ≥ 22, and a Clinical Dementia Rating-Global score (CDR-GS) of 0.5 or 1.0 were recruited. The Representations and Adjustment to Dementia Index (RADIX) was used to assess participants' beliefs about their condition and its consequences.
Results: A total of 149 persons with early-stage AD were studied. Mean (SD) age was 72.3 (7.0) years and 50.3% were female. Mean duration of AD was 1.4 (1.8) years. Mean MMSE score was 24.6 (2.1) and 87.2% had a CDR-GS score of 0.5. Most participants (n = 84, 57.5%) used a descriptive term related to specific AD symptoms (e.g., memory difficulties) when asked what they called their condition. Participants aware of their diagnosis using the term AD (n = 66, 45.2%) were younger, had more depressive symptoms, and poorer life satisfaction and quality of life compared to those without awareness of their specific diagnosis. Practical and emotional consequences RADIX scores showed a significant negative correlation with Quality of Life in Alzheimer's Disease score (rho = - 0.389 and - 0.413, respectively; p < 0.0001). Years of education was the only predictor of awareness of AD diagnosis [OR = 1.04 (95% CI 1.00-1.08); p = 0.029].
Conclusions: Awareness of diagnosis was a common phenomenon in persons with early-stage AD negatively impacting their quality of life. Understanding illness representations in earlier stages may facilitate implementing optimized care that supports improved quality of life and well-being.
abstract_id: PUBMED:28958089
Awareness of Mild Cognitive Impairment and Mild Alzheimer's Disease Dementia Diagnoses Associated With Lower Self-Ratings of Quality of Life in Older Adults. Objective: This study examined how awareness of diagnostic label impacted self-reported quality of life (QOL) in persons with varying degrees of cognitive impairment.
Method: Older adults (n = 259) with normal cognition, Mild Cognitive Impairment (MCI), or mild Alzheimer's disease dementia (AD) completed tests of cognition and self-report questionnaires that assessed diagnosis awareness and multiple domains of QOL: cognitive problems, activities of daily living, physical functioning, mental wellbeing, and perceptions of one's daily life. We compared measures of QOL by cognitive performance, diagnosis awareness, and diagnostic group.
Results: Persons with MCI or AD who were aware of their diagnosis reported lower average satisfaction with daily life (QOL-AD), basic functioning (BADL Scale), and physical wellbeing (SF-12 PCS), and more difficulties in daily life (DEM-QOL) than those who were unaware (all p ≤ .007). Controlling for gender, those expecting their condition to worsen over time reported greater depression (GDS), higher stress (PSS), lower quality of daily life (QOL-AD, DEM-QOL), and more cognitive difficulties (CDS) compared to others (all p < .05).
Discussion: Persons aware of their diagnostic label-either MCI or AD-and its prognosis report lower QOL than those unaware of these facts about themselves. These relationships are independent of the severity of cognitive impairment.
abstract_id: PUBMED:30474400
Patterns of discrepancies in different objects of awareness in mild and moderate Alzheimer's disease. Objectives: Awareness is considered a heterogeneous and non-linear phenomenon in dementia. We aim to investigate patterns of change of different domains of awareness (awareness of cognitive functioning and health condition, activities of daily living, emotional state, social functioning, and relationships) in people with mild and moderate Alzheimer's disease (AD) and aspects related to each domain. Method: Cross-sectional assessment of dyads of people with AD (PwAD) and caregivers (n = 128; CDR1 = 74, CDR2 = 54). PwAD completed assessments about quality of life, cognition and their awareness of disease. Caregivers provided information about PwAD and received quality of life and burden of care assessments. Results: The mild AD group showed mildly impaired awareness (n = 40; 54.05%), while the moderate AD group showed a higher presence of moderately impaired awareness (n = 22; 40.74%). There was a significant difference between groups in awareness of cognitive functioning and health condition (p < 0.004), functional activity impairments (p < 0.001) and total score of awareness (p < 0.01). Conversely, awareness of emotional state (p = 0.22) and of social functioning and relationship (p = 0.44) presented no significant difference between groups. Unawareness of functional activity impairments showed higher discrepancy scores between PwAD and caregivers in both groups. Conclusions: Significant differences were found only in patterns of discrepancies in awareness of cognitive functioning and health condition, of ADL and socio-emotional functioning. Different factors are related to different domains in the mild and moderate groups, reinforcing the heterogeneity of awareness in dementia. ADL deficits have an important role in the awareness phenomenon, independent of the severity of disease.
abstract_id: PUBMED:15481783
Improving quality of life for persons with Alzheimer's disease and their family caregivers: brief occupational therapy intervention. Objective: This study examined the extent to which adherence to occupational therapy recommendations would increase the quality of life of persons with Alzheimer's disease living in the community and decrease the burden felt by family members caring for them.
Method: Using a pretest-posttest control group design, the Assessment of Instrumental Function (AIF) was administered to two groups of persons with Alzheimer's disease in their own homes (n= 40). Caregivers completed measures of their feelings of burden and the quality of life, including level of function of the persons with Alzheimer's disease.
Results: A significant (MANCOVA) main effect was obtained for caregiver burden and three components of quality of life, positive affect, activity frequency and self-care status, by the treatment group, F(4, 31) = 7.34, p < .001.
Conclusions: Individualized occupational therapy intervention based on the person-environment fit model appears effective for both caregivers and clients. This is especially important in light of a recent directive for more favorable reimbursement for occupational therapy services for persons with dementia.
abstract_id: PUBMED:35095469
Awareness for People With Alzheimer's Disease: Profiles and Weekly Trajectories. Objective: To understand awareness and fluctuations of awareness in Alzheimer's disease (AD), it is fruitful to consider the objects of awareness, e.g., cognitive functioning or recognition of the disease, as well as the mechanisms and modes of expression underlying awareness. With a holistic and discourse-centered approach, we aimed to identify different awareness profiles and test whether these profiles were stable or whether transitions from one profile to another occurred over short time intervals. Methods: Twenty-eight residents of nursing homes with a diagnosis of AD participated in four semistructured interviews at biweekly intervals. These interviews were cluster analyzed to determine profiles of awareness. A Markov chain was applied to model their fluctuation. Results: Five awareness profiles were observed that differed in terms of objects and underlying processes. Awareness proved to be quite stable for four of the five profiles. Interindividual variability in awareness was also observed through numerous different trajectories that were identified. Discussion: Self-awareness and disease awareness are characterized by profiles that vary subtly between individuals. Fluctuations in awareness underscore the need to employ assessment intervals that closely reflect daily life in institutions.
abstract_id: PUBMED:27294268
Self-reported and informant-reported memory functioning and awareness in patients with mild cognitive impairment and Alzheimer´s disease. Background: Awareness of subjective memory is an important factor for adequate treatment of patients with mild cognitive impairment (MCI) and Alzheimer's disease (AD). This study served to find out whether awareness of subjective memory complies with objective performance, if differences in awareness are observed longitudinally and whether decrease of awareness can serve as a predictor of AD in MCI patients.
Methods: Thirty-four patients with MCI seeking help in a memory outpatient clinic were included. All participants underwent thorough neuropsychological examination. Awareness of subjective memory was obtained by calculating difference scores between patient and informant ratings on a 16-item questionnaire concerning complaints about loss of memory in every-day life. Retesting was performed after a mean follow-up period of 24 months.
Results: Whole-group analyses showed that awareness remained relatively stable across time. Self-reported memory complaints correlated with episodic memory at baseline and with performance on a language task at follow-up. Retests showed a decrease in awareness. At group level, differences in awareness between the two assessment points were not significant for MCI patients or for MCI patients converting to mild AD at follow-up. The predictive value of awareness was low.
Conclusions: Awareness of subjective memory deficit is linked to episodic memory function and decreases with decline of cognitive ability. Further studies evaluating predictive power of awareness of subjective memory should include a larger patient sample.
abstract_id: PUBMED:35803005
Electroencephalographic signatures of dogs with presumptive diagnosis of canine cognitive dysfunction. Canine cognitive dysfunction (CCD) is a highly prevalent neurodegenerative disease considered the canine analog of early Alzheimer's disease (AD). Unfortunately, CCD cannot be cured. However, early therapeutic interventions can slow the progression of cognitive decline and improve quality of life of the patients; therefore, early diagnosis is ideal. In humans, electroencephalogram (EEG) findings specific to AD have been described, and some of them have successfully detected early stages of the disease. In this study we characterized the EEG correlates of CCD, and we compared them with the EEGs of healthy aging dogs and dogs at risk of developing CCD. EEG recordings were performed in 25 senior dogs during wakefulness. Dogs were categorized as normal, at risk of CCD, or with CCD according to their score in the Rofina questionnaire. We demonstrated that quantitative EEG can detect differences between normal dogs and dogs with CCD. Dogs with CCD experience a reduction in beta and gamma interhemispheric coherence and higher joint Lempel-Ziv complexity. Dogs at risk of developing CCD had higher alpha power and interhemispheric coherence, making these features potential markers of early stages of the disease. These results demonstrate that quantitative EEG analysis could aid the diagnosis of CCD and reinforce CCD as a translational model of early AD.
abstract_id: PUBMED:29844911
End-of-life decision making by family caregivers of persons with advanced dementia: A literature review of decision aids. Objectives: To investigate existing knowledge in the literature about end-of-life decision making by family caregivers of persons with dementia, focusing on decision aids for caregivers of persons with advanced dementia, and to identify gaps in the literature that can guide future research.
Methods: A literature review through systematic searches in PubMed, CINAHL Plus with Full Text, and PsycINFO was conducted in February 2018; publications with full text in English and published in the past 10 years were selected in multiple steps.
Results: The final sample included five decision aids with predominantly Caucasian participants; three of them had control groups, and three used audiovisual technology in presenting the intervention materials. No other technology was used in any intervention. Existing interventions lacked tailoring of information to caregivers' preferences for different types and amounts of information necessary to make decisions consistent with patients' values.
Conclusion: Research is needed in exploring the use of technology in decision aids that could provide tailored information to facilitate caregivers' decision making. More diverse samples are needed.
abstract_id: PUBMED:36878603
The past and present of therapeutic strategy for Alzheimer's diseases: potential for stem cell therapy. Alzheimer's disease (AD), a progressive neurodegenerative disease characterized by cognitive dysfunction and neuropsychiatric symptoms, is the most prevalent form of dementia among the elderly. Amyloid aggregation, tau hyperphosphorylation, and neural cell loss are the main pathological features. Various hypotheses have been proposed to explain the development of AD. Some therapeutic agents have shown clinical benefits in patients with AD; however, many of these agents have failed. The degree of neural cell loss is associated with the severity of AD. Adult neurogenesis, which governs cognitive and emotional behaviors, occurs in the hippocampus, and some research groups have reported that neural cell transplantation into the hippocampus improves cognitive dysfunction in AD model mice. Based on these clinical findings, stem cell therapy for patients with AD has recently attracted attention. This review provides past and present therapeutic strategies for the management and treatment of AD.
Answer: Yes, therapy dogs can evoke awareness of one's past and present life in persons with Alzheimer's disease (AD). A study aimed at illuminating the meaning of the lived experience of encounters with a therapy dog for persons with AD found that the time spent with the dog allowed the person to connect with their senses and memories and to reflect upon these with the dog. The result shows a main theme of 'Being aware of one's past and present existence,' which indicates that therapy dogs can indeed help individuals with AD recount memories and feelings, thereby enabling an opportunity to reach the person on a cognitive level (PUBMED:24814254).
Instruction: Does the public receive and adhere to boil water advisory recommendations?
Abstracts:
abstract_id: PUBMED:26733450
Does the public receive and adhere to boil water advisory recommendations? A cross-sectional study in Newfoundland and Labrador, Canada. Background: Highly publicized water supply problems highlight the importance of safe drinking water to the public. Boil water advisories (BWAs) are an important precautionary measure meant to protect public health by ensuring drinking water safety. Newfoundland and Labrador, Canada is a prime location for exploring public notification practices and adherence to recommendations as there were a total of 215 BWAs, affecting 6 % of the provincial population, in 145 communities between April 2006 and March 2007 when data for the present study were collected.
Methods: Residents who received household water from a public water supply were randomly selected for a telephone interview. Collected data included participants' notification of boil water advisory, satisfaction with information provided, and their adherence to recommendations.
Results: Most participants learned that a BWA had been issued or lifted in their community through radio, television, or word of mouth. BWAs were issued for a range of operational reasons. Almost all participants who had experienced a BWA reported wanting more information about the reasons a BWA had been issued. Low adherence to water use recommendations during a BWA was common.
Conclusions: This study is the first to report on public adherence to boil water advisory recommendations in Canada. The findings raise public health concerns, particularly given the high number of BWAs issued each year. Further studies in partnership with community stakeholders and government decision-makers responsible for overseeing public water systems are needed to assess the perceptions of BWAs and the reasons for non-adherence, and to identify information dissemination methods to increase information uptake and public adherence to acceptable uses of public drinking water during a BWA.
abstract_id: PUBMED:26938499
A meta-analysis of public compliance to boil water advisories. Water utilities that generally provide continuous and reliable service to their customers may sometimes issue an advisory notification when service is interrupted or water quality is compromised. When the contamination is biological, utilities or the local public health agencies issue a 'boil water advisory' (BWA). The public health effectiveness of a BWA depends strongly on an implicit public understanding and compliance. In this study, a meta-analysis of 11 articles that investigated public compliance to BWA notifications was conducted. Awareness of BWA was moderately high, except in situations involving extreme weather. Reported rates of compliance were generally high, but when rate of awareness and non-compliant behavior such as brushing teeth were factored in, the median effective compliance rate was found to be around 68 percent. This does not include situations where people forgot to boil water for some part of the duration, or ingested contaminated water after the BWA was issued but before they became aware of the notification. The two-thirds compliance rate is thus an over-estimate. Results further suggest that timeliness of receipt, content of the advisory, and number of sources reporting the advisory have a significant impact on public response and compliance. This analysis points to improvements in the phrasing and content of BWA notices that could result in greater compliance, and recommends the use of a standard protocol to limit recall bias and capture the public response accurately.
abstract_id: PUBMED:16779145
Issue a boil-water advisory or wait for definitive information? A decision analysis. Objective: Study the decision to issue a boil-water advisory in response to a spike in sales of diarrhea remedies or wait 72 hours for the results of definitive testing of water and people.
Methods: Decision analysis.
Results: In the base-case analysis, the optimal decision is test-and-wait. If the cost of issuing a boil-water advisory is less than 13.92 cents per person per day, the optimal decision is to issue the boil-water advisory immediately.
Conclusions: Decisions based on surveillance data that are suggestive but not conclusive about the existence of a disease outbreak can be modeled.
abstract_id: PUBMED:33187509
"Is there anything good about a water advisory?": an exploration of the consequences of drinking water advisories in an indigenous community. Background: In Ontario, Canada, Indigenous communities experience some of the province's worst drinking water, with issues ranging from deteriorating water quality to regulatory problems and lack of support. When water is known, or suspected, to be unsafe for human consumption, communities are placed under a Drinking Water Advisory. Between 2004 and 2013, approximately 70% of all on-reserve communities in Ontario were under at least one Drinking Water Advisory. Despite the widespread impact of Drinking Water Advisories on health and wellbeing, little is known about First Nation individuals' perceptions and experiences living with a Drinking Water Advisory. This study presents information shared by members of a community who have lived with Boil Water Advisories on and off for many years, and a long-term Boil Water Advisory since 2017. The goal of this paper is to unpack and explore the Boil Water Advisories from the perspective of community members and provide considerations for current and future Boil Water Advisory management.
Methods: Methodological choices were driven by the principles of community-based participatory research. Two data collection methodologies were employed: hard copy surveys and interviews.
Results: Forty-four individuals (19.5%) completed a survey. Eight Elders and 16 key informants participated in 20 interviews. Respondents expressed varying degrees of uncertainty regarding protective actions to take while under a Boil Water Advisory. Further, 79% of men but only 46% of women indicated they always adhere to the Boil Water Advisory. Knowledge gaps that could lead to risky behaviours were also identified. Finally, Boil Water Advisories were demonstrated to have physical, financial, and time impacts on the majority of respondents.
Conclusions: A direct outcome was the identification of a critical need to reinforce best practices for health protection through community education and outreach. More broadly, Chief and Council were able to use the findings to successfully advocate for improved drinking water for the community. Additionally, benefits of participatory research and community ownership include enhanced local research capacity, and increased awareness of, and desire for, research to inform decisions.
abstract_id: PUBMED:31266668
Chile's National Advisory Committee on Immunization (CAVEI): Evidence-based recommendations for public policy decision-making on vaccines and immunization. A National Immunization Technical Advisory Group (NITAG) provides independent, evidence-based recommendations to the Ministry of Health for immunization programmes and policy formulation. In this article, we describe the structure, functioning and work processes of Chile's NITAG (CAVEI) and assess its functionality, quality of work processes and outputs, and integration of the committee into the Ministry of Health policy process using the Assessment tool for National Immunization Technical Advisory Groups. Among its strengths, CAVEI's administrative and work plasticity allows it to respond in a timely manner to the Ministry of Health's requests and proactively raise subjects for review. Representation of multiple areas of expertise within the committee makes CAVEI a robust and balanced entity for the development of evidence-based comprehensive recommendations. High ranking profile of the Secretariat structure furthers CAVEI's competences in policymaking and serves as a bridge between the committee and international initiatives in the field of immunizations.
abstract_id: PUBMED:31464621
Compliance with water advisories after water outages in Norway. Background: Water advisories, especially those concerning boiling drinking water, are widely used to reduce risks of infection from contaminants in the water supply. Since the effectiveness of boil water advisories (BWAs) depends on public compliance, monitoring the public response to such advisories is essential for protecting human health. However, assessments of public compliance with BWAs remain sparse. Thus, this study was aimed at investigating awareness and compliance among residents who had received BWAs in Baerum municipality in Norway.
Method: We conducted a cross-sectional study among 2764 residents who had received water advisories by SMS in the municipality of Baerum between January and September 2017. We analysed data from two focus group discussions and an online survey sent to all residents who had received an advisory. We conducted descriptive analyses and calculated odds ratios (OR) using logistic regression to identify associations of compliance and awareness with demographic characteristics.
Results: Of the 611 respondents, 67% reported that they had received a water advisory notification. Effective compliance rate with safe drinking water practices, either by storing clean drinking water or boiling tap water, after a water outage was 72% among those who remembered receiving a notification. Compliance with safe drinking water advisories was lower among men than women (OR 0.53, 95% CI 0.29-0.96), but was independent of age, education and household type. The main reason for respondents' non-compliance with safe water practices was that they perceived the water to be safe to drink after letting it flush through the tap until it became clear.
Conclusions: Awareness of advisories was suboptimal among residents who had received notifications, but compliance was high. The present study highlights the need to improve the distribution, phrasing and content of water advisory notifications to achieve greater awareness and compliance. Future studies should include hard-to-reach groups with adequate data collection approaches and examine the use of BWAs in a national context to inform future policies on BWAs.
abstract_id: PUBMED:11824683
Application of monitoring data for Giardia and Cryptosporidium to boil water advisories. Despite the problems associated with analyzing water samples for Giardia cysts and Cryptosporidium oocysts, the data can be very useful if their strengths and weaknesses are understood. Two municipalities in northern Ontario, Temagami and Thunder Bay, both issued boil water advisories for Giardia contamination. Data from these two cities are compared to show that only one municipality experienced a real outbreak, whereas the other did not. The concentration of Giardia cysts was much higher than background during the outbreak at Temagami, and the postoutbreak concentrations of cysts were very similar to the long-term average cyst concentration at Thunder Bay. The waterborne outbreak of giardiasis at Temagami was characterized by consistent positive results from water samples, concentrations two to three orders of magnitude higher than normal, and an obvious increase in the number of cases of giardiasis in the population. No outbreak was experienced at Thunder Bay, but a boil water advisory (BWA) was set in place for more than a year on the basis of a single sample from Loch Lomond in which only two cysts were detected but the sample equivalent volume was low. This gave the impression of a sudden increase in concentration, but 39 of 41 subsequent samples were negative. Additional factors that led to a BWA at Thunder Bay are described, and recommendations are presented to help determine when a BWA is necessary and when it should be rescinded.
abstract_id: PUBMED:31304643
Association Between Food and Drug Administration Advisory Committee Recommendations and Agency Actions, 2008-2015. Policy Points: Food and Drug Administration (FDA) advisory committee recommendations and the agency's final actions exhibit high rates of agreement, with cases of disagreement tending to reflect the proposed action type and degree of advisory committee consensus. In the case of disagreements, the FDA tended to be less likely than its advisory committees to approve new products, approve new supplemental indications, or enact new safety changes. These findings raise important issues regarding the factors that differentially shape decision making by advisory committees and the FDA as an agency, including institutional or reputational concerns.
Context: The Food and Drug Administration (FDA) convenes advisory committees to provide external scientific counsel on potential agency actions and to inform regulatory decision making. The degree to which advisory committees and their respective agency divisions disagree on recommendations has not been well characterized across product and action types.
Methods: We examined public documents from FDA advisory committee meetings and medical product databases for all FDA advisory committee meetings from 2008 through 2015. We classified the 376 voting meetings in that period by medical product, regulatory, and advisory committee meeting characteristics. We used multivariable logistic regression to determine the associations between these characteristics and discordance between the advisory committee's recommendations and the FDA's final actions.
Findings: Twenty-two percent of the FDA's final actions were discordant with the advisory committee's recommendations. Of these, 75% resulted in the FDA making more restrictive decisions after favorable committee recommendations, and 25% resulted in the agency making less restrictive decisions after unfavorable committee recommendations. Discordance was associated with lower degrees of advisory committee consensus and was more likely for agency actions focused on medical product safety than for novel approvals or supplemental indications. Statements by public speakers, advisory committee conflicts of interest, and media coverage were not associated with discordance between the committee and the agency.
Conclusions: The FDA disagrees with the recommendation of its advisory committees a minority of the time, and in these cases it tends to be less likely to approve new products or supplemental indications and take safety actions. Deviations from recommendations thus offer an opportunity to understand the factors influencing decisions made by both the agency and its expert advisory groups.
abstract_id: PUBMED:34001570
Consent for blood transfusion: summary of recommendations from the Advisory Committee for the Safety of Blood, Tissues and Organs (SaBTO). The Advisory Committee on the Safety of Blood, Tissues and Organs (SaBTO) decided that its 2011 recommendations on consent for blood transfusion needed to be reviewed and revised due to evidence of poor compliance and recent legal guidance on consent. The recommendations are to ensure that patients are informed about and understand the purpose, benefits and potential risks of transfusion, and have an opportunity to discuss their treatment options. They should be incorporated into local practices for all patients.
abstract_id: PUBMED:32730235
Essential Components of a Public Health Tuberculosis Prevention, Control, and Elimination Program: Recommendations of the Advisory Council for the Elimination of Tuberculosis and the National Tuberculosis Controllers Association. This report provides an introduction and reference tool for tuberculosis (TB) controllers regarding the essential components of a public health program to prevent, control, and eliminate TB. The Advisory Council for the Elimination of Tuberculosis and the National Tuberculosis Controllers Association recommendations in this report update those previously published (Advisory Council for the Elimination of Tuberculosis. Essential components of a tuberculosis prevention and control program. Recommendations of the Advisory Council for the Elimination of Tuberculosis. MMWR Recomm Rep 1995;44[No. RR-11]). The report has been written collaboratively on the basis of experience and expert opinion on approaches to organizing programs engaged in diagnosis, treatment, prevention, and surveillance for TB at state and local levels. This report reemphasizes the importance of well-established priority strategies for TB prevention and control: identification of and completion of treatment for persons with active TB disease; finding and screening persons who have had contact with TB patients; and screening, testing, and treatment of other selected persons and populations at high risk for latent TB infection (LTBI) and subsequent active TB disease. Health departments are responsible for public safety and population health. To meet their responsibilities, TB control programs should institute or ensure completion of numerous responsibilities and activities described in this report: preparing and maintaining an overall plan and policy for TB control; maintaining a surveillance system; collecting and analyzing data; participating in program evaluation and research; prioritizing TB control efforts; ensuring access to recommended laboratory and radiology tests; identifying, managing, and treating contacts and other persons at high risk for Mycobacterium tuberculosis infection; managing persons who have TB disease or who are being evaluated for TB disease; providing TB training and education; and collaborating in the coordination of patient care and other TB control activities. Descriptions of CDC-funded resources, tests for evaluation of persons with TB or LTBI, and treatment regimens for LTBI are provided (Supplementary Appendices; https://stacks.cdc.gov/view/cdc/90289).
Answer: The public does receive boil water advisory (BWA) recommendations, but adherence to these recommendations varies. A cross-sectional study in Newfoundland and Labrador, Canada, found that most participants learned about BWAs through radio, television, or word of mouth, but adherence to water use recommendations during a BWA was commonly low (PUBMED:26733450). A meta-analysis of public compliance to BWA notifications reported that awareness of BWAs was moderately high, except in extreme weather situations, and that compliance rates were generally high. However, when considering factors such as brushing teeth and the timeliness of receiving the advisory, the median effective compliance rate was around 68 percent, which is likely an overestimate (PUBMED:26938499).
In Norway, a study found that awareness of advisories was suboptimal among residents who had received notifications, but compliance was high among those who remembered receiving a notification, with an effective compliance rate of 72% (PUBMED:31464621). In an indigenous community in Ontario, Canada, respondents expressed varying degrees of uncertainty regarding protective actions to take while under a BWA, and adherence varied by gender, with 79% of men but only 46% of women indicating they always adhered to the BWA (PUBMED:33187509).
These findings suggest that while the public does receive BWA recommendations, there are challenges in ensuring full compliance. Factors influencing adherence include the timeliness of receipt, content of the advisory, the number of sources reporting the advisory, and individual perceptions of risk (PUBMED:26938499; PUBMED:31464621). To improve public adherence, studies recommend improvements in the distribution, phrasing, and content of water advisory notifications, as well as community education and outreach (PUBMED:33187509; PUBMED:31464621). |
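To make the arithmetic behind the 68% figure concrete, the sketch below shows one way an effective compliance rate can be derived by discounting reported compliance for incomplete awareness. The meta-analysis (PUBMED:26938499) does not publish its exact weighting, so the formula and the input values here are illustrative assumptions rather than the study's actual model.

```python
# Illustrative sketch only -- not the published meta-analysis model.
# "Effective compliance" is treated here as the share of all households that
# (a) become aware of the boil water advisory and (b) then comply fully,
# including for easily forgotten uses such as brushing teeth.

def effective_compliance(awareness_rate: float, full_compliance_given_aware: float) -> float:
    """Fraction of all households effectively protected by the advisory."""
    return awareness_rate * full_compliance_given_aware

# Hypothetical inputs chosen only to land near the reported median of ~68%:
# 85% of households hear about the advisory and 80% of those comply fully.
print(f"{effective_compliance(0.85, 0.80):.0%}")  # -> 68%
```

Even a figure derived this way overstates protection, since, as the abstract notes, it ignores households that forgot to boil water for part of the advisory period or drank tap water before becoming aware of the notice.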
Instruction: Cancelled stereotactic biopsy of calcifications not seen using the stereotactic technique: do we still need to biopsy?
Abstracts:
abstract_id: PUBMED:24217642
Cancelled stereotactic biopsy of calcifications not seen using the stereotactic technique: do we still need to biopsy? Objective: To determine the frequency of cancelled stereotactic biopsy due to non-visualisation of calcifications, and assess associated features and outcome data.
Methods: A retrospective review was performed on 1,874 patients scheduled for stereotactic-guided breast biopsy from 2009 to 2011. Medical records and imaging studies were reviewed.
Results: Of 1,874 stereotactic biopsies, 76 (4 %) were cancelled because of non-visualisation of calcifications. Prompt histological confirmation was obtained in 42/76 (55 %). In 28/76 (37 %) follow-up mammography was performed, and 7/28 subsequently underwent biopsy. Of 27 without biopsy, 21 (78 %) had follow-up. Nine cancers (9/49, 18 %) were found: 6 ductal carcinoma in situ (DCIS), 3 infiltrating ductal carcinoma (IDC). Of 54 patients with either biopsy or at least 2 years' follow-up, 9 (17 %) had cancer (95 % CI 8-29). Cancer was present in 7/42 (17 %, 95 % CI 7-31 %) lesions that had prompt histological confirmation (DCIS = 5, IDC = 2) and in 2/28 (7 %, 95 % CI 0.8-24 %) lesions referred for follow-up (DCIS = 1, IDC = 1). Neither calcification morphology (P = 0.2), patient age (P = 0.7), breast density (P = 1.0), personal history (P = 1.0) nor family history of breast cancer (P = 0.5) had a significant association with cancer.
Conclusion: Calcifications not visualised on the stereotactic unit are not definitely benign and require surgical biopsy or follow-up. No patient or morphological features were predictive of cancer.
Key Points: • Half of cancelled stereotactic biopsies were due to non-visualisation of calcified foci. • This reflects the improved detection of calcifications by digital mammography. • Calcifications too faint for the stereotactic technique require alternative biopsy or follow-up. • 17 % of patients with biopsy or at least 2 years' follow-up had cancer. • No patient/morphological features were found to aid selection for re-biopsy vs. follow-up.
abstract_id: PUBMED:37443627
Stepwise Implementation of 2D Synthesized Screening Mammography and Its Effect on Stereotactic Biopsy of Microcalcifications. Rationale And Objectives: Information evaluating the efficacy of 2D synthesized mammography (2Ds) reconstructions in microcalcification detection is limited. This study used stereotactic biopsy data for microcalcifications to evaluate the stepwise implementation of 2Ds in screening mammography. The study aim was to identify whether 2Ds + digital breast tomosynthesis (DBT) is non-inferior to 2D digital mammography (2DM) + 2Ds + DBT, 2DM + DBT, and 2DM in identifying microcalcifications undergoing further diagnostic imaging and stereotactic biopsy.
Materials And Methods: Retrospective stereotactic biopsy data were extracted following 151,736 screening mammograms of healthy women (average age, 56.3 years; range, 30-89 years), performed between 2012 and 2019. The stereotactic biopsy data were separated into 2DM, 2DM + DBT, 2DM + 2Ds + DBT, and 2Ds + DBT arms and examined using Fisher's exact test to compare the detection rates of all cancers, invasive cancers, DCIS, and ADH between modalities for patients undergoing stereotactic biopsy of microcalcifications.
Results: No statistical significance in cancer detection was seen for 2Ds + DBT among those calcifications that underwent stereotactic biopsy when comparing the 2Ds + DBT to 2DM, 2DM + DBT, and 2DM + 2Ds + DBT imaging combinations.
Conclusion: These data suggest that 2Ds + DBT is non-inferior to 2DM + DBT in detecting microcalcifications that will undergo stereotactic biopsy.
abstract_id: PUBMED:30321753
Stereotactic breast biopsy efficiency: Does a pre-biopsy grid image help? Objective: Prior to stereotactic breast biopsy, some radiologists obtain a mammogram image with an overlying alphanumeric grid to mark the skin overlying the target. Our purpose is to determine if this grid image affects stereotactic biopsy efficiency and accuracy, including total images obtained, procedure time and need for retargeting.
Materials And Methods: IRB approved, HIPAA compliant retrospective review of prone stereotactic biopsy cases targeting calcifications 9/1/2015 to 9/1/2016 was performed. Images and reports were reviewed for number and type of images obtained, evidence of retargeting and biopsy table time. Attending radiologist, technologist and trainee involvement were recorded. Statistical analysis was performed utilizing SAS statistical software v 9.4 (SAS Institute, Cary, NC).
Results: Of 463 women (avg age 58.0 years, range 30-94), 392/463 (84.7%) had grid images obtained pre-biopsy. Grid patients had more images total than non-grid (avg 9.26 versus 8.44 images/patient; p < 0.0001) but spent less time on the biopsy table (avg 15 min 2 s versus 16 min 44 s/procedure; p < 0.0001). Non-grid patients were more likely to undergo initial retargeting (45% non-grid vs 30% of grid patients; p = 0.013); however, later retargeting after needle placement was comparable (p = 0.3).
Conclusion: Grid imaging increases images obtained but decreases retargeting and biopsy table time at the expense of mammogram room/technologist time to obtain the grid image. The overall result is longer total procedure time (grid time plus table time) for the patient/technologist. A grid image therefore has limited usefulness and should be used judiciously in cases where prone positioning is challenging to patients.
abstract_id: PUBMED:27052522
Stereotactic Biopsy of Segmental Breast Calcifications: Is Sampling of Anterior and Posterior Components Necessary? Rationale And Objectives: Core needle biopsy results of segmental calcifications on mammography can have direct impact on surgical management. Although dependent on breast size, cancer spanning greater than 5 cm is usually treated with mastectomy, and cancer less than 5 cm is managed with lumpectomy. Approach to stereotactic biopsy of morphologically similar segmental calcifications that span more than 5 cm on mammography varies geographically and is currently largely based on preference of the surgical or medical oncology colleagues. Some clinicians prefer biopsy of the anterior and posterior aspects of the abnormality, whereas others believe a single biopsy within the abnormality is adequate. There is insufficient data to support whether a single biopsy of calcifications is adequate to establish the need for mastectomy, or if pathology-proven cancer in the anterior and posterior components to define the extent of disease is required. This study aims to evaluate concordance rates of paired biopsies of suspicious segmental mammographic calcifications.
Materials And Methods: From a 5-year review of our imaging database, 32 subjects were identified with breast imaging reporting and data system (BI-RADS) 4 or 5 segmental calcifications on mammography who underwent anterior and posterior stereotactic biopsies. The paired biopsy results were independently analyzed for concordance on benign, high-risk, or malignant pathology.
Results: Of the 32 cases, there was perfect agreement (32/32 cases = 100% concordance, 95% confidence interval = 89.3-100%) in anterior and posterior pairs in benign, high-risk, or malignant findings (kappa = 1, P < 0.001).
Conclusions: The absence of data on pathological concordance in anterior and posterior aspects of suspicious, morphologically similar, segmental calcifications spanning 5 cm or more has led to a varied clinical approach to stereotactic biopsy. The 100% rate of pathological concordance in our study suggests that a single biopsy is adequate for diagnosis and representative of the whole mammographic abnormality. Implementation of this approach will potentially reduce unnecessary biopsies and surgeries, minimize healthcare costs, and decrease patient morbidity.
abstract_id: PUBMED:31563108
Stereotactic 9-gauge vacuum-assisted breast biopsy, how many specimens are needed? Purpose: To determine the minimum number of stereotactic 9-gauge vacuum-assisted biopsy specimens required to establish a final histopathological biopsy diagnosis of mammographically suspicious breast lesions.
Methods: This prospective single-center observational cohort study included 120 women referred for stereotactic vacuum-assisted biopsy of 129 mammographically suspicious lesions between December 2017 and October 2018. Stereotactic 9-gauge vacuum-assisted biopsy was performed, acquiring twelve specimens per lesion. Calcification retrieval was assessed with individual specimen radiography. Each specimen was histologically analyzed in chronological order and findings were compared with the final histopathological result after assessment of all twelve specimens and with results of surgical excision. Cumulative diagnostic yield per specimen was calculated.
Results: In total, 131 biopsy procedures were performed in 120 women (mean age 59 years). In 95% (95%CI 90%-98%) of the procedures a final histopathological diagnosis was reached after six specimens. After nine specimens the final biopsy diagnosis was established in all 131 cases. In the subgroup of 41 patients with a DCIS or invasive diagnosis at biopsy there were eight procedures (20%) where calcifications were retrieved before the diagnostic specimen was obtained. Underestimation of subsequent resection diagnosis occurred in six out of 30 excised lesions classified as DCIS (20%) and in one out of four excised high-risk lesions.
Conclusions: With six stereotactic 9-gauge vacuum-assisted biopsy specimens a final histopathological biopsy diagnosis could be established in 95% (95%CI 90%-98%) of the biopsy procedures. Taking nine 9-gauge specimens seems to be optimal. Ending the stereotactic vacuum-assisted breast biopsy procedure as soon as calcifications are retrieved may cause false negative results.
abstract_id: PUBMED:24989112
Sentinel lymph node biopsy is not necessary in patients diagnosed with ductal carcinoma in situ of the breast by stereotactic vacuum-assisted biopsy. Background: This study evaluated the role and need of a sentinel lymph node biopsy (SLNB) in patients with an initial diagnosis of ductal carcinoma in situ (DCIS) made by stereotactic vacuum-assisted biopsy (VAB).
Materials And Methods: A retrospective analysis was performed of 1,458 patients who underwent stereotactic VAB between January 1999 and December 2012 at Aichi Cancer Center Hospital. The rates of axillary node metastasis and the underestimation of invasive ductal carcinoma (IDC) were examined.
Results: Of the 1,458 patients who underwent stereotactic VAB, 199 had a preoperative diagnosis of DCIS and underwent surgery. In these patients, 20 % (39/199) were upstaged to IDC or at least microinvasion on final pathology. Axillary lymph node status was investigated in 81 % (161/199) of patients initially diagnosed with DCIS, and lymph node metastasis was found in 0.62 % (1/161) of patients. To assess the potential preoperative predictors of invasiveness, the value of DCIS histological grade on biopsy samples, the distribution of calcifications on mammograms, and the combination of these factors were studied. The underestimation rate was higher (30 %) for the combination of high DCIS histological grade and extensive calcification, although there was no significant association (p = 0.23).
Conclusion: The rate of lymph node metastasis was extremely low (0.62 %), even when invasive carcinoma was identified on excision in patients initially diagnosed with DCIS by stereotactic VAB. Because of the low prevalence of metastatic involvement, the cessation of SLNB is a reasonable consideration in patients initially diagnosed with DCIS by stereotactic VAB.
abstract_id: PUBMED:26589316
Surgical biopsy is still necessary for BI-RADS 4 calcifications found on digital mammography that are technically too faint for stereotactic core biopsy. The purpose of this study was to evaluate the outcome of faint BI-RADS 4 calcifications detected with digital mammography that were not amenable to stereotactic core biopsy due to suboptimal visualization. Following Institutional Review Board approval, a HIPAA compliant retrospective search identified 665 wire-localized surgical excisions of calcifications in 606 patients between 2007 and 2010. We included all patients who had surgical excision for initial diagnostic biopsy due to poor calcification visualization, whose current imaging was entirely digital and performed at our institution, and who did not have a diagnosis of breast cancer within the prior 2 years. The final study population consisted of 20 wire-localized surgical biopsies in 19 patients performed instead of stereotactic core biopsy due to poor visibility of faint calcifications. Of the 20 biopsies, 4 (20%; confidence interval, 2%-38%) were malignant, 5 (25%) showed atypia, and 11 (55%) were benign. Of the malignant cases, two were invasive ductal carcinoma (2 and 1.5 mm), one was intermediate grade DCIS and one was low-grade DCIS. Malignant calcifications ranged from 3 to 12 mm. The breast density was scattered in 6/19 (32%), heterogeneously dense in 11/19 (58%) and extremely dense in 2/19 (10%). Digital mammography-detected faint calcifications that were not amenable to stereotactic biopsy due to suboptimal visualization had a risk of malignancy of 20%. While infrequent, these calcifications should continue to be considered suspicious and surgical biopsy recommended.
abstract_id: PUBMED:33224800
Value of stereotactic 11-gauge vacuum-assisted breast biopsy in non-palpable suspicious calcifications: an eight-year single institution experience with 587 patients. Background: Vacuum-assisted breast biopsy (VABB) has been routinely recommended for stereotactic intervention in cases of isolated mammographically detected calcifications. Herein we aimed to evaluate and compare the diagnostic consistency and accuracy of calcified and noncalcified specimens obtained from the same sampling sites of mammography-visible calcifications. In addition, we presented the biopsy procedure and retrospectively evaluated the usefulness of VABB as well as the complications of this technique over an eight-year experience in our centre.
Methods: This single-institution observational cohort study included 587 patients referred for stereotactic 11-gauge VABB of 594 mammographically-detected calcifications between January 2010 and December 2018. The rate of histopathological underestimation, the false-negative rate, and the diagnostic consistency and accuracy between calcified and noncalcified VABB specimens were comprehensively evaluated based on the surveillance data and the final histopathological results of the surgical specimens.
Results: In total, 594 biopsy procedures were performed in 587 patients (mean age 46 years, range, 21-80 years). The average number of biopsy specimens was 14.7 (range, 9-21) per lesion. VABB pathological results revealed 471 (79.3%) benign, 39 (6.6%) high-risk, and 84 (14.1%) malignant cases. The diagnostic inconsistency between calcified and noncalcified specimens was 14.6% (105/123) for high-risk and malignant lesions. Furthermore, calcified specimens exhibited higher diagnostic accuracy of malignant lesion as compared with the noncalcified specimens (97.7% versus 82.6%, respectively). Underestimation rate for high-risk lesions and in situ carcinoma was 5.1% and 54.1%, respectively, along with a false negative rate of 6.25%. In addition, mild complications were reported with high patient tolerance.
Conclusions: Stereotactic 11G-VABB might be preferred for the investigation of non-palpable mammographically-detected calcifications in terms of accuracy and safety profile. The high prevalence of diagnostic discordance between specimens with and without calcifications indicates the greater value of calcified specimens in diagnosing high-risk and malignant calcifications.
abstract_id: PUBMED:9644688
Stereotactic breast biopsy: indications and results. Imaging-guided breast biopsy performed with large-core needles can accurately diagnose most breast pathologies, often allowing a diagnosis to be made more quickly and less expensively than with surgical biopsy. Major complications, such as hemorrhage and infection, are extremely rare, although post-biopsy ecchymosis and tenderness are not unusual. Because less tissue is removed, post-biopsy cosmetic deformity does not occur. Stereotactic biopsy is performed by triangulating the position of a breast lesion and by obtaining views angled equally off a central axis. This can be done using dedicated tables or add-on equipment. Stereotactic core biopsy has a reported accuracy of at least 90%. All lesions for which biopsy would ordinarily be recommended are amenable to stereotactic techniques, but those near the chest wall or in the axilla may be more difficult to biopsy with some equipment. Lesions characterized by calcifications are sometimes more difficult to sample. A biopsy diagnosis of ductal atypia, because of its histologic heterogeneity, requires surgical excision to exclude coexistent carcinoma, which has been found in half of women at subsequent surgical excision. A core biopsy diagnosis of ductal carcinoma in situ does not preclude the discovery of invasive carcinoma at surgery. In rare instances, the small tissue volume removed at stereotactic biopsy does not permit a final diagnosis to be made; this occurs most commonly when differentiating phyllodes tumor from fibroadenoma.
abstract_id: PUBMED:23901313
The clinical utility of adding a lateral approach to the conventional vertical approach for prone stereotactic vacuum-assisted breast biopsy. Objective: The purpose of this study is to evaluate the clinical utility of adding a lateral approach to the conventional vertical approach for prone stereotactic vacuum-assisted breast biopsies.
Materials And Methods: From April 2010 to May 2012, 130 vacuum-assisted stereotactic biopsies were attempted in 127 patients. A vertical approach was preferred, and a lateral approach was used if the vertical approach failed. The success rate of biopsies using only a vertical approach was compared with that using both vertical and lateral approaches, and breast thickness was measured and compared between the lateral and vertical approaches. In addition, pathology results were evaluated and the causes of the failed biopsies were analyzed.
Results: Of the 130 cases, 127 biopsies were performed and 3 biopsies failed. The success rate of the vertical approach was 83.8% (109/130); however, when the lateral approach was also used, the success rate increased to 97.7% (127/130) (p = 0.0004). The mean breast thickness was 2.7 ± 1 cm for the lateral approach and 4 ± 1.2 cm for the vertical approach (p < 0.0001). The histopathologic results in 76 (59.8%) of the biopsies were benign, 23 (18.1%) were high-risk lesions, and 28 (22.0%) were malignant. The causes of biopsy failure were thin breasts (n = 2) and undetected difficult lesion location (n = 1).
Conclusion: The addition of a lateral approach to the conventional vertical approach in prone stereotactic vacuum-assisted breast biopsy improved the success rate of stereotactic biopsy, especially in patients with thin breasts.
Answer: Yes, cancelled stereotactic biopsies of calcifications not seen using the stereotactic technique still require attention, either through surgical biopsy or follow-up. A study found that of 1,874 stereotactic biopsies, 76 (4%) were cancelled because of non-visualisation of calcifications. Of these, 55% had prompt histological confirmation, and 37% had follow-up mammography, with some of these subsequently undergoing biopsy. Of the patients with either biopsy or at least 2 years' follow-up, 17% had cancer. The study concluded that calcifications not visualised on the stereotactic unit are not definitively benign and require surgical biopsy or follow-up, as no patient or morphological features were predictive of cancer (PUBMED:24217642).
Additionally, another study emphasized that while faint BI-RADS 4 calcifications detected with digital mammography that are not amenable to stereotactic core biopsy due to suboptimal visualization are infrequent, they should still be considered suspicious, and surgical biopsy is recommended. In their study, 20% of such cases were malignant (PUBMED:26589316).
Therefore, even when calcifications are not visualized well enough for stereotactic biopsy, further investigation through surgical biopsy or close follow-up is necessary to ensure that potential malignancies are not missed. |
Instruction: Disaster preparedness: are retired physicians willing to help?
Abstracts:
abstract_id: PUBMED:20014545
Disaster preparedness: are retired physicians willing to help? Objective: To identify the proportion of retired physicians belonging to a state-wide professional association who would be willing to volunteer in the event of a disaster.
Methods: A paper-based, self-administered questionnaire sent to all physicians listed as retired members of the Washington State Medical Association (WSMA). The main questions included whether subjects would be willing to volunteer during a disaster, which tasks they would be most willing to perform, and whether they would be willing to participate in disaster preparedness training.
Results: A total of 2,443 surveys were mailed, 2,274 arrived at their destination (169 were undeliverable), and 1,447 were returned (response rate 64 percent). Fifty-four percent of respondents reported they would be willing to perform healthcare tasks during a disaster and 24 percent of respondents said they would possibly be willing to help. Tasks retired physicians were most willing to assist with included minor wound care (85 percent), vaccine administration (74 percent), and starting intravenous lines (71 percent). Fewer respondents indicated willingness to assist with community education (60 percent) or staffing ambulatory clinics (48 percent). Seventy-eight percent indicated they would attend disaster preparedness training.
Conclusions: Healthcare facilities must be prepared to cope with staffing shortages in the event of a disaster and volunteers such as retired physicians could fill crucial roles in a medical response plan. The majority of retired physicians surveyed would be willing to participate. They would be most willing to perform well-defined tasks directly related to patient care. Most would be willing to participate in preparatory training.
abstract_id: PUBMED:17279185
Enhancing public health response to respiratory epidemics: are family physicians ready and willing to help? Objective: To describe Ottawa family physicians' perceptions of their preparedness to respond to outbreaks of infectious diseases or other public health emergencies and to assess their capacity and willingness to assist in the event of such emergencies.
Design: Cross-sectional self-administered survey conducted between February 11 and March 10, 2004.
Setting: The City of Ottawa, Ont, and the Department of Family Medicine at the University of Ottawa.
Participants: Ottawa family physicians; respondents can be considered a self-selected sample.
Main Outcome Measures: Self-reported office preparedness and physicians' capacity and willingness to respond to public health emergencies.
Results: Response rate was 41%. Of 676 physicians contacted, 274 responded, and of those, 246 completed surveys. About 26% of respondents felt prepared for an outbreak of influenza not well covered by vaccine. About 18% felt prepared for serious respiratory epidemics, such as severe acute respiratory syndrome; about 50% felt unprepared. Most respondents (80%) thought they were not ready to respond to an earthquake. About 77% of physicians were willing to be contacted on an urgent basis in case of a public health emergency. Of these, 94% would assist in immunization clinics, 84% in antibiotic clinics, 58% in assessment centres, 52% in treatment centres, 41% with declaration of death, 26% with home care, and 23% with telephone counseling.
Conclusion: Family physicians appear to be unprepared for, but willing to address, serious public health emergencies. It is essential to set up effective partnerships between primary care and public health services to support family physicians' capacity to respond to emergencies. This type of study, along with the creation of a register of available services and of a virtual network for sharing information, is an initial step in assessing primary care response.
abstract_id: PUBMED:29719848
A Questionnaire Study on the Attitudes and Previous Experience of Croatian Family Physicians toward their Preparedness for Disaster Management. Objective: To explore family physicians' attitudes, previous experience and self-assessed preparedness to respond or to assist in mass casualty incidents in Croatia.
Methods: The cross-sectional survey was carried out during January 2017. Study participants were recruited through a Facebook group that brings together family physicians from Croatia. They were asked to complete the questionnaire, which was distributed via google.docs. Knowledge and attitudes toward disaster preparedness were evaluated by 18 questions. Analysis of variance, the Student t test, and the Kruskal-Wallis test were used for statistical analysis.
Results: Risk awareness of disasters was high among respondents (M = 4.89, SD=0.450). Only 16.4% of respondents had participated in the management of a disaster at the scene. The majority (73.8%) of physicians had not participated in any educational activity dealing with disasters over the past two years. Family physicians believed they were not well prepared to participate in national (M = 3.02, SD=0.856) and local community (M = 3.16, SD=1.119) emergency response systems for disaster. Male physicians reported higher preparedness to participate in the national emergency response system for disaster (p=0.012), to carry out accepted triage principles used in the disaster situation (p=0.003), and to recognize differences in health assessments indicating potential exposure to specific agents (p=0.001) compared with their female colleagues.
Conclusion: The Croatian primary healthcare system attracts many young physicians, who can be an important part of disaster and emergency management. However, their lack of experience despite high motivation indicates a need to include disaster medicine training in undergraduate studies and annual educational activities.
abstract_id: PUBMED:35152935
A Cross-sectional Study About Nurses' and Physicians' Experience of Disaster Management Preparedness Throughout COVID-19. Objective: The aim of this study was to assess and compare nurses' and physicians' knowledge of disaster management preparedness. An effective health-care system response to various disasters is paramount, and nurses and physicians must be prepared with appropriate competencies to be able to manage the disaster events.
Methods: This is a cross-sectional study. A total of 636 nurses and 257 physicians were recruited from 1 hospital in Saudi Arabia. Of them, 608 (95.6%) nurses and 228 (83.2%) physicians completed self-administered, online questionnaires. The questionnaire assessed participants' sociodemographic data, and disaster management knowledge.
Results: The findings revealed that participants had more knowledge regarding the disaster preparedness stage than mitigation and recovery stages. They also reported a need for advanced disaster training areas. A total of 10.1% of nurses' and 15.6% of physicians' overall knowledge is explained by their demographic and work-related characteristics.
Conclusions: Both nurses and physicians had some knowledge of the information and practices required for the disaster management process. It is proposed that hospital managers look for opportunities to effectively adopt national standards for managing disasters and include nurses and physicians in major incident-related learning activities, because experience has suggested a somewhat low overall perceived competence in managing disaster situations.
abstract_id: PUBMED:35832271
Physicians' Response and Preparedness of Terrorism-Related Disaster Events in Quetta City, Pakistan: A Qualitative Inquiry. Background: Besides catastrophes, infrastructural damage, and psychosocial distress, terrorism also imposes an unexpected burden on healthcare services. Notably, adequately prepared and responsive healthcare professionals are essential for the effective management of terrorism-related incidents. Accordingly, the present study aimed to evaluate physicians' preparedness and response toward terrorism-related disaster events in Quetta city, Pakistan.
Methods: A qualitative design was adopted. Physicians practicing at the Trauma Center of Sandeman Provincial Hospital (SPH), Quetta, were approached for the study. We conducted in-depth interviews; all interviews were audio-taped, transcribed verbatim, and analyzed for thematic contents by a standard content analysis framework.
Results: Fifteen physicians were interviewed. Saturation was achieved at the 13th interview; however, two further interviews were conducted to validate saturation. The thematic content analysis revealed five themes and 11 subthemes. All physicians had experienced, responded to, and managed terrorism-related disaster events. They were prepared professionally and psychologically for dealing with a terrorism-related disaster. Physicians identified a lack of disaster-related curricula and training, the absence of a standardized protocol, recurrence of disasters, and hostile behavior of victims' attendants during an emergency as critical barriers to effective terrorism-related disaster management. Among limitations, all respondents mentioned workspace and resources as foremost constraints while managing a terrorism-related disaster event.
Conclusion: Although physicians had the understanding and the required competencies to mitigate a terrorism-related disaster, the lack of workspace and resources was identified as a potential barrier to effective disaster management. Based on the results, we propose reconsideration of the medical curriculum to integrate terrorism-related disaster management, together with collaboration and communication among various stakeholders, to manage terrorism-related disaster events competently.
abstract_id: PUBMED:27221114
What Kinds of Skills Are Necessary for Physicians Involved in International Disaster Response? Introduction: Physicians are key disaster responders in foreign medical teams (FMTs) that provide medical relief to affected people. However, few studies have examined the skills required for physicians in real, international, disaster-response situations. Problem: The objectives of this study were to survey the primary skills required for physicians from a Japanese FMT and to examine whether there were differences in the frequencies of performed skills according to demographic characteristics, previous experience, and dispatch situations to guide future training and certification programs.
Methods: This cross-sectional survey used a self-administered questionnaire given to 64 physicians with international disaster-response site experience. The questionnaire assessed demographic characteristics (sex, age, years of experience as a physician, affiliation, and specialty), previous experience (domestic disaster-relief experience, international disaster-relief experience, or disaster medicine training experience), and dispatch situation (length of dispatch, post-disaster phase, disaster type, and place of dispatch). In addition, the frequencies of 42 performed skills were assessed via a five-point Likert scale. Descriptive statistics were used to assess the participants' characteristics and total scores as the frequencies of performed skills. Mean scores for surgical skills, health care-related skills, public health skills, and management and coordination skills were compared according to the demographic characteristics, previous experience, and dispatch situations.
Results: Fifty-two valid questionnaires (81.3% response rate) were collected. There was a trend toward higher skill scores among those who had more previous international disaster-relief experience (P=.03). The more disaster medicine training experience the participants had, the higher their skill score was (P<.001). Physicians reported involvement in 23 disaster-relief response skills, nine of which were performed frequently. There was a trend toward higher scores for surgical skills, health care-related skills, and management and coordination skills related to more disaster medicine training experience.
Conclusion: This study's findings can be used as evidence to boost the frequency of physicians' performed skills by promoting previous experience with international disaster relief and disaster medicine training. Additionally, these results may contribute to enhancing the quality of medical practice in the international disaster relief and disaster training curricula. Noguchi N , Inoue S , Shimanoe C , Shibayama K , Matsunaga H , Tanaka S , Ishibashi A , Shinchi K . What kinds of skills are necessary for physicians involved in international disaster response? Prehosp Disaster Med. 2016;31(4):397-406.
abstract_id: PUBMED:33909527
Role of Emergency Medical Services in Disaster Response - National Association of EMS Physicians Position Statement. This is the official position statement of the National Association of EMS Physicians on the role of emergency medical services (EMS) in disaster response.
abstract_id: PUBMED:11218757
Survey on postmortem examination of police surgeons and emergency physicians: possibility of physicians' assistance in mass disasters. We conducted a questionnaire survey of police surgeons and emergency physicians, inquiring about their experience of medicolegal investigation of death and their willingness to join a death investigation team in a major disaster. The questionnaire also asked about their knowledge about and interest in the forensic specialist system established by the Japanese Society of Legal Medicine. Police surgeons were generally willing to join an investigation team only if a disaster occurred in or close to their hometown, because they could not afford more than several days away from patient care. Although many of the emergency physicians were willing to join a death investigation team, they had difficulty in doing so without permission or orders from their employer or the authorities concerned. The survey found that the percentage of aged police surgeons was increasing among those surveyed. This fact, in combination with the current emphasis of postgraduate education on specialty training, threatens to cause a substantial lack of physicians available for medicolegal investigation of death. Therefore, it is urgently necessary to establish a system of training resident and emergency physicians in medicolegal investigation of death. In addition to providing postgraduate training in medicolegal investigation of death to prospective trainees who are emergency physicians at major hospitals in potential disaster-stricken areas, medical schools should incorporate forensic medicine in postgraduate training programs so that these physicians can actively perform death investigation on disaster victims dying before or after arrival at their hospitals. Furthermore, the forensic community should make every effort to increase the number of autopsies in each department of forensic medicine and to expand the medical examiner system, currently in practice only in the Metropolis of Tokyo and in Yokohama, Nagoya, Osaka, and Kobe Cities, throughout Japan, in order to incorporate forensic training in the postgraduate clinical training programs that will become compulsory in 2004.
abstract_id: PUBMED:38197147
Footprint of Emergency Medicine Physicians in Disaster Medicine Publications: A Bibliometric Analysis. Introduction: Investigating the developments in the ever-growing field of disaster medicine and revealing the scientific trends will make an important contribution to researchers in related fields. This study aims to identify the contributions of emergency medicine physicians (EMPs) and trends in disaster medicine publications.
Methods: The expressions "disaster medicine" or "disaster*" and "medicine*" were searched in the Web of Science (WoS) database. Research and review papers produced by EMPs from 2001 through 2021 were included in the study. Basic descriptive information was assessed such as the number of publications, authors, citations, most active authors, institutions, countries, and journals. In addition, conceptual, intellectual, and social structures were analyzed.
Results: The study included a total of 346 papers written by 1,500 authors. The mean citation rate per publication was 13.2. Prehospital and Disaster Medicine, Disaster Medicine and Public Health Preparedness, and Academic Emergency Medicine were the journals with the highest number of publications and the highest number of citations. The most common keywords used by the authors were "disaster medicine," "emergency medicine," and "disaster/disasters." According to the distribution of the corresponding authors by country, the United States (n = 175), Japan (n = 23), Italy (n = 20), Australia (n = 17), and Canada (n = 17) had the highest number of publications. The institutions that produced the most publications were John Hopkins University (n = 37), Brigham and Women's Hospital (n = 27), George Washington University (n = 25), University Piemonte Orientale (n = 24), and Brown University (n = 22).
Conclusion: Increasingly, EMPs have contributed to disaster medicine publications over the years. This study can be used as a guide for EMPs and other researchers who want to contribute to the disaster medicine literature.
abstract_id: PUBMED:37484133
Exploring the experiences and learning of young primary care physicians in disaster-affected areas: A qualitative study on the Great East Japan Earthquake. Background: Primary care physicians (PCPs) play a critical role in disaster medicine. However, it is unclear how PCPs who provide chronic support to disaster-affected areas learn from their experiences.
Methods: This qualitative study investigates the learnings of young PCPs who provided medical care during the chronic phase of the Great East Japan Earthquake disaster.
Results: Thematic analysis of interviews with five physicians revealed the challenges faced by them and their learnings in providing medical support to disaster-affected areas.
Conclusions: They not only learned medical skills but also humanistic aspects such as empathizing with the survivors' loss.
Answer: Yes, retired physicians are willing to help in disaster preparedness. A study conducted by sending a paper-based, self-administered questionnaire to retired members of the Washington State Medical Association (WSMA) found that 54% of respondents reported they would be willing to perform healthcare tasks during a disaster, and an additional 24% said they would possibly be willing to help. The tasks retired physicians were most willing to assist with included minor wound care (85%), vaccine administration (74%), and starting intravenous lines (71%). Furthermore, 78% indicated they would attend disaster preparedness training, suggesting that healthcare facilities could consider retired physicians as potential volunteers to fill crucial roles in a medical response plan during disasters (PUBMED:20014545). |
Instruction: A Meta-analysis of Intraoperative Ventilation Strategies to Prevent Pulmonary Complications: Is Low Tidal Volume Alone Sufficient to Protect Healthy Lungs?
Abstracts:
abstract_id: PUBMED:26720429
A Meta-analysis of Intraoperative Ventilation Strategies to Prevent Pulmonary Complications: Is Low Tidal Volume Alone Sufficient to Protect Healthy Lungs? Background: The clinical benefits of intraoperative low tidal volume (LTV) mechanical ventilation with concomitant positive end-expiratory pressure (PEEP) and intermittent recruitment maneuvers-termed "protective lung ventilation" (PLV)-have not been investigated systematically in otherwise healthy patients undergoing general anesthesia.
Methods: Our group performed a meta-analysis of 16 studies (n = 1054) comparing LTV (n = 521) with conventional lung ventilation (n = 533) for associated postoperative incidence of atelectasis, lung infection, acute lung injury (ALI), and length of hospital stay. A secondary analysis of 3 studies comparing PLV (n = 248) with conventional lung ventilation (n = 247) was performed.
Results: Although intraoperative LTV ventilation was associated with a decreased incidence of postoperative lung infection (odds ratio [OR] = 0.33; 95% confidence interval [CI], 0.16-0.68; P = 0.003) compared with a conventional strategy, no difference was noted between groups in incidence of postoperative ALI (OR = 0.38; 95% CI, 0.10-1.52; P = 0.17) or atelectasis (OR = 0.86; 95% CI, 0.26-2.81; P = 0.80). Analysis of trials involving protective ventilation (LTV + PEEP + recruitment maneuvers) showed a statistically significant reduction in incidence of postoperative lung infection (OR = 0.21; 95% CI, 0.09-0.50; P = 0.0003), atelectasis (OR = 0.36; 95% CI, 0.20-0.64; P = 0.006), and ALI (OR = 0.15; 95% CI, 0.04-0.61; P = 0.008) and length of hospital stay (Mean Difference = -2.08; 95% CI, -3.95 to -0.21; P = 0.03) compared with conventional ventilation.
Conclusions: Intraoperative LTV ventilation in conjunction with PEEP and intermittent recruitment maneuvers is associated with significantly improved clinical pulmonary outcomes and a reduction in length of hospital stay in otherwise healthy patients undergoing general surgery. Providers should consider applying all three elements as a comprehensive protective ventilation strategy.
abstract_id: PUBMED:26643098
Intraoperative ventilation strategies to prevent postoperative pulmonary complications: Systematic review, meta-analysis, and trial sequential analysis. For many years, mechanical ventilation with high tidal volumes (V(T)) was common practice in operating theaters because this strategy recruits collapsed lung tissue, improves ventilation-perfusion mismatch, and thus decreases the need for high oxygen fractions. Positive end-expiratory pressure (PEEP) was seldom used because it could cause cardiac compromise. Increasing advances in the understanding of the mechanisms of ventilator-induced lung injury from animal studies and randomized controlled trials in patients with uninjured lungs in intensive care unit and operation room have pushed anesthesiologists to consider lung-protective strategies during intraoperative ventilation. These strategies at least include the use of low V(T), and perhaps also the use of PEEP, which when compared to high V(T) with low PEEP may prevent the occurrence of postoperative pulmonary complications (PPCs). Such protective effects, however, are likely ascribed to low V(T) rather than to PEEP. In fact, at least in nonobese patients undergoing open abdominal surgery, high PEEP does not protect against PPCs, and it can impair the hemodynamics. Further studies shall determine whether a strategy consisting of low V(T) combined with PEEP and recruitment maneuvers reduces PPCs in obese patients and other types of surgery (e.g., laparoscopic and thoracic), compared to low V(T) with low PEEP. Furthermore, the role of driving pressure for titrating ventilation settings in patients with uninjured lungs shall be investigated.
abstract_id: PUBMED:26103140
Mechanical ventilation strategies for the surgical patient. Purpose Of Review: To summarize clinical evidence for intraoperative ventilation settings, which could protect against postoperative pulmonary complications (PPCs) in surgical patients with uninjured lungs.
Recent Findings: There is convincing evidence for protection against PPCs by low tidal volumes: benefit was found in several randomized controlled trials, and was recently confirmed in meta-analyses. Evidence for protection against PPCs by high levels of positive end-expiratory pressure (PEEP) is less definite. Although benefit was found in several randomized controlled trials, most of them compared a bundle of low tidal volume and high level of PEEP with conventional ventilation; one recent large randomized controlled trial that compared high with low levels of PEEP showed that ventilation with high level of PEEP did not protect against PPCs but caused intraoperative complications instead. A recent individual patient data meta-analysis of trials comparing bundles of low tidal volume and high levels of PEEP to conventional intraoperative ventilation suggested that protection against PPCs comes from tidal volume reductions, and not from increasing levels of PEEP.
Summary: The understanding on the protective roles of tidal volume and PEEP settings against PPCs has rapidly expanded. During intraoperative ventilation, low tidal volumes are protective, the protective role of high levels of PEEP is uncertain.
abstract_id: PUBMED:33518385
Tidal volume during 1-lung ventilation: A systematic review and meta-analysis. Background: The selection of tidal volumes for 1-lung ventilation remains unclear, because there exists a trade-off between oxygenation and risk of lung injury. We conducted a systematic review and meta-analysis to determine how oxygenation, compliance, and clinical outcomes are affected by tidal volume during 1-lung ventilation.
Methods: A systematic search of MEDLINE and EMBASE was performed. A systematic review and random-effects meta-analysis was conducted. Pooled mean difference estimated arterial oxygen tension, compliance, and length of stay; pooled odds ratio was calculated for composite postoperative pulmonary complications. Risk of bias was determined using the Cochrane risk of bias and Newcastle-Ottawa tools.
Results: Eighteen studies were identified, comprising 3693 total patients. Low tidal volumes (5.6 [±0.9] mL/kg) were not associated with significant differences in partial pressure of oxygen (-15.64 [-88.53 to 57.26] mm Hg; P = .67), arterial oxygen tension to fractional intake of oxygen ratio (14.71 [-7.83 to 37.24]; P = .20), or compliance (2.03 [-5.22 to 9.27] mL/cmH2O; P = .58) versus conventional tidal volume ventilation (8.1 [±3.1] mL/kg). Low versus conventional tidal volume ventilation had no significant impact on hospital length of stay (-0.42 [-1.60 to 0.77] days; P = .49). Low tidal volumes are associated with significantly decreased odds of pulmonary complications (pooled odds ratio, 0.40 [0.29-0.57]; P < .0001).
Conclusions: Low tidal volumes during 1-lung ventilation do not worsen oxygenation or compliance. A low tidal volume ventilation strategy during 1-lung ventilation was associated with a significant reduction in postoperative pulmonary complications.
abstract_id: PUBMED:34671638
Effect of Intraoperative Ventilation Strategies on Postoperative Pulmonary Complications: A Meta-Analysis. Introduction: The role of intraoperative ventilation strategies in subjects undergoing surgery is still contested. This meta-analysis study was performed to assess the relationship between the low tidal volumes strategy and conventional mechanical ventilation in subjects undergoing surgery. Methods: A systematic literature search up to December 2020 was performed in OVID, Embase, Cochrane Library, PubMed, and Google scholar, and 28 studies including 11,846 subjects undergoing surgery at baseline and reporting a total of 2,638 receiving the low tidal volumes strategy and 3,632 receiving conventional mechanical ventilation, were found recording relationships between low tidal volumes strategy and conventional mechanical ventilation in subjects undergoing surgery. Odds ratio (OR) or mean difference (MD) with 95% confidence intervals (CIs) were calculated between the low tidal volumes strategy vs. conventional mechanical ventilation using dichotomous and continuous methods with a random or fixed-effect model. Results: The low tidal volumes strategy during surgery was significantly related to a lower rate of postoperative pulmonary complications (OR, 0.60; 95% CI, 0.44-0.83, p < 0.001), aspiration pneumonitis (OR, 0.63; 95% CI, 0.46-0.86, p < 0.001), and pleural effusion (OR, 0.72; 95% CI, 0.56-0.92, p < 0.001) compared to conventional mechanical ventilation. However, the low tidal volumes strategy during surgery was not significantly correlated with length of hospital stay (MD, -0.48; 95% CI, -0.99-0.02, p = 0.06), short-term mortality (OR, 0.88; 95% CI, 0.70-1.10, p = 0.25), atelectasis (OR, 0.76; 95% CI, 0.57-1.01, p = 0.06), acute respiratory distress (OR, 1.06; 95% CI, 0.67-1.66, p = 0.81), pneumothorax (OR, 1.37; 95% CI, 0.88-2.15, p = 0.17), pulmonary edema (OR, 0.70; 95% CI, 0.38-1.26, p = 0.23), and pulmonary embolism (OR, 0.65; 95% CI, 0.26-1.60, p = 0.35) compared to conventional mechanical ventilation. Conclusions: The low tidal volumes strategy during surgery may have an independent relationship with lower postoperative pulmonary complications, aspiration pneumonitis, and pleural effusion compared to conventional mechanical ventilation. This relationship encouraged us to recommend the low tidal volumes strategy during surgery to avoid any possible complications.
abstract_id: PUBMED:33315635
Mechanical ventilation of the healthy lungs: lessons learned from recent trials. Purpose Of Review: Although there is clear evidence for benefit of protective ventilation settings [including low tidal volume and higher positive end-expiratory pressure (PEEP)] in patients with acute respiratory distress syndrome (ARDS), it is less clear what the optimal mechanical ventilation settings are for patients with healthy lungs.
Recent Findings: Use of low tidal volume during operative ventilation decreases postoperative pulmonary complications (PPC). In the critically ill patients with healthy lungs, use of low tidal volume is as effective as intermediate tidal volume. Use of higher PEEP during operative ventilation does not decrease PPCs, whereas hypotension occurred more often compared with use of lower PEEP. In the critically ill patients with healthy lungs, there are conflicting data regarding the use of a higher PEEP, which may depend on recruitability of lung parts. There are limited data suggesting that higher driving pressures because of higher PEEP contribute to PPCs. Lastly, use of hyperoxia does not consistently decrease postoperative infections, whereas it seems to increase PPCs compared with conservative oxygen strategies.
Summary: In patients with healthy lungs, data indicate that low tidal volume but not higher PEEP is beneficial. Thereby, ventilation strategies differ from those in ARDS patients.
abstract_id: PUBMED:27536534
Intraoperative mechanical ventilation strategies in patients undergoing one-lung ventilation: a meta-analysis. Background: Postoperative pulmonary complications (PPCs), which are not uncommon in one-lung ventilation, are among the main causes of postoperative death after lung surgery. Intra-operative ventilation strategies can influence the incidence of PPCs. High tidal volume (V T) and increased airway pressure may lead to lung injury, while pressure-controlled ventilation and lung-protective strategies with low V T may have protective effects against lung injury. In this meta-analysis, we aim to investigate the effects of different ventilation strategies, including pressure-controlled ventilation (PCV), volume-controlled ventilation (VCV), protective ventilation (PV) and conventional ventilation (CV), on PPCs in patients undergoing one-lung ventilation. We hypothesize that both PV with low V T and PCV have protective effects against PPCs in one-lung ventilation.
Methods: A systematic search (PubMed, EMBASE, the Cochrane Library, and Ovid MEDLINE; in May 2015) was performed for randomized trials comparing PCV with VCV or comparing PV with CV in one-lung ventilation. Methodological quality was evaluated using the Cochrane tool for risk. The primary outcome was the incidence of PPCs. The secondary outcomes included the length of hospital stay, intraoperative plateau airway pressure (Pplateau), oxygen index (PaO2/FiO2) and mean arterial pressure (MAP).
Results: In this meta-analysis, 11 studies (436 patients) comparing PCV with VCV and 11 studies (657 patients) comparing PV with CV were included. Compared to CV, PV decreased the incidence of PPCs (OR 0.29; 95 % CI 0.15-0.57; P < 0.01) and intraoperative Pplateau (MD -3.75; 95 % CI -5.74 to -1.76; P < 0.01) but had no significant influence on the length of hospital stay or MAP. Compared to VCV, PCV decreased intraoperative Pplateau (MD -1.46; 95 % CI -2.54 to -0.34; P = 0.01) but had no significant influence on PPCs, PaO2/FiO2 or MAP.
Conclusions: PV with low VT was associated with a reduced incidence of PPCs compared to CV. However, PCV and VCV had similar effects on the incidence of PPCs.
abstract_id: PUBMED:32323398
Association of tidal volume during mechanical ventilation with postoperative pulmonary complications in pediatric patients undergoing major scoliosis surgery. Background: The use of lung-protective ventilation strategies with low tidal volumes may reduce the occurrence of postoperative pulmonary complications. However, evidence of the association of intraoperative tidal volume settings with pulmonary complications in pediatric patients undergoing major spinal surgery is insufficient.
Aims: This study examined whether postoperative pulmonary complications were related to tidal volume in this population and, if so, what factors affected the association.
Methods: In this retrospective cohort study, data from pediatric patients (<18 years old) who underwent posterior spinal fusion between 2016 and 2018 were collected from the hospital electronic medical record. The associations between tidal volume and the clinical outcomes were examined by multivariate logistic regression and stratified analysis.
Results: Postoperative pulmonary complications occurred in 41 (16.1%) of 254 patients who met the inclusion criteria. For the entire cohort, tidal volume was associated with an elevated risk of pulmonary complications (adjusted odds ratio [OR] per 1 mL/kg ideal body weight [IBW] increase in tidal volume, 1.28; 95% confidence interval [CI], 1.01-1.63, P = .038). In subgroup analysis, tidal volume was associated with an increased risk of pulmonary complications in patients older than 3 years (adjusted OR per 1 mL/kg IBW increase in tidal volume, 1.43, 95% CI: 1.12-1.84), but not in patients aged 3 years or younger (adjusted OR, 0.78, 95% CI: 0.46-1.35), indicating a significant age interaction (P = .035).
Conclusion: In pediatric patients undergoing major spinal surgery, high tidal volume was associated with an elevated risk of postoperative pulmonary complications. However, the effect of tidal volume on pulmonary outcomes in the young subgroup (≤3 years) differed from that in the old (>3 years). Such information may help to optimize ventilation strategy for children of different ages.
abstract_id: PUBMED:27392439
Does intraoperative lung-protective ventilation reduce postoperative pulmonary complications? Background: Recent studies show that intraoperative protective ventilation is able to reduce postoperative pulmonary complications (PPC).
Objectives: This article provides an overview of the definition and ways to predict PPC. We present different factors that lead to ventilator-induced lung injury and explain the concepts of stress and strain as well as driving pressure. Different strategies of mechanical ventilation to avoid PPC are discussed in light of clinical evidence.
Materials And Methods: The Medline database was used to selectively search for randomized controlled trials dealing with intraoperative mechanical ventilation and outcomes.
Results: Low tidal volumes (VT) and high levels of positive end-expiratory pressure (PEEP), combined with recruitment maneuvers, are able to prevent PPC. Non-obese patients undergoing open abdominal surgery show better lung function with the use of higher PEEP levels and recruitment maneuvers, however such strategy can lead to hemodynamic impairment, while not reducing the incidence of PPC, hospital length of stay and mortality. An increase in the level of PEEP that results in an increase in driving pressure is associated with a greater risk of PPC.
Conclusions: The use of intraoperative VT ranging from 6 to 8 ml/kg based on ideal body weight is strongly recommended. Currently, a recommendation regarding the level of PEEP during surgery is not possible. However, a PEEP increase that leads to a rise in driving pressure should be avoided.
abstract_id: PUBMED:37305134
Effect of protective lung ventilation on pulmonary complications after laparoscopic surgery: a meta-analysis of randomized controlled trials. Introduction: Compared with traditional open surgery, laparoscopic surgery is widely used in surgery, with the advantages of being minimally invasive, having good cosmetic effects, and having short hospital stays, but in laparoscopic surgery, pneumoperitoneum and the Trendelenburg position can cause complications, such as atelectasis. Recently, several studies have shown that protective lung ventilation strategies are protective for abdominal surgery, reducing the incidence of postoperative pulmonary complications (PPCs). Ventilator-associated lung injury can be reduced by protective lung ventilation, which includes microtidal volume (4-8 mL/kg) ventilation and positive end-expiratory pressure (PEEP). Therefore, we used randomized, controlled trials (RCTs) to assess the results on this topic, and RCTs were used for meta-analysis to further evaluate the effect of protective lung ventilation on pulmonary complications in patients undergoing laparoscopic surgery.
Methods: In this meta-analysis, we searched the relevant literature contained in six major databases-CNKI, CBM, Wanfang Medical, Cochrane, PubMed, and Web of Science-from their inception to October 15, 2022. After screening the eligible literature, a randomized, controlled method was used to compare the occurrence of postoperative pulmonary complications when a protective lung ventilation strategy and conventional lung ventilation strategy were applied to laparoscopic surgery. After statistical analysis, the results were verified to be statistically significant.
Results: Twenty-three trials were included. Patients receiving protective lung ventilation were 1.17 times less likely to develop pulmonary complications after surgery than those receiving conventional lung ventilation (hazard ratio [RR] 0.18, 95% confidence interval [CI] 1.13-1.22; I2 = 0%). When tested for bias (P = 0.36), the result was statistically significant. Patients with protective lung ventilation were less likely to develop pulmonary complications after laparoscopic surgery.
Conclusion: Compared with conventional mechanical ventilation, protective lung ventilation reduces the incidence of postoperative pulmonary complications. For patients undergoing laparoscopic surgery, we suggest the use of protective lung ventilation, which is effective in reducing the incidence of lung injury and pulmonary infection. Implementation of a low tidal volume plus moderate positive end-expiratory pressure strategy reduces the risk of postoperative pulmonary complications.
Answer: The meta-analysis conducted by the group in PUBMED:26720429 found that intraoperative low tidal volume (LTV) ventilation alone was associated with a decreased incidence of postoperative lung infection compared with conventional strategies. However, there was no significant difference in the incidence of postoperative acute lung injury (ALI) or atelectasis between the groups. When protective lung ventilation (PLV), which includes LTV in conjunction with positive end-expiratory pressure (PEEP) and intermittent recruitment maneuvers, was analyzed, there was a statistically significant reduction in the incidence of postoperative lung infection, atelectasis, ALI, and length of hospital stay compared with conventional ventilation. This suggests that while LTV alone can reduce the incidence of lung infection, a comprehensive protective ventilation strategy that includes LTV, PEEP, and recruitment maneuvers is more effective in improving clinical pulmonary outcomes and reducing hospital stay in otherwise healthy patients undergoing general surgery.
Other studies and meta-analyses have also investigated the role of intraoperative ventilation strategies in preventing postoperative pulmonary complications (PPCs). For instance, PUBMED:26643098 highlights that low tidal volume strategies are protective against PPCs, but the protective effects are likely ascribed to low tidal volume rather than to PEEP. PUBMED:26103140 confirms that low tidal volumes are protective against PPCs, but the role of high levels of PEEP is uncertain and may not contribute to protection against PPCs. PUBMED:33518385 and PUBMED:34671638 also support the use of low tidal volumes during surgery, associating it with a significant reduction in postoperative pulmonary complications without worsening oxygenation or compliance.
In summary, the evidence suggests that low tidal volume alone can provide some protection against pulmonary complications, particularly lung infections, but a more comprehensive protective lung ventilation strategy that includes LTV, PEEP, and recruitment maneuvers is more effective in preventing a broader range of pulmonary complications and reducing the length of hospital stay in healthy patients undergoing surgery (PUBMED:26720429). |
Instruction: Does the laparoscopic approach significantly affect cardiac functions in laparoscopic surgery?
Abstracts:
abstract_id: PUBMED:31850246
Laparoscopic Splenectomy: Postero-Lateral Approach. In the paediatric population, laparoscopic splenectomy has been preferred to open surgery in recent years. Owing to improvements in technique and devices, the indications for laparoscopic splenectomy have expanded, even though there is still a variety of conditions in which the execution of this technique is arduous. During the preoperative consultation it is necessary to carefully evaluate the presence of cholecystic lithiasis, the haemoglobin level in patients with SCA, the platelet count in children with ITP, and the vaccination status. An anterior and a lateral or hanging spleen approach are primarily used for laparoscopic splenectomy. In the last four years, at the Section of Pediatric Surgery of the Department of Pediatrics, Obstetrics and Medicine of the Reproduction of Siena University, 8 cases of splenomegaly have been treated, 7 by lateral videolaparoscopic splenectomy (5 males and 2 females, with a mean age of 10.5 years) and 1 by an anterior approach (10 years). The advantages shown by these techniques allow laparoscopic splenectomy to be considered a valid alternative to open surgery. In children's laparoscopic splenectomy, the rate of complications is considerably low, and the major problem is intraoperative hemorrhage. With increasing surgical experience, the minimally invasive approach appears to be superior in terms of faster postoperative recovery, shorter hospital stay, and perioperative and postoperative advantages. Therefore, the laparoscopic technique may soon be accepted as the standard method in patients requiring splenectomy.
abstract_id: PUBMED:24082738
Suprapubic approach for laparoscopic appendectomy. Objective: To evaluate the results of laparoscopic appendectomy using two suprapubic port incisions placed below the pubic hair line.
Design: Prospective hospital based descriptive study.
Settings: Department of surgery of a tertiary care teaching hospital located in Rohtas district of Bihar. The study was carried out over a period of 11 months, from November 2011 to September 2012.
Participants: Seventy five patients with a diagnosis of acute appendicitis.
Materials And Methods: All patients underwent laparoscopic appendectomy with three ports (one 10-mm umbilical for telescope and two 5 mm suprapubic as working ports) were included. Operative time, conversion, complications, hospital stay and cosmetic results were analyzed.
Results: The total number of patients was 75, comprising 46 (61.33%) females and 29 (38.67%) males; the mean age (±standard deviation, SD) at the time of diagnosis was 30.32 (±8.86) years. Mean operative time was 27.2 (±5.85) min. One (1.33%) patient required conversion to open appendectomy. No patient developed wound infection or any other complication. Mean hospital stay was 22.34 (±12.18) h. Almost all patients were satisfied with their cosmetic results.
Conclusion: A laparoscopic approach using two suprapubic ports yields better cosmetic results and also improves the surgeon's working position during laparoscopic appendectomy. Although this study showed better cosmetic results and a better working position for the surgeon, further comparative studies and randomized controlled trials are needed to confirm our findings.
abstract_id: PUBMED:11433903
Does the laparoscopic approach significantly affect cardiac functions in laparoscopic surgery? Pilot study in non-obese and morbidly obese patients. Background: Laparoscopy in bariatric surgery represents a modern method generally associated with lower morbidity and mortality, compared with the traditional surgical approach. However, in patients with impaired cardiovascular function, the laparoscopic approach is limited by the potential adverse hemodynamic impact. We assessed the influence of some laparoscopic procedures on selected cardiac functions in significantly obese patients and in subjects with normal body weight, using transesophageal echocardiography (TEE).
Patients And Methods: Six subjects with normal body weight (mean BMI 25.3 +/- 3.6 kg/m2), and six patients undergoing laparoscopic gastric banding for morbid obesity (mean BMI 45.8 +/- 7.5 kg/m2) were studied. Heart rate (HR), blood pressure (BP), ejection fraction, cardiac output (CO) and transmitral flow were measured. Parameters were recorded at baseline before the operation (BL), after installation of capnoperitoneum (CP), and after positioning the patient for surgery (SP).
Results: Compared to BL, CP and SP were characterized by an increase in HR and BP in both groups of patients. As ejection fraction did not change significantly, the HR changes were accompanied by an increase in CO: (BL 5.8 +/- 2.2 l/min, CP 6.5 +/- 2.6 l/min, SP 6.7 +/- 2.7 l/min, p < 0.05 BL vs CP and SP). Transmitral flow parameters did not change significantly. Hemodynamic changes in subgroups with normal body build and in the obese patients were comparable. There was an increase in CO and pressure-rate product in obese individuals.
Conclusions: Our results suggest that the hemodynamic response to laparoscopic surgery is characterized by an increase in CO (due to increased HR) and BP. In subjects without a manifest cardiovascular disease, neither systolic nor diastolic performance was significantly affected by the introduction of capnoperitoneum and positioning of the patient for surgery. Similar results were observed in obese and non-obese subjects. Phase II of this on-going study is focusing on impact and safety of laparoscopy in obese patients with known cardiovascular disease.
abstract_id: PUBMED:26865897
Laparoscopic approach in the treatment of large epiphrenic esophageal diverticulum. Epiphrenic diverticulum of the lower third of the esophagus is a relatively rare disorder. We present the case of a large, 7.5 cm diameter esophageal epiphrenic diverticulum treated by the laparoscopic approach. Surgery was indicated by the severity of the patient's symptoms and the size of the diverticulum. A laparoscopic transhiatal diverticulectomy with a myotomy and Dor fundoplication was carried out. The overall operative time was 180 min. The patient tolerated the surgery well and was discharged from hospital 4 days after the surgery. From the 10th postoperative day the patient resumed a regular diet. Four weeks after the operation the patient had no complaints, symptoms of dysphagia or vomiting. The laparoscopic approach in the treatment of a large, 7.5 cm epiphrenic diverticulum of the esophagus is feasible, safe and well tolerated by the patient.
abstract_id: PUBMED:34568414
Case Report: 21 Cases of Umbilical Hernia Repair Using a Laparoscopic Cephalic Approach Plus a Posterior Sheath and Extraperitoneal Approach. Purpose: In this study, a novel surgical technique was developed for umbilical hernias, in which a laparoscopic cephalic approach plus a posterior sheath and an extraperitoneal approach was employed. The aim of this study was to determine the results of this new technique. Methods: From 2019 to 2020, 21 patients (81.8% men) with an umbilical hernia underwent a laparoscopic cephalic approach plus a posterior sheath and extraperitoneal approach, performed by two surgeons specializing in abdominal wall surgery, in two academic hospitals. Intraoperative and postoperative complications, operation time, blood loss, and hernia recurrence were assessed. Results: Twenty-one cases of umbilical hernia were successfully completed. The size of the hernia ring was 1.5-3 cm2, with an average of 2.39 ± 0.47 cm2. The operation time was 120-240 min (average, 177.3 ± 42.15 min), and the blood loss volume was 30-40 ml (average, 33.73 ± 3.55 ml). The mean follow-up period was 6 months, and there were no short-term complications and no cases of recurrence. Conclusion: A laparoscopic cephalic approach plus a posterior sheath and extraperitoneal approach is a safe alternative for the repair of an umbilical hernia. The intraoperative complication rate was low.
abstract_id: PUBMED:21170236
Laparoscopic adrenalectomy: Gaining experience by graded approach. Introduction: Laparoscopic adrenalectomy (LA) has become a gold standard in management of most of the adrenal disorders. Though report on the first laparoscopic adrenalectomy dates back to 1992, there is no series of LA reported from India. Starting Feb 2001, a graded approach to LA was undertaken in our center. Till March 2006, a total of 34 laparoscopic adrenalectomies were performed with success.
Materials And Methods: The endocrinology department primarily evaluated all patients. Patients were divided into Group A - unilateral LA and Group B - bilateral LA (BLA). The indications in Group A were pheochromocytoma (n=7), Conn's syndrome (n=3), Cushing's adenoma (n=2), incidentaloma (n=2); and in Group B, Cushing's disease (CD) following failed trans-sphenoid pituitary surgery (n = 8); ectopic ACTH- producing Cushing's syndrome (n=1) and congenital adrenal hyperplasia (CAH) (n=1). The lateral transabdominal route was used.
Results: The age group varied from 12-54 years, with mean age of 28.21 years. Average duration of surgery in Group A was 166.43 min (40-270 min) and 190 min (150-310 min) in Group B. Average blood loss was 136.93 cc (20-400 cc) in Group A and 92.5 cc (40-260 cc) in Group B. There was one conversion in each group. Mean duration of surgical stay was 1.8 days (1-3 days) in Group A and 2.6 days (2-4 days) in Group B. All the patients in both groups were cured of their illness. Three patients in Group B developed Nelson's syndrome. The mean follow up was of 24.16 months (4-61 months).
Conclusion: LA though technically demanding, is feasible and safe. Graded approach to LA is the key to success.
abstract_id: PUBMED:37843160
Clinical study on laparoscopic minimally invasive surgery and transumbilical single-port laparoscopic surgery in the treatment of benign ovarian tumours and its influence on ovarian functions. Objective: The objective of this study was to explore the influence of traditional laparoscopic surgery and transumbilical single-port laparoscopic surgery on ovarian function in patients with benign ovarian tumours.
Materials And Methods: Forty-four patients with benign ovarian tumours who were treated in our hospital from January 2020 to June 2021 were selected and randomly divided into two groups of 22 cases each according to a random number table. The conventional group was treated with conventional laparoscopic surgery, while the modified group was treated with transumbilical single-port laparoscopic surgery. Measurement data were compared with the t-test and enumeration (count) data with the χ2 test. The clinical operation-related indicators, ovarian function (follicle-stimulating hormone, E2 and luteinising hormone), complication incidence, Visual Analogue Scale (VAS) scores and cosmetic satisfaction scores of the two groups were compared.
Results: There were no significant differences in complications or operation duration between the two groups (P > 0.05). After treatment, the ovarian function indexes and cosmetic satisfaction scores of the modified group were significantly superior to those of the conventional group (P < 0.05). In addition, the intraoperative bleeding volume, post-operative time to first flatus, hospital stay and three-dimensional VAS scores on day 1 and day 3 after surgery were lower in the modified group than in the conventional group (P < 0.05).
Conclusion: Transumbilical single-port laparoscopic surgery for benign ovarian tumours has a significant clinical effect: it can effectively reduce intraoperative bleeding, improve ovarian function, relieve surgical pain, promote rapid post-operative recovery and improve patients' satisfaction with the cosmetic outcome. It is worthy of clinical application.
abstract_id: PUBMED:20941977
Laparoscopic approach in gallbladder agenesis--an intraoperative surprise Introduction: The congenital absence of the gallbladder in the absence of biliary atresia is extremely rare, world literature recognizing only 413 cases. The aim of this study is to clarify the diagnostic and therapeutic approach of this rare condition.
Method: We retrospectively analyzed the first 2 cases of gallbladder agenesis admitted and surgically treated at the Emergency Hospital, Bucharest.
Results: The first case (woman, 23 years old) had typical biliary complaints at admission, with a shrunken gallbladder and lithiasis on ultrasound. A laparoscopic approach was used, but no gallbladder was found. After the non-therapeutic laparoscopy the biliary symptoms disappeared. In the second case (woman, 52 years old) the admission was for colicky pain in the upper abdominal quadrant, and transparietal abdominal ultrasound showed chronic cholecystitis. Common bile duct dilatation was revealed during laparoscopy. After conversion to laparotomy, intraoperative cholangiography was performed, but no other biliary pathology was revealed. The initial complaints also disappeared after surgery.
Conclusions: We find the laparoscopic approach an effective method for the diagnosis of gallbladder agenesis. Postoperative magnetic resonance cholangiopancreatography is a very useful imaging tool to rule out an intrahepatic gallbladder.
abstract_id: PUBMED:37029287
Thoracoscopic and laparoscopic approach for pleuroperitoneal communication under peritoneal dialysis: a report of four cases. Background: Pleuroperitoneal communication (PPC) is a rare complication of continuous ambulatory peritoneal dialysis (CAPD) and often forces patients to switch to hemodialysis. The efficacy of video-assisted thoracic surgery (VATS) for PPC has been reported in some recent series; however, there is no standard approach for these complications. In this case series, we present a combined thoracoscopic and laparoscopic approach for PPC in four patients to better assess its feasibility and efficiency.
Case Presentation: Clinical characteristics, perioperative findings, surgical procedures, and clinical outcomes were retrospectively analyzed. We combined VATS with a laparoscopic approach to detect and repair the diaphragmatic lesions responsible for PPC. We first performed pneumoperitoneum in all patients following thoracoscopic exploration. In two cases, we found bubbles gushing out of a small pore in the central tendon of the diaphragm. The lesions were closed with 4-0 non-absorbable monofilament sutures, covered with a sheet of absorbable polyglycolic acid (PGA) felt, and sprayed with fibrin glue. In the other two cases without bubbles, a laparoscope was inserted, and we observed the diaphragm from the abdominal side. In one of the two cases, two pores were detected on the abdominal side. The lesions were closed using sutures and reinforced using the same procedure. In one case, we failed to detect a pore using VATS combined with the laparoscopic approach. Therefore, we covered the diaphragm with only a sheet of PGA felt and fibrin glue. There was no recurrence of PPC, and CAPD was resumed at an average of 11.3 days.
Conclusions: The combined thoracoscopic and laparoscopic approach is an effective treatment for detecting and repairing the lesions responsible for PPC.
abstract_id: PUBMED:31803289
Adrenalectomy by laparoscopic anterolateral transperitoneal approach for patients with previous abdominal surgery. Adrenal surgery has been radically changed by the laparoscopic approach, and we wonder whether the increase in the number of adrenalectomies is entirely justified by a better understanding of the pathology and improved diagnostic methods. The type of approach (transabdominal/retroperitoneal) remains a matter of the surgeon's experience. Method: In the past 8 years, we have performed more than 200 laparoscopic adrenalectomies by the transperitoneal approach, 24 of them in patients who had previously undergone significant abdominal surgery (cholecystectomy, gastric surgery, colectomy, bowel obstruction, exploratory laparoscopy, and adrenalectomy). The patients had a variety of adrenal pathologies such as Cushing disease, Cushing syndrome, Conn syndrome, incidentaloma, pheochromocytoma and even carcinoma. Results: Three cases were converted to the open approach, only one because of adhesions; the other reasons for conversion were splenic infarction and difficulty in mobilizing the tumor. Operating time was not significantly prolonged by the adhesions (40-360 min, median time 127 min). The postoperative course was uneventful, with no morbidity or mortality and a fast recovery. Conclusions: Choosing the type of approach is related to surgeon experience, although 79-94% of surgeons prefer the transabdominal lateral approach. We believe that with an experienced surgical team there is no difficulty in performing adrenalectomy by the transabdominal approach, with no significantly prolonged operating time, even in patients who have previously had abdominal surgery.
Answer: The laparoscopic approach does have an impact on cardiac functions during surgery. A pilot study assessed the influence of laparoscopic procedures on selected cardiac functions in both significantly obese patients and subjects with normal body weight using transesophageal echocardiography (TEE). The study found that compared to baseline measurements, the establishment of capnoperitoneum (CP) and patient positioning for surgery (SP) were associated with an increase in heart rate (HR) and blood pressure (BP) in both groups of patients. However, the ejection fraction did not change significantly, and the changes in HR were accompanied by an increase in cardiac output (CO). The transmitral flow parameters did not change significantly. Hemodynamic changes were comparable in both the non-obese and obese patients, with an increase in CO and pressure-rate product observed in obese individuals. The study concluded that in subjects without manifest cardiovascular disease, neither systolic nor diastolic performance was significantly affected by the introduction of capnoperitoneum and patient positioning for surgery. Similar results were observed in obese and non-obese subjects (PUBMED:11433903).
Therefore, while the laparoscopic approach does affect cardiac functions, such as increasing HR and BP, it does not significantly alter systolic or diastolic performance in patients without existing cardiovascular disease, regardless of obesity status.
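Purely as an illustration of the textbook relations the study's interpretation rests on (cardiac output = heart rate x stroke volume; rate-pressure product = heart rate x systolic blood pressure), the short Python sketch below shows how an unchanged stroke volume combined with a higher heart rate produces a rise in cardiac output. Only the baseline cardiac output of 5.8 l/min comes from PUBMED:11433903; the heart-rate and blood-pressure values are hypothetical, since the abstract does not report them numerically.

# Minimal sketch of the hemodynamic relations behind the reported findings.
# Assumptions: baseline HR of 70/min, post-capnoperitoneum HR of 78/min and a
# systolic BP of 135 mmHg are hypothetical; only the 5.8 l/min baseline cardiac
# output is taken from the abstract.

def cardiac_output(hr_bpm: float, stroke_volume_l: float) -> float:
    # CO (l/min) = heart rate (beats/min) * stroke volume (l/beat)
    return hr_bpm * stroke_volume_l

def rate_pressure_product(hr_bpm: float, sbp_mmhg: float) -> float:
    # RPP = heart rate * systolic blood pressure
    return hr_bpm * sbp_mmhg

baseline_co_l_min = 5.8                            # from PUBMED:11433903
hr_baseline = 70.0                                 # hypothetical
stroke_volume = baseline_co_l_min / hr_baseline    # ~0.083 l/beat, held constant

hr_after_capnoperitoneum = 78.0                    # hypothetical increase
print(round(cardiac_output(hr_after_capnoperitoneum, stroke_volume), 1))  # ~6.5 l/min
print(rate_pressure_product(hr_after_capnoperitoneum, 135.0))             # 10530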
Instruction: Do maternal opioids reduce neonatal regional brain volumes?
Abstracts:
abstract_id: PUBMED:24945162
Do maternal opioids reduce neonatal regional brain volumes? A pilot study. Objective: A substantial number of children exposed to gestational opioids have neurodevelopmental, behavioral and cognitive problems. Opioids are not neuroteratogens but whether they affect the developing brain in more subtle ways (for example, volume loss) is unclear. We aimed to determine the feasibility of using magnetic resonance imaging (MRI) to assess volumetric changes in healthy opioid-exposed infants.
Study Design: Observational pilot cohort study conducted in two maternity hospitals in New South Wales, Australia. Maternal history and neonatal urine and meconium screens were obtained to confirm drug exposure. Volumetric analysis of MRI scans was performed with the ITK-snap program.
Result: Scans for 16 infants (mean (s.d.) gestational age: 40.9 (1.5) weeks, birth weight: 3022.5 (476.6) g, head circumference (HC): 33.7 (1.5) cm) were analyzed. Six (37.5%) infants had HC <25th percentile. Fourteen mothers used methadone, four used buprenorphine and 11 used more than one opioid (including heroin, seven). All scans were structurally normal. Whole brain volumes (357.4 (63.8) ml) and basal ganglia volumes (14.5 (3.5) ml) were significantly smaller than population means (425.4 (4.8) and 17.1 (4.4) ml, respectively), but lateral ventricular volumes (3.5 (1.8) ml) were larger than population values (2.1 (1.5) ml).
Conclusion: Our pilot study suggests that brain volumes of opioid-exposed babies may be smaller than population means and that specific regions, for example, basal ganglia, that are involved in neurotransmission, may be particularly affected. Larger studies including correlation with neurodevelopmental outcomes are warranted to substantiate this finding.
abstract_id: PUBMED:37008014
Maternal opioids age-dependently impair neonatal respiratory control networks. Infants exposed to opioids in utero are an increasing clinical population and these infants are often diagnosed with Neonatal Abstinence Syndrome (NAS). Infants with NAS have diverse negative health consequences, including respiratory distress. However, many factors contribute to NAS, confounding the ability to understand how maternal opioids directly impact the neonatal respiratory system. Breathing is controlled centrally by respiratory networks in the brainstem and spinal cord, but the impact of maternal opioids on developing perinatal respiratory networks has not been studied. Using progressively more isolated respiratory network circuitry, we tested the hypothesis that maternal opioids directly impair neonatal central respiratory control networks. Fictive respiratory-related motor activity from isolated central respiratory networks was age-dependently impaired in neonates after maternal opioids within more complete respiratory networks (brainstem and spinal cords), but unaffected in more isolated networks (medullary slices containing the preBötzinger Complex). These deficits were due, in part, to lingering opioids within neonatal respiratory control networks immediately after birth and involved lasting impairments to respiratory pattern. Since opioids are routinely given to infants with NAS to curb withdrawal symptoms and our previous work demonstrated acute blunting of opioid-induced respiratory depression in neonatal breathing, we further tested the responses of isolated networks to exogenous opioids. Isolated respiratory control networks also demonstrated age-dependent blunted responses to exogenous opioids that correlated with changes in opioid receptor expression within a primary respiratory rhythm generating region, the preBötzinger Complex. Thus, maternal opioids age-dependently impair neonatal central respiratory control and responses to exogenous opioids, suggesting central respiratory impairments contribute to neonatal breathing destabilization after maternal opioids and likely contribute to respiratory distress in infants with NAS. These studies represent a significant advancement of our understanding of the complex effects of maternal opioids, even late in gestation, contributing to neonatal breathing deficits, necessary first steps in developing novel therapeutics to support breathing in infants with NAS.
abstract_id: PUBMED:34342017
Maternal choline supplementation mitigates alcohol exposure effects on neonatal brain volumes. Background: Prenatal alcohol exposure (PAE) is associated with smaller regional and global brain volumes. In rats, gestational choline supplementation mitigates adverse developmental effects of ethanol exposure. Our recent randomized, double-blind, placebo-controlled maternal choline supplementation trial showed improved somatic and functional outcomes in infants at 6.5 and 12 months postpartum. Here, we examined whether maternal choline supplementation protected the newborn brain from PAE-related volume reductions and, if so, whether these volume changes were associated with improved infant recognition memory.
Methods: Fifty-two infants born to heavy-drinking women who had participated in a choline supplementation trial during pregnancy underwent structural magnetic resonance imaging with a multi-echo FLASH protocol on a 3T Siemens Allegra MRI (median age = 2.8 weeks postpartum). Subcortical regions were manually segmented. Recognition memory was assessed at 12 months on the Fagan Test of Infant Intelligence (FTII). We examined the effects of choline on regional brain volumes, whether choline-related volume increases were associated with higher FTII scores, and the degree to which the regional volume increases mediated the effects of choline on the FTII.
Results: Usable MRI data were acquired in 50 infants (choline: n = 27; placebo: n = 23). Normalized volumes were larger in six of 12 regions in the choline than placebo arm (t ≥ 2.05, p ≤ 0.05) and were correlated with the degree of maternal choline adherence (β ≥ 0.28, p ≤ 0.04). Larger right putamen and corpus callosum were related to higher FTII scores (r = 0.36, p = 0.02) with a trend toward partial mediation of the choline effect on recognition memory.
Conclusions: High-dose choline supplementation during pregnancy mitigated PAE-related regional volume reductions, with larger volumes associated with improved 12-month recognition memory. These results provide the first evidence that choline may be neuroprotective against PAE-related brain structural deficits in humans.
abstract_id: PUBMED:33716765
Maternal Methadone Destabilizes Neonatal Breathing and Desensitizes Neonates to Opioid-Induced Respiratory Frequency Depression. Pregnant women and developing infants are understudied populations in the opioid crisis, despite the rise in opioid use during pregnancy. Maternal opioid use results in diverse negative outcomes for the fetus/newborn, including death; however, the effects of perinatal (maternal and neonatal) opioids on developing respiratory circuitry are not well understood. Given the profound depressive effects of opioids on central respiratory networks controlling breathing, we tested the hypothesis that perinatal opioid exposure impairs respiratory neural circuitry, creating breathing instability. Our data demonstrate maternal opioids increase apneas and destabilize neonatal breathing. Maternal opioids also blunted opioid-induced respiratory frequency depression acutely in neonates, a unique finding since adult respiratory circuitry does not desensitize to opioids. This desensitization normalized rapidly between postnatal days 1 and 2 (P1 and P2), the same age quantal slowing emerged in respiratory rhythm. These data suggest significant reorganization of respiratory rhythm generating circuits at P1-2, the same time as the preBötzinger Complex (key site of respiratory rhythm generation) becomes the dominant respiratory rhythm generator. Thus, these studies provide critical insight relevant to the normal developmental trajectory of respiratory circuits and suggest changes to mutual coupling between respiratory oscillators, while also highlighting how maternal opioids alter these developing circuits. In conclusion, the results presented demonstrate neurorespiratory disruption by maternal opioids and blunted opioid-induced respiratory frequency depression with neonatal opioids, which will be important for understanding and treating the increasing population of neonates exposed to gestational opioids.
abstract_id: PUBMED:30010813
Heritability of Regional Brain Volumes in Large-Scale Neuroimaging and Genetic Studies. Brain genetics is an active research area. The degree to which genetic variants impact variations in brain structure and function remains largely unknown. We examined the heritability of regional brain volumes (P ~ 100) captured by single-nucleotide polymorphisms (SNPs) in UK Biobank (n ~ 9000). We found that regional brain volumes are highly heritable in this study population and common genetic variants can explain up to 80% of their variabilities (median heritability 34.8%). We observed omnigenic impact across the genome and examined the enrichment of SNPs in active chromatin regions. Principal components derived from regional volume data are also highly heritable, but the amount of variance in brain volume explained by the component did not seem to be related to its heritability. Heritability estimates vary substantially across large-scale functional networks, exhibit a symmetric pattern across left and right hemispheres, and are consistent in females and males (correlation = 0.638). We repeated the main analysis in Alzheimer's Disease Neuroimaging Initiative (n ~ 1100), Philadelphia Neurodevelopmental Cohort (n ~ 600), and Pediatric Imaging, Neurocognition, and Genetics (n ~ 500) datasets, which demonstrated that more stable estimates can be obtained from the UK Biobank.
abstract_id: PUBMED:35430247
Neonatal Intensive Care Unit Network Neurobehavioral Scale Profiles in Full-Term Infants: Associations with Maternal Adversity, Medical Risk, and Neonatal Outcomes. Objectives: To examine healthy, full-term neonatal behavior using the Neonatal Intensive Care Unit Network Neurobehavioral Scale (NNNS) in relation to measures of maternal adversity, maternal medical risk, and infant brain volumes.
Study Design: This was a prospective, longitudinal, observational cohort study of pregnant mothers followed from the first trimester and their healthy, full-term infants. Infants underwent an NNNS assessment and high-quality magnetic resonance imaging 2-5 weeks after birth. A latent profile analysis of NNNS scores categorized infants into neurobehavioral profiles. Univariate and multivariate analyses compared differences in maternal factors (social advantage, psychosocial stress, and medical risk) and neonatal characteristics between profiles.
Results: The latent profile analysis of NNNS summary scales of 296 infants generated 3 profiles: regulated (46.6%), hypotonic (16.6%), and fussy (36.8%). Infants with a hypotonic profile were more likely to be male (χ2 = 8.601; P = .014). Fussy infants had smaller head circumferences (F = 3.871; P = .022) and smaller total brain (F = 3.522; P = .031) and cerebral white matter (F = 3.986; P = .020) volumes compared with infants with a hypotonic profile. There were no differences between profiles in prenatal maternal health, social advantage, or psychosocial stress.
Conclusions: Three distinct neurobehavioral profiles were identified in healthy, full-term infants with hypotonic and fussy neurobehavioral features related to neonatal brain volumes and head circumference, but not prenatal exposure to socioeconomic or psychosocial adversity. Follow-up beyond the neonatal period will determine if identified profiles at birth are associated with subsequent clinical or developmental outcomes.
abstract_id: PUBMED:30662399
Maternal Adiposity Influences Neonatal Brain Functional Connectivity. The neural mechanisms associated with obesity have been extensively studied, but the impact of maternal obesity on fetal and neonatal brain development remains poorly understood. In this study of full-term neonates, we aimed to detect potential neonatal functional connectivity alterations associated with maternal adiposity, quantified via body-mass-index (BMI) and body-fat-mass (BFM) percentage, based on seed-based and graph theoretical analysis using resting-state fMRI data. Our results revealed significant neonatal functional connectivity alterations in all four functional domains that are implicated in adult obesity: sensory cue processing, reward processing, cognitive control, and motor control. Moreover, some of the detected areas showing regional functional connectivity alterations also showed global degree and efficiency differences. These findings provide important clues to the potential neural basis for cognitive and mental health development in offspring of obese mothers and may lead to the derivation of imaging-based biomarkers for the early identification of risks for timely intervention.
abstract_id: PUBMED:31711132
Maternal Dietary Intake of Omega-3 Fatty Acids Correlates Positively with Regional Brain Volumes in 1-Month-Old Term Infants. Maternal nutrition is an important factor for infant neurodevelopment. However, prior magnetic resonance imaging (MRI) studies on maternal nutrients and infant brain have focused mostly on preterm infants or on a few specific nutrients and a few specific brain regions. We present a first study in term-born infants, comprehensively correlating 73 maternal nutrients with infant brain morphometry at the regional (61 regions) and voxel (over 300,000 voxels) levels. Both maternal nutrition intake diaries and infant MRI were collected at 1 month of life (0.9 ± 0.5 months) for 92 term-born infants (among them, 54 infants were purely breastfed and 19 were breastfed most of the time). Intake of nutrients was assessed via a standardized food frequency questionnaire. No nutrient was significantly correlated with any of the volumes of the 61 autosegmented brain regions. However, increased volumes within subregions of the frontal cortex and corpus callosum at the voxel level were positively correlated with maternal intake of omega-3 fatty acids, retinol (vitamin A) and vitamin B12, both with and without correction for postmenstrual age and sex (P < 0.05, q < 0.05 after false discovery rate correction). Omega-3 fatty acids remained significantly correlated with infant brain volumes after subsetting to the 54 infants who were exclusively breastfed, but retinol and vitamin B12 did not. This provides an impetus for future larger studies to better characterize the effect size of dietary variation and correlation with neurodevelopmental outcomes, which can lead to improved nutritional guidance during pregnancy and lactation.
abstract_id: PUBMED:36668788
Intrauterine and Neonatal Exposure to Opioids: Toxicological, Clinical, and Medico-Legal Issues. Opioids have a rapid transplacental passage (i.e., less than 60 min); furthermore, symptoms characterize the maternal and fetal withdrawal syndrome. Opioid withdrawal significantly impacts the fetus, inducing worse outcomes and a risk of mortality. Moreover, neonatal abstinence syndrome (NAS) follows the delivery, lasts up to 10 weeks, and requires intensive management. Therefore, the prevention and adequate management of NAS are relevant public health issues. This review aims to summarize the most updated evidence in the literature regarding toxicological, clinical, and forensic issues of intrauterine exposure to opioids to provide a multidisciplinary, evidence-based approach for managing such issues. Further research is required to standardize testing and to better understand the distribution of opioid derivatives in each specimen type, as well as the clinically relevant cutoff concentrations in quantitative testing results. A multidisciplinary approach is required, with obstetricians, pediatricians, nurses, forensic doctors and toxicologists, social workers, addiction specialists, and politicians all working together to implement social welfare and social services for the baby when needed. The healthcare system should encourage multidisciplinary activity in this field and direct suspected maternal and neonatal opioid intoxication cases to local referral centers.
abstract_id: PUBMED:29560921
Salivary Cortisol and Regional Brain Volumes Among Veterans With and Without Posttraumatic Stress Disorder. Background: Human studies have often found that brain regions rich in glucocorticoid receptors exhibit smaller volume in samples with past trauma and ongoing stress; however, relatively little research has addressed the hypothesis that such smaller volumes can be traced to elevated circulating glucocorticoid hormones (GCs). This issue takes on renewed interest in light of recent proposals to treat symptoms of stress disorders such as posttraumatic stress disorder (PTSD) with exogenous synthetic GCs. We sought to examine the relation of circulating GCs to brain macrostructure among veterans with and without PTSD.
Methods: Participants (n = 90) included combat veterans with and without PTSD. Veterans completed self-report surveys, home-based cortisol samples, reactive cortisol samples over the course of two serial Trier Social Stress Tests, a low-dose dexamethasone suppression test, and structural magnetic resonance brain imaging over the course of 3 to 5 days.
Results: No associations were observed between any salivary cortisol index and the volumes of the hippocampus or amygdala. A negative association was observed between evening basal cortisol and both FreeSurfer global volume and BrainImage supratentorial tissue volume. This effect was moderated by PTSD. Also observed was a positive association between reactive cortisol and these same brain volumes.
Conclusions: Estimates of cortical but not hippocampal or amygdala volume were moderately associated with evening basal salivary cortisol and cortisol reactivity to a social stressor. Existing models relating GC receptor density, circulating cortisol levels, and regional brain volumes received little support.
Answer: Yes, maternal opioids may reduce neonatal regional brain volumes. A pilot study conducted in New South Wales, Australia, found that brain volumes of opioid-exposed babies may be smaller than population means, and specific regions such as the basal ganglia, which are involved in neurotransmission, may be particularly affected (PUBMED:24945162). Additionally, maternal opioids have been shown to age-dependently impair neonatal central respiratory control networks, which could suggest broader impacts on the developing brain (PUBMED:37008014). Furthermore, maternal methadone use has been associated with destabilizing neonatal breathing and desensitizing neonates to opioid-induced respiratory frequency depression, indicating that maternal opioids alter developing respiratory circuits in the brain (PUBMED:33716765). These findings suggest that maternal opioid use during pregnancy can have significant effects on the development of the neonatal brain, including reductions in regional brain volumes.
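As a small illustrative check of the volume differences cited above, the Python sketch below recomputes how far each regional mean sits from the quoted population mean. The raw means come from PUBMED:24945162; the percentage framing is arithmetic added here for illustration and is not reported in the abstract.

# Illustrative arithmetic only: relative differences between the opioid-exposed
# infants' mean volumes and the population means quoted in PUBMED:24945162.

volumes_ml = {
    # region: (opioid-exposed mean, population mean)
    "whole brain":        (357.4, 425.4),
    "basal ganglia":      (14.5, 17.1),
    "lateral ventricles": (3.5, 2.1),
}

for region, (exposed, population) in volumes_ml.items():
    diff_pct = 100.0 * (exposed - population) / population
    print(f"{region}: {diff_pct:+.1f}% vs population mean")

# Output: whole brain -16.0%, basal ganglia -15.2%, lateral ventricles +66.7%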
Instruction: Intraoperative examination of sentinel nodes in breast cancer: is the glass half full or half empty?
Abstracts:
abstract_id: PUBMED:15525830
Intraoperative examination of sentinel nodes in breast cancer: is the glass half full or half empty? Background: Intraoperative identification of positive sentinel lymph nodes in patients with breast cancer may avoid a return to the operating room.
Methods: In a group of 402 consecutive patients with primary breast cancer who underwent sentinel lymph node biopsy, an intraoperative examination (IE) was obtained in 236 cases either by frozen section (FS; n = 68) or by touch preparation cytology (TP; n = 168).
Results: IE had an accuracy of 89% (209 of 236), but it identified only 52 of 77 positive cases (sensitivity, 68%). There were 25 false-negative cases (13.7%), of which 7 were macrometastases and 18 micrometastases (P < .001). Six macrometastases were missed by TP and one by FS (P = .9). There were two false-positive cases (3.7%). Overall, 48 (20%) of 236 patients avoided a delayed return to the operating room for a completion lymphadenectomy because of IE findings. This occurred in 10% of patients with tumors <1 cm in diameter, in 20% of those with tumors between 1 and 2 cm, and in 34% of those with tumors >2 cm in diameter (P = .05). The cost savings for the Italian Health System amounted to 198,040 euros (US$223,794) in these patients.
Conclusions: IE has acceptable sensitivity for lymph node macrometastases, but it is a weak tool for diagnosing micrometastases. FS and TP are roughly equivalent. IE allows management changes, because approximately 20% of all patients are expected to undergo synchronous axillary dissection, and it is particularly helpful in T2 patients. This may allow substantial cost savings for the health-care system.
abstract_id: PUBMED:12903590
Extemporaneous examination of the sentinel lymph node in breast cancer: is the glass half full or half empty? Intra-operative examination of the sentinel LN is controversial. Concordance with the definitive exam of the SLN in this series was 81%, though only 54% of positive cases were diagnosed. Micrometastases and isolated tumor cells (ITC) were usually lost intraoperatively, accounting for 14% of cases. Frozen section and touch prep of the SLN were approximately equivalent. The latter has the advantage of preserving tissue for step-analysis of the SLN. The ultimate method of intraoperative analysis of the SLN, combining cost-effectiveness and accuracy, remains to be determined.
abstract_id: PUBMED:24416560
Diagnostic value of intraoperative histopathological examination of the sentinel nodes in breast cancer and skin melanoma - preliminary results of a single centre retrospective study. Objective: Intraoperative histopathological examination of the sentinel nodes enables selection of patients who need dissection of the regional lymphatic system during the same operation. The aim of this study is to evaluate the diagnostic value of intraoperative histopathological examination of the sentinel nodes in breast cancer and skin melanoma. Intraoperative histopathology of the sentinel nodes as a diagnostic method is used in patients with melanoma and breast cancer. Recent studies have proved it to be an effective method for evaluating the nodes in the final histopathology. Intraoperative histopathological examination of the sentinel nodes is not performed routinely and there is no clear position on this issue. In this paper we try to prove that the intraoperative test gives patients the simultaneous benefits of removal of regional lymph node metastases and earlier initiation of adjuvant therapy.
Methods: The study comprises 137 patients with breast cancer and 35 patients with malignant skin melanoma. Sentinel nodes were intraoperatively sectioned and examined by means of the imprint method and frozen section evaluation. The patients with positive sentinel nodes underwent immediate dissection of regional lymph nodes. Those with negative sentinel nodes diagnosed in the intraoperative examination, but positive in final pathologic results, underwent subsequent dissection of regional lymph nodes.
Results: 60 sentinel lymph nodes were found in 35 patients with skin melanoma. In 3 patients, 3 sentinel lymph nodes were false negative in the intraoperative histopathological examination. No false positive sentinel lymph nodes were found. 249 sentinel lymph nodes were found in the intraoperative histopathological examination in 137 patients with breast cancer. There were no false positive sentinel nodes, but there were 7 false negative sentinel nodes. In this study, only 5 (3.6%) patients with breast cancer and 3 (8.5%) patients with skin melanoma required another regional operation.
Conclusion: The method of intraoperative histopathological evaluation of the sentinel nodes enables identification of metastases in these lymph nodes and makes it possible to carry out a one-step regional lymphadenectomy and start adjuvant therapy earlier.
abstract_id: PUBMED:15567689
How to avoid the uncertainties of intraoperative examination of the sentinel lymph node in breast cancer? The sentinel lymph node procedure is now admitted by many teams for axillary evaluation in the early stage of breast cancer. The classical technique consists in an intraoperative examination of the sentinel lymph node under general anaesthesia during tumorectomy, deciding whether or not complete axillary lymphadenectomy must be done. Intraoperative examination seems to us to have a poor predictive value. In the case of a false positive, the surgeon would perform lymphadenectomy unnecessarily, while a false negative would mean that the patient would have to be re-operated for lymphadenectomy once the definitive results have become available. For all these reasons, we propose the detection of the sentinel lymph node under local anaesthesia and to await its definitive analysis before carrying out tumorectomy on the patient and axillary lymphadenectomy if necessary under general anesthesia. Hence, we consider that the best way to avoid the uncertainties of an intraoperative examination of the sentinel lymph node is not to carry out intraoperative examinations.
abstract_id: PUBMED:14584643
How to avoid the uncertainties of intraoperative examination of the sentinel lymph node in breast cancer? Numerous researchers have confirmed the diagnostic relevance of the sentinel lymph node (SLN) examination in breast carcinoma. Many technical problems are analyzed which are correlated with the intraoperative examination of the SLN and its sensitivity and specificity. In order to avoid the incidence of false positive or false negative intraoperative diagnoses, the authors propose the examination of SLN under local anesthesia, awaiting its definitive analysis before carrying out tumorectomy and/or axillary lymphadenectomy.
abstract_id: PUBMED:35077067
Application of fluorescent immunocytochemistry in intraoperative diagnostics of metastases in sentinel lymph nodes in early breast cancer. The aim of the study was to show the possibilities of fluorescent immunocytochemistry in urgent intraoperative examination of sentinel lymph nodes in patients with early breast cancer. The authors analyzed data on the state of the lymph nodes in 94 patients with early breast cancer operated on between December 2016 and January 2018 in the Department of Reconstructive Plastic Surgery of the Breast and Skin of the P.A. Herzen Moscow Oncological Institute. Using the «Tekhnefit99ᵐTc» radiopharmaceutical during the operation, sentinel lymph nodes were isolated, and their status was assessed by urgent intraoperative cytology. In difficult-to-diagnose cases, fluorescent immunocytochemistry was used, which made it possible to avoid hypo- and overdiagnosis in 30 patients with early breast cancer. The sensitivity of the urgent cytological examination of sentinel lymph node smears was 83.3%, the specificity 100%, the efficiency 83%, the predictive value of a positive result 83.3%, and the predictive value of a negative result 100%. Thus, the diagnostic accuracy of urgent cytological examination of the sentinel lymph node was 94%.
abstract_id: PUBMED:18191603
Sentinel lymph node biopsy under local anaesthesia: how to avoid the disadvantages of intraoperative examination? The sentinel lymph node procedure has become the standard in the surgical management of localised breast cancer. However, it is subject to the uncertainties of intraoperative examination. Indeed, intraoperative examination has three major disadvantages: the type of histological method (frozen section versus imprint cytology), the size of sentinel node metastasis (macro- versus micrometastases) and the time required for this technique. All of these limitations can lead to secondary re-interventions to complete axillary lymph node dissection. A few medical teams have described a new surgical strategy to avoid these limitations: detection of the sentinel lymph node under local anaesthesia, waiting for the definitive histological analysis before carrying out lumpectomy and, if necessary, axillary lymphadenectomy under general anaesthesia. We reviewed the literature on this new procedure to evaluate its feasibility and assess the technical aspects.
abstract_id: PUBMED:32949894
Predictive factors of lymph node metastasis and effectiveness of intraoperative examination of sentinel lymph node in breast carcinoma: A retrospective Belgian study. Recently, several trials demonstrated the safety of omitting axillary lymph node dissection in clinically N0 patients with positive sentinel nodes in select subgroups. However, this remains difficult to convey to surgeons and clinicians, who have performed intraoperative examination of the sentinel node and axillary dissection for many years. Hence, we decided to review our practice, first to highlight the predictive factors of node metastasis and, second, to evaluate the effectiveness of intraoperative examination of the sentinel node. There were 406 total procedures. The rate of positive lymph nodes in the final diagnosis was 27%. Factors associated with metastasis were age, tumour size, TNM classification, tumour grade, vascular invasion, molecular classification and KI-67 index. The rate of reoperation was 6.2% in cases with final positive nodes; however, the complementary ALND was justified in only 2.7%. Forty-nine percent of SLNs were examined during surgery (IOESLN), with a false-negative rate of 11.8%. Sixty-three intraoperative examinations were necessary to prevent a second operation on a patient. We recommend changing the clinical management of the axilla, resulting in fewer ALNDs in selected cN0, SLN-positive patients. In keeping with data from recent large clinical trials (ACOSOG Z0011, AMAROS and OTOASOR), our results indicate that intraoperative examination in selected cN0, SLN-positive Belgian patients is no longer effective.
abstract_id: PUBMED:11776496
The problem of the accuracy of intraoperative examination of axillary sentinel nodes in breast cancer. Background: Sentinel node (SN) biopsy has become accepted as a reliable method of predicting the state of the axilla in breast cancer. The key issue, however, is the accuracy of the pathological evaluation of the biopsied node, which should be done intraoperatively whenever possible.
Methods: In our initial experience on 192 patients using a conventional intraoperative frozen section method, the false-negative rate was 6.3%, and the negative predictive value was 93.7%. We devised a new and exhaustive intraoperative method, requiring about 40 minutes, in which pairs of sections are taken every 50 μm for the first 15 sections and every 100 μm thereafter, sampling the entire node. Sentinel node metastases were found in 143 of the 376 T1N0 cases examined (38%).
Results: Metastases were always identified on hematoxylin and eosin sections, although in 4% of cases, cytokeratin immunostaining on adjacent sections was useful for confirming malignancy. In 233 patients the SNs were disease-free; of these patients, 222 had metastasis-free axillary nodes, and 11 (4.7%) had another metastatic node.
Conclusion: Extensive intraoperative examination of frozen sentinel nodes correctly predicts an uninvolved axilla in 95.3% of cases (negative predictive value). This method is, therefore, suitable for identifying patients in whom axillary dissection can be avoided.
abstract_id: PUBMED:21656717
Multicentre evaluation of intraoperative molecular analysis of sentinel lymph nodes in breast carcinoma. Background: Ideally, intraoperative sentinel lymph node (SLN) analysis in breast cancer should be automated, have high concordance with extensive histopathology, and be applicable in any hospital setting. A prospective multicentre evaluation of the one-step nucleic acid amplification (OSNA) automated molecular diagnostic system of SLN analysis was undertaken.
Methods: Intraoperative examination of SLNs from 204 patients with breast cancer was performed by OSNA at four sites in the UK. Half of each SLN was assessed by OSNA (for cytokeratin 19 mRNA) and the remaining half was paraffin embedded for intensive histological examination at ten levels. Discordant cases were reanalysed by further molecular biological techniques and by additional histological examination of all remaining nodal material to ascertain whether the discordance was due to an uneven distribution of metastases, known as tissue allocation bias (TAB).
Results: After exclusion of samples affected by TAB, the overall concordance rate for OSNA versus histopathology was 96.0 per cent, with a sensitivity of 91.7 per cent and a specificity of 96.9 per cent. The median time to process a single SLN was 32 (range 22-97) min, and that for two nodes 42 (30-73) min.
Conclusion: OSNA enables accurate automated intraoperative diagnosis and can be used successfully in different UK hospitals. When the SLN is shown to be positive, the patient can undergo immediate axillary clearance under the same anaesthetic rather than having a delayed second procedure.
Answer: The intraoperative examination of sentinel lymph nodes in breast cancer presents both advantages and limitations, which can be likened to the metaphor of whether the glass is half full or half empty.
On the positive side, intraoperative examination can avoid a return to the operating room for patients with positive sentinel lymph nodes, as it allows for immediate completion lymphadenectomy. This was found to be particularly helpful in patients with larger tumors (T2 patients), with approximately 20% of all patients avoiding delayed surgery due to intraoperative examination findings (PUBMED:15525830). Additionally, the use of fluorescent immunocytochemistry in urgent intraoperative cytology has shown high diagnostic accuracy, with a sensitivity of 83.3% and a specificity of 100%, thus potentially avoiding under- or overdiagnosis in patients with early breast cancer (PUBMED:35077067).
However, the limitations of intraoperative examination include its relatively low sensitivity for detecting micrometastases, with a sensitivity of only 68% reported in one study, and a significant number of false negatives (PUBMED:15525830). The concordance with definitive examination of sentinel lymph nodes was only 81%, and micrometastases and isolated tumor cells (ITCs) were often missed intraoperatively (PUBMED:12903590). Furthermore, the false-negative rate of intraoperative examination was found to be 11.8% in another study, indicating that a significant number of patients might still require a second operation (PUBMED:32949894).
Some researchers have proposed avoiding the uncertainties of intraoperative examination by not performing it at all, instead opting for sentinel lymph node detection under local anesthesia and awaiting definitive analysis before proceeding with further surgery (PUBMED:14584643, PUBMED:15567689, PUBMED:18191603). This approach aims to reduce the incidence of false positive or false negative intraoperative diagnoses.
In conclusion, the intraoperative examination of sentinel lymph nodes in breast cancer can provide immediate and potentially cost-saving benefits by identifying patients who require further surgery during the initial operation. However, its limitations, particularly in detecting micrometastases, suggest that the technique is not foolproof and may lead to additional surgeries for some patients. Whether the glass is seen as half full or half empty may depend on the perspective and priorities of the healthcare providers and patients involved.
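The diagnostic figures quoted in this answer all reduce to simple count ratios. As an illustrative sketch, the Python snippet below recomputes the reported sensitivity, accuracy, and negative predictive value from the raw counts given in the abstracts; only the counts come from PUBMED:15525830 and PUBMED:11776496, while the helper functions and their names are generic and added here.

# Illustrative sketch: reproducing the quoted diagnostic metrics from raw counts.

def sensitivity(true_pos: int, false_neg: int) -> float:
    # Share of truly node-positive cases detected intraoperatively.
    return true_pos / (true_pos + false_neg)

def negative_predictive_value(true_neg: int, false_neg: int) -> float:
    # Share of intraoperatively negative results that are truly negative.
    return true_neg / (true_neg + false_neg)

# PUBMED:15525830: 52 of 77 node-positive cases found intraoperatively,
# 209 of 236 examinations concordant with final histology.
print(f"{sensitivity(52, 25):.0%}")     # 68%
print(f"{209 / 236:.0%}")               # 89% overall accuracy

# PUBMED:11776496: 222 of 233 SLN-negative patients had a truly negative axilla.
print(f"{negative_predictive_value(222, 11):.1%}")   # 95.3%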
Instruction: Transnational, social, and neighborhood ties and smoking among Latino immigrants: does gender matter?
Abstracts:
abstract_id: PUBMED:25121808
Transnational, social, and neighborhood ties and smoking among Latino immigrants: does gender matter? Objectives: We examined whether transnational ties, social ties, and neighborhood ties were independently associated with current smoking status among Latino immigrants. We also tested interactions to determine whether these associations were moderated by gender.
Methods: We conducted a series of weighted logistic regression analyses (i.e., economic remittances, number of return visits, friend support, family support, and neighborhood cohesion) using the Latino immigrant subsample (n = 1629) of the National Latino and Asian American Study in 2002 and 2003.
Results: The number of past-year return visits to the country-of-origin was positively associated with current smoker status. Gender moderated the association between economic remittances, friend support, and smoking. Remittance behavior had a protective association with smoking, and this association was particularly pronounced for Latino immigrant women. Friendship support lowered the odds of smoking among men, but not women.
Conclusions: Our results underscore the growing importance of transnational networks for understanding Latino immigrant health and the gendered patterns of the associations between social ties, transnational ties, and health risk behaviors.
abstract_id: PUBMED:34262963
Transnational and Local Co-ethnic Social Ties as Coping Mechanisms Against Perceived Discrimination - A Study on the Life Satisfaction of Turkish and Moroccan Minorities in the Netherlands. Perceived ethnic discrimination is known to decrease minorities' life satisfaction. This research investigates the extent to which minorities' local and transnational co-ethnic social ties mitigate the negative effects of perceived discrimination on life satisfaction. Put differently, focusing on the experiences of Turkish and Moroccan minorities, we discuss whether co-ethnic social ties, both locally and transnationally embedded, can be considered coping mechanisms against perceived discrimination. Furthermore, we investigate whether these mechanisms work differently for first- and second-generation minorities. Using the Netherlands Longitudinal Life-course Study, we reveal that perceived discrimination is positively associated with local co-ethnic social ties in the Netherlands, which consequently predict higher life satisfaction for both generations. Surprisingly, we also show that perceived discrimination is associated with stronger transnational co-ethnic social ties only among the second generation, not the first. Having these transnational ties, however, is beneficial for the life satisfaction of both generations. Consequently, we highlight the importance of recognizing the transnational embeddedness of minorities and studying the effects of transnational co-ethnic social ties on subjective well-being outcomes, especially for second-generation minorities.
abstract_id: PUBMED:36093266
Transnational ties with the home country matters: the moderation effect of the relationship between perceived discrimination and self-reported health among foreign workers in Korea. Background: Little attention has been paid to the relationship between perceived discrimination and self-rated health (SRH) among foreign workers in Korea. Transnational ties with the home country are known to be critical among immigrants, as they allow the maintenance of social networks and support. Nonetheless, as far as we know, no studies have examined the impact of transnational ties on SRH itself and the relationship between perceived discrimination and SRH, which the current study tries to examine.
Methods: Logistic regression analyses were conducted using the 2013 Survey on Living Conditions of Foreign Workers in Korea. Adult foreign workers from different Asian countries (n = 1,370) participated in this study. The dependent variable was good SRH, and the independent variable was perceived discrimination. Transnational ties with the home country, as a moderating variable, were categorized into broad (i.e., contacting family members in the home country) vs. narrow types (i.e., visiting the home country).
Results: Foreign workers who perceived discrimination had a lower rate of good SRH than those who did not perceive discrimination. Broad social transnational ties moderated the relationship between perceived discrimination and SRH; narrow social transnational ties did not.
Conclusions: In line with previous studies, an association was found between perceived discrimination and SRH. Broad social transnational ties can be a good source of social support and buffer against the distress of perceived discrimination.
abstract_id: PUBMED:23947776
Contextualizing nativity status, Latino social ties, and ethnic enclaves: an examination of the 'immigrant social ties hypothesis'. Objectives: Researchers have posited that one potential explanation for the better-than-expected health outcomes observed among some Latino immigrants, vis-à-vis their US-born counterparts, may be the strength of social ties and social support among immigrants.
Methods: We examined the association between nativity status and social ties using data from the Chicago Community Adult Health Study's Latino subsample, which includes Mexicans, Puerto Ricans, and other Latinos. First, we used ordinary least squares (OLS) regression methods to model the effect of nativity status on five outcomes: informal social integration; social network diversity; network size; instrumental support; and informational support. Using multilevel mixed-effects regression models, we estimated the association between Latino/immigrant neighborhood composition and our outcomes, and whether these relationships varied by nativity status. Lastly, we examined the relationship between social ties and immigrants' length of time in the USA.
Results: After controlling for individual-level characteristics, immigrant Latinos had significantly lower levels of social ties than their US-born counterparts for all the outcomes, except informational support. Latino/immigrant neighborhood composition was positively associated with being socially integrated and having larger and more diverse social networks. The associations between two of our outcomes (informal social integration and network size) and living in a neighborhood with greater concentrations of Latinos and immigrants were stronger for US-born Latinos than for immigrant Latinos. US-born Latinos maintained a significant social ties advantage over immigrants - regardless of length of time in the USA - for informal social integration, network diversity, and network size.
Conclusion: At the individual level, our findings challenge the assumption that Latino immigrants would have larger networks and/or higher levels of support and social integration than their US-born counterparts. Our study underscores the importance of understanding the contexts that promote the development of social ties. We discuss the implications of these findings for understanding Latino and immigrant social ties and health outcomes.
abstract_id: PUBMED:25090146
Transnational ties and past-year major depressive episodes among Latino immigrants. Latino immigrants live in an increasingly global world in which maintaining contact with kin in the home country is easier than ever. We examined (a) the annual distribution of remittances burden (percentage of remittances/household income) and visits to the home country, (b) the association of these transnational ties with a past-year major depressive episode (MDE), and (c) moderation by Latino subethnicity or gender. We conducted weighted logistic regression analyses with the Latino immigrant subsample (N = 1,614) of the National Latino and Asian American Study. Mexican and Other Latino immigrants had greater remittances burden than Puerto Rican migrants. Cuban immigrants made the fewest visits back home. After adjustment for sociodemographics and premigration psychiatric history, remittances burden decreased odds of MDE (odds ratio [OR] = 0.80, 95% confidence interval [CI] [0.67, 0.98]), whereas visits back home increased odds of MDE (OR = 1.04, 95% CI [1.01, 1.06]). Latino subethnicity was not a significant moderator. Visits back home were more strongly linked to depression among women than men. The distribution of transnational ties differs by Latino subgroup, although its association with depression is similar across groups. Monetary giving through remittances might promote a greater sense of self-efficacy and caregiving for relatives back home that positively affect mental health. Visits back home, especially for women, might signal social stress from strained relationships with kin, spouses, or children left behind, or increased caregiving demands that negatively affect mental health. Clinical practice with immigrants should routinely assess the social resources and strains that fall outside national borders.
abstract_id: PUBMED:19833986
Toward a dynamic conceptualization of social ties and context: implications for understanding immigrant and Latino health. Researchers have posited that social ties and social support may contribute to better-than-expected health outcomes among Mexican immigrants vis-à-vis their US-born counterparts. However, in our review of studies examining social ties and health by immigration-related variables among this group, we found little support for this hypothesis. To better understand the social factors that contribute to the health of Mexicans in the United States, we conducted a qualitative analysis of social relationships and social context among first- and second-generation Mexican women. Our results highlight the interplay between immigration processes and social ties, draw attention to the importance of identity support and transnational social relationships, and suggest ways to reconceptualize the relationship between social contexts, social ties, and immigrant and Latino health.
abstract_id: PUBMED:32133864
Neighborhood Walkability and Overweight/Obese Weight Status Among Latino Adults. Purpose: To examine whether aerobic physical activity mediates the association between neighborhood walkability and overweight/obesity weight status among Latino adults and whether the relative contribution of this pathway linking neighborhood walkability and aerobic activity varies by level of neighborhood social cohesion.
Design: Cross-sectional.
Setting: National Health Interview Survey (NHIS) 2015.
Sample: NHIS adult Latino participants ≥18 years of age (n = 4303).
Measures: Neighborhood walkability, neighborhood social cohesion, body mass index, and aerobic physical activity.
Analysis: To determine whether physical activity mediates the relationship of walkability with overweight/obese weight status, a simple mediation analysis was conducted. Additionally, a moderated mediation analysis was conducted to test whether neighborhood social cohesion had a moderating effect on this relationship.
Results: On average, the sample was 41 years old, 51% were male, 34% had less than a high school education, and 57% were foreign-born. Neighborhood walkability was statistically significantly related to overweight/obese weight status (standardized effect = -0.05, standard error [SE] = 0.02, P = .01). The interaction between walkability and neighborhood social cohesion on physical activity was not significant (standardized effect = 0.06, SE = 0.03, P = .09). Thus, the indirect effect of walkability on overweight/obesity weight status through physical activity was not shown to be modified by neighborhood social cohesion.
Conclusion: Other neighborhood environment factors may play a role in the contribution of neighborhood walkability to overweight/obese weight status among Latinos.
abstract_id: PUBMED:29349245
Mental health of sub-saharan african migrants: The gendered role of migration paths and transnational ties. In Europe, migrants are at higher risk of common mental disorders or psychological distress than are natives. Little is known regarding the social determinants of migrant mental health, particularly the roles played by migration conditions and transnational practices, which may manifest themselves in different ways for men and for women. The goal of this paper was to understand the gendered roles of migration paths and transnational ties in mental health among sub-Saharan African migrants residing in the Paris, France, metropolitan area. This study used data from the Parcours study conducted in 2012-2013, which employed a life-event approach to collect data from a representative sample of migrants who visited healthcare facilities (n = 2468). We measured anxiety and depressive symptoms at the time of data collection with the Patient Health Questionnaire-4 (PHQ-4). Reasons for migration, the living conditions in the host country and transnational ties after migration were taken into account by sex and after adjustment. Our study demonstrates that among sub-Saharan African migrants, mental health is related to the migratory path and the migrant's situation in the host country but differently for women and men. Among women, anxiety and depressive symptoms were strongly related to having left one's home country because of threats to one's life. Among men, residing illegally in the host country was related to impaired mental health. For both women and men, cross-border separation from a child less than 18 years old was not independently associated with anxiety and depressive symptoms. In addition, social and emotional support from relatives and friends-both from the society of origin and of destination-were associated with lower anxiety and depressive symptoms. Migrant mental health may be impaired in the current context of anti-migrant policies and an anti-immigrant social environment in Europe.
abstract_id: PUBMED:28392746
Social ties and embeddedness in old age: older Turkish labour migrants in Vienna. This paper focuses on older Turkish labour migrants and their spouses, who mostly came to Vienna as young adults in the 1960s and thereafter. They are now entering retirement age and constitute a significant part of Vienna's older population. I analyse their understandings of transnational ageing, their social ties and feelings of social embeddedness. For those still mobile, active participation in one of Vienna's Turkish cultural/religious/political associations is identified as a particular source of social embeddedness. I argue that these voluntary associations provide an important place for older migrants to strengthen social ties and are relatively easy to access, including in old age. Nevertheless, I demonstrate that older Turkish labour migrants are exposed to several forms of discrimination, some of which are felt especially strongly in old age, including a lack of adequate institutionalised late life care. In the discussion of the paper, I critically revisit the debate on ethnicity as a resource versus ethnicity as a vulnerability factor in old age. I argue that this debate is misleading since it camouflages other central social categories and relations. I conclude by suggesting closer attention be paid to the specific but multiple generational experiences of older labour migrants and their spouses.
abstract_id: PUBMED:32911445
The effects of neighborhood social ties and networks on mental health and well-being: A qualitative case study of women residents in a middle-class Korean urban neighborhood. Despite the general understanding of the significance of neighborhood social capital on mental health, our understanding of the mechanisms of these effects is limited. This study thus explores the characteristics of neighborhood social ties and networks experienced by women residents in their urban neighborhood in the Seoul metropolitan area, Korea, and how these social relationships contributed to their mental health and well-being. We draw on a phenomenological qualitative study conducted in 2016, which included interviews with 18 full-time or part-time housewives with children. The participants in this study experienced the neighborhood social networks and ties as unessential but supportive relationships, as an extension of mothering, and as a source of stressful social comparisons. With these characteristics, the social relationships had both positive and negative effects on the participants' mental health and well-being because they provided social support but also generated everyday stress. This study illustrates an example of the complex link between neighborhood social capital and mental health. It also contributes to our understanding of the everyday processes that influence the health and well-being of women, highlighting the possible oppressive attributes of neighborhood communities.
Answer: Yes, gender does matter in the relationship between transnational, social, and neighborhood ties and smoking among Latino immigrants. The study conducted on Latino immigrants found that the number of past-year return visits to the country-of-origin was positively associated with current smoker status. However, gender moderated the association between economic remittances, friend support, and smoking. Specifically, remittance behavior had a protective association with smoking, and this association was particularly pronounced for Latino immigrant women. Friendship support lowered the odds of smoking among men, but not women. These results highlight the gendered patterns of the associations between social ties, transnational ties, and health risk behaviors among Latino immigrants (PUBMED:25121808). |
Instruction: Does elastic resistance affect finger pinch discrimination?
Abstracts:
abstract_id: PUBMED:24218906
Does elastic resistance affect finger pinch discrimination? Objective: The sensitivity of pinch movement discrimination between the thumb and index finger was assessed with and without elastic resistance.
Background: Researchers have examined the effect of elastic resistance on control of single upper-limb movements; however, no one has explored how elastic resistance affects proprioceptive acuity when using two digits simultaneously in a coordinated movement.
Method: For this study, 16 right-handed, healthy young adults undertook an active finger pinch movement discrimination test for the right and left hands, with and without elastic resistance. We manipulated pinch movement distance by varying the size of the object that created the physical stop to end the pinch action.
Results: Adding elastic resistance from a spring to the thumb-index finger pinch task did not affect accuracy of pinch discrimination measured as either the just noticeable difference, F(1, 15) = 1.78, p = .20, or area under the curve, F(1, 15) = 0.07, p = .80.
Conclusion: Having elastic resistance to generate lever return in pincers, tweezers, and surgical equipment or in virtual instruments is unlikely to affect pinch movement discrimination.
Application: Elastic resistance did not affect finger pinch discrimination in the present study, suggesting that return tension on equipment lever arms has a practical but not perceptual function. An active finger pinch movement discrimination task, with or without elastic resistance, could be used for hand proprioceptive training and as a screening tool to identify those with aptitude or decrements in fine finger movement control.
abstract_id: PUBMED:28479225
Biomechanics and Pinch Force of the Index Finger Under Simulated Proximal Interphalangeal Arthrodesis. Purpose: To analyze the effect of simulated proximal interphalangeal (PIP) joint arthrodesis on distal interphalangeal (DIP) joint free flexion-extension (FE) and maximal voluntary pinch forces.
Methods: Five healthy subjects were tested with the PIP joint unconstrained and constrained to selected angles to produce (1) free FE movements of the DIP joint at 2 selected angles of the metacarpophalangeal joint, and (2) maximal voluntary tip (thumb and index finger) and chuck (thumb, index, and middle fingers) pinch forces. Kinematic data from a motion analysis system, pinch force data from a mechanical pinch meter, and electromyography (EMG) data recorded from 2 flexor and extensor muscles of the index finger were collected during free FE movements of the DIP joint and pinch tests for distinct PIP joint constraint angles.
Results: The EMG root mean square (RMS) values of the flexor digitorum profundus (FDP) and extensor digitorum (ED) did not change during free FE of the DIP joint. The extension angle of the range of motion of the DIP joint changed during free FE. It increased as the PIP constraint angle increased. The EMG RMS value of FDP and ED showed maximum values when the PIP joint was unconstrained and constrained at 0° to 20° of flexion during tip and chuck pinch. Neither the index finger metacarpophalangeal and DIP joint positions nor pinch force measurements differed with imposed PIP joint arthrodesis.
Conclusions: The PIP joint arthrodesis angle affects DIP joint extension. A minimal overall impact from simulated PIP arthrodesis in muscle activity and pinch force of the index finger was observed. The EMG RMS values of the FDP and ED revealed that a PIP arthrodesis at 0° to 20° of flexion leads to a more natural finger posture during tip and chuck pinch.
Clinical Relevance: This study provided a quantitative comparison of free FE motion of the DIP joint, as well as FDP and ED forces during pinch, under simulated index finger PIP arthrodesis angles.
abstract_id: PUBMED:36494256
Isolated A1 pulley release surgery for trigger finger leads to significant increase in tip-to-tip pinch strength. Background: Even in the first application of patients with early complaints of trigger finger, pinch strength of the hand may be affected. Therefore, it is difficult to assess the change of strength as a result of treatment in this problem. In this study, we aimed to evaluate the change of strength taking into account both measured and expected pinch strengths before and after A1 pulley release surgery.
Methods: Thirty fingers (9 thumbs, 12 middle, 8 ring and 1 index fingers) of 26 patients (17 women, 9 men) who underwent A1 pulley release were included into this study. The mean age of the patients was 53 (16-71). Tip-to-tip finger pinch strengths were measured pre-operatively and at 3 months postoperatively. The expected strengths were calculated using the values obtained from the healthy side and taking into account the dominance effect. In the analysis, pre-operative and postoperative measured strength/expected strength ratios were compared.
Results: The mean of measured pinch strength/expected pinch strength ratio was 0.91 ± 0.3 pre-operatively and 1.14 ± 0.3 postoperatively (p < 0.05).
Conclusion: With the calculation method used in this study, it was found that there was a significant increase in the tip-to-tip pinch strength after surgical A1 pulley release for the trigger finger.
Level Of Evidence: III (Retrospective cohort study).
abstract_id: PUBMED:25596633
Digit mechanics in relation to endpoint compliance during precision pinch. This study investigates the mechanics of the thumb and index finger in relation to compliant endpoint forces during precision pinch. The objective was to gain insight into how individuals modulate motor output at the digit endpoints and joints according to compliance-related sensory feedback across the digits. Thirteen able-bodied subjects performed precision pinch upon elastic resistance bands of a customized apparatus instrumented with six degree-of-freedom load-cells. Compliance levels were discretely adjusted according to the number of bands connected. Subjects were provided visual feedback to control the rate of force application. Fifteen repetitions of low-to-moderate force (<20N) pinches were analyzed at each of five compliance levels, during which force and motion data were collected. Joint angles and moments normalized by pinch force magnitude were computed. Second-order polynomials were used to characterize joint mechanics as a function of compliance. The joint degrees-of-freedom (DOFs) at the finger showed greater dependence on compliance for angular position while the thumb joint DOFs demonstrated greater dependence for normalized joint moment. The digits also adjusted coordination of their endpoint forces according to compliance. Overall, the finger may be altering its position to increase load to the joints of the thumb with changing compliance. These findings describe naturally emergent changes in digit mechanics for compliant precision pinch, which involves motor execution in response to endpoint sensory feedback. Identifying and understanding these motor patterns may provide theoretical basis for restoring and rehabilitating sensorimotor pathologies of the hand.
abstract_id: PUBMED:27269518
Estimation of conditions evoking fracture in finger bones under pinch loading based on finite element analysis. A finger finite element (FE) model was created from CT images of a Japanese male in order to obtain a shape-biofidelic model. Material properties and articulation characteristics of the model were taken from the literature. To predict bone fracture and realistically represent the fracture pattern under various loading conditions, the ESI-Wilkins-Kamoulakos rupture model in PAM-CRASH (ESI Group S.A., Paris, France) was utilized in this study with parameter values of the rupture model determined by compression testing and simulation of porcine fibula. A finger pinch simulation was then conducted to validate the finger FE model. The force-displacement curve and fracture load from the pinch simulation was compared to the result of finger pinch test using cadavers. Simulation results are coincident with the test result, indicating that the finger FE model can be used in an analysis of finger bone fracture during pinch accident. With this model, several pinch simulations were conducted with different pinching object's stiffness and pinching energy. Conditions for evoking finger bone fracture under pinch loading were then estimated based on these results. This study offers a novel method to predict possible hazards of manufactured goods during the design process, thus finger injury due to pinch loading can be avoided.
abstract_id: PUBMED:24443624
Quantifying Digit Force Vector Coordination during Precision Pinch. A methodology was established to investigate the contact mechanics of the thumb and the index finger at the digit-object interface during precision pinch. Two force/torque transducers were incorporated into an apparatus designed to overcome the thickness of each transducer and provide a flexible pinch span for digit placement and force application. To demonstrate the utility of the device, five subjects completed a pinch task with the pulps of their thumb and index finger. Inter-digit force vector coordination was quantified by examining the 1) force vector component magnitudes, 2) resultant force vector magnitudes, 3) coordination angle - the angle formed by the resultant vectors of each digit, 4) direction angles - the angle formed by each vector and the coordinate axes, and 5) center of pressure locations. It was shown that the resultant force magnitude of the index finger exceeded that of the thumb by 0.8 ± 0.3 N and that the coordination angle between the digit resultant force vectors was 160.2 ± 4.6°. The experimental apparatus and analysis methods provide a valuable tool for the quantitative examination of biomechanics and motor control during dexterous manipulation.
abstract_id: PUBMED:15179851
Manipulabilities of the index finger and thumb in three tip-pinch postures. Tip-pinch, in which the tips of the index finger and thumb pick up and hold a very fine object, plays an important role in the function of the hand. The objective of this study was to investigate how human subjects affect manipulabilities of the tips of the index finger and thumb within the flexion/extension plane of the finger in three different tip-pinch postures. The index finger and thumb of twenty male subjects, were modeled as linkages, based on measurement results obtained using two three-dimensional position measurement devices. The manipulabilities of the index finger and thumb were investigated in three tip-pinch postures, using three criteria indicating the form and posture of the manipulability ellipse of the linkage model. There were no significant differences (p > 0.05, ANOVA) in each criterion of each digit across the subjects, except for two criteria of the thumb. The manipulabilities of the index finger and thumb were separately similar across all subjects in tip-pinch postures. It was found that the manipulability for the cooperation of the index finger and thumb of all the subjects in tip-pinch depended on the posture of the index finger, but not on the posture of the thumb. In two-dimensional tip-pinch, it was possible that the index finger worked actively while the thumb worked passively to support the manipulation of the index finger.
abstract_id: PUBMED:1766801
Human pinch-force discrimination. Pinch-sustaining tasks such as holding a pencil, fork, or key require the exertion of different levels of force. There is little information concerning normal subjects' ability to discriminate differences in their pinching force, so the purpose of this study was to evaluate the ability of 24 normal young women to discriminate differences in their self-generated isometric tip and lateral pinching force. Resistance forces of 10, 25, 50, and 75% of known normal maximum pinching force were selected as standards. Subjects were presented a series of paired resistance settings of which the first resistance in each pair was the standard and the second resistance a comparator of some greater amount. This procedure of paired comparisons was continued until subjects' threshold of discrimination between two pinching forces was established. The results indicated that subjects' pinch-force discrimination at the standard of 50% of reported maximum pinching force was significantly better for the tip condition than for the lateral condition. This study has described an instrumentation and the methodology for assessing individuals' ability to discriminate differences in their pinching force at submaximal levels.
abstract_id: PUBMED:37056194
The Iatrogenic Injury Potential of Self-adherent Elastic Bandages in Finger Injuries. Background: The use of a self-adherent, elastic bandage is a practical way to dress finger injuries. Multiple reports describe iatrogenic injuries from elastic bandages, ranging from skin necrosis to finger gangrene, necessitating amputations. This study investigated whether elastic bandages can compromise digital perfusion by occluding arterial blood flow in healthy volunteers and evaluated the utility of pulse oximetry as a monitoring tool for digital perfusion. A technique for safe bandage application is proposed.
Methods: A commercially available elastic bandage was wrapped around the index finger of 20 healthy volunteers at varying degrees of stretch. Digital perfusion measurements were carried out using photoelectric pulse transduction, laser Doppler flowmetry, and pulse oximetry. Intracompartmental pressure measurements were recorded using a separate in vitro experimental model.
Results: Elastic bandages applied at maximum stretch did not change digital brachial index or pulse oximetry values, suggesting arterial blood flow was preserved distal to the bandage. Intracompartmental pressure measurements at maximum stretch remained below the systolic digital pressure. In contrast, superficial dermal perfusion fell to 32% of normal as measured by laser Doppler flow, at 100% bandage stretch.
Conclusion: This study suggests a risk for iatrogenic injury when using elastic bandages for finger dressings. While arterial inflow was never compromised, pressures were high enough to occlude superficial venous outflow, which may begin at 20% bandage stretch. Pulse oximetry failed to detect changes distal to applied dressings, and we do not recommend it to detect digital vascular compromise in this setting.
abstract_id: PUBMED:17678670
Strength training improves the tri-digit finger-pinch force control of older adults. Objective: To investigate the effect of unilateral upper-limb strength training on the finger-pinch force control of older men.
Design: Pretest and post-test 6-week intervention study.
Setting: Exercise science research laboratory.
Participants: Eleven neurologically fit older men (age range, 70-80y).
Intervention: The strength training group (n=7) trained twice a week for 6 weeks, performing dumbbell bicep curls, wrist flexions, and wrists extensions, while the control group subjects (n=4) maintained their normal activities.
Main Outcome Measures: Changes in force variability, targeting error, peak power frequency, proportional power, sample entropy, digit force sharing, and coupling relations were assessed during a series of finger-pinch tasks. These tasks involved maintaining a constant or sinusoidal force output at 20% and 40% of each subject's maximum voluntary contraction. All participants performed the finger-pinch tasks with both the preferred and nonpreferred limbs.
Results: Analysis of covariance for between-group change scores indicated that the strength training group (trained limb) experienced significantly greater reductions in finger-pinch force variability and targeting error, as well as significantly greater increases in finger-pinch force, sample entropy, bicep curl, and wrist flexion strength than did the control group.
Conclusions: A nonspecific upper-limb strength-training program may improve the finger-pinch force control of older men.
Answer: No, elastic resistance does not affect finger pinch discrimination. A study assessed the sensitivity of pinch movement discrimination between the thumb and index finger with and without elastic resistance and found that adding elastic resistance from a spring to the thumb-index finger pinch task did not affect the accuracy of pinch discrimination. This was measured as either the just noticeable difference or area under the curve, with statistical analysis showing no significant difference (F(1, 15) = 1.78, p = .20 for just noticeable difference and F(1, 15) = 0.07, p = .80 for area under the curve) (PUBMED:24218906). |
Instruction: Does compliance to patient safety tasks improve and sustain when radiotherapy treatment processes are standardized?
Abstracts:
abstract_id: PUBMED:24958638
Does compliance to patient safety tasks improve and sustain when radiotherapy treatment processes are standardized? Purpose: To realize safe radiotherapy treatment, processes must be stabilized. Standard operating procedures (SOP's) were expected to stabilize the treatment process and perceived task importance would increase sustainability in compliance. This paper presents the effects on compliance to safety related tasks of a process redesign based on lean principles.
Method: Compliance to patient safety tasks was measured by video recording of actual radiation treatment, before (T0), directly after (T1) and 1.5 years after (T2) a process redesign. Additionally, technologists were surveyed on perceived task importance and reported incidents were collected for three half-year periods between 2007 and 2009.
Results: Compliance to four out of eleven tasks increased at T1, of which improvements on three sustained (T2). Perceived importance of tasks strongly correlated (0.82) to compliance rates at T2. The two tasks, perceived as least important, presented low base-line compliance, improved (T1), but relapsed at T2. The reported near misses (patient-level not reached) on accelerators increased (P < 0.001) from 144 (2007) to 535 (2009), while the reported misses (patient-level reached) remained constant.
Conclusions: Compliance to specific tasks increased after introducing SOP's and improvements sustained after 1.5 years, indicating increased stability. Perceived importance of tasks correlated positively to compliance and sustainability. Raising the perception of task importance is thus crucial to increase compliance. The redesign resulted in increased willingness to report incidents, creating opportunities for patient safety improvement in radiotherapy treatment.
abstract_id: PUBMED:20410047
Compliance to technical guidelines for radiotherapy treatment in relation to patient safety. Objective: To determine the compliance of radiation technologists to technical guidelines in daily practice for radiotherapy treatment and whether there are differences in compliance across organizational units.
Design: On the basis of consensus, radiation technologists constructed a flowchart describing the work procedure of the irradiation of patients with breast cancer. Using video recordings, technologists in two units were observed to determine whether treatment was conducted in accordance with the flowchart.
Setting: Data have been collected on one linear accelerator at the MAASTRO clinic, a radiotherapy clinic in the Netherlands.
Participants: Fifty-six treatments for breast cancer were analyzed in two treatment units.
Main Outcome Measure: Percentage compliance to the most important issues for patient safety.
Results: An overall compliance of 59% (range: 2-100%) was shown on the 18 most important tasks for patient safety. Between the two units, the compliance varied from 21% to 81%. Tasks considered important by independent assessment had higher levels of compliance.
Conclusions: Video-taped observation proved to be an effective tool for determining compliance in daily practice. A large variation in practice within and across units was detected by the video observations suggesting a need for standard operating procedures to improve the safety of radiotherapy.
abstract_id: PUBMED:22531872
Patient participation to patient safety in radiotherapy: a reality to be developed Background: Patient safety during radiotherapy has become a central priority for public policy further to the various accidents arisen at Epinal, Toulouse and Grenoble for the most symbolic. In this context, patients' involvement in the management of their own safety can be a way to improve the quality of care in general.
Objective And Method: This study was carried out in the radiotherapy department of the Georges-Pompidou European Hospital and aimed at analyzing the role of patients in the management of patient safety. Interviews have been conducted with patients and with professionals in order to understand if patients could have a role in the safety of their treatment and to describe the possible forms of participation.
Results: The results describe two main forms of patient participation. On one hand, active participation refers to preventive and corrective actions carried out spontaneously by patients. On the other hand, compliance consists of adherence to the recommended or prescribed behavior.
Conclusion: Patient participation is a reality which remains almost invisible for professionals and which needs to be encouraged insofar as it is a means to improve healthcare safety by a cooperative risk management. However, it must be a possibility offered to the patients and not an obligation, a source of additional stress.
abstract_id: PUBMED:30381715
Efforts toward Patient Safety in External Radiotherapy. In the past decade, several reports on patient safety in radiotherapy have been published. The radiotherapy process is recognized as complex because its sub-processes are performed through interaction within a multidisciplinary team. As a result, these sub-processes present many opportunities for human failures such as communication errors and equipment operation errors. This tutorial paper focuses on non-technical issues related to patient safety in external radiotherapy.
abstract_id: PUBMED:38128248
Australian radiographer roles in the emergency department; evidence of regulatory compliance to improve patient safety - A narrative review. Objectives: Using a narrative approach, this paper aims to determine the extent of Australian radiographers' regulatory compliance to improve patient safety when performing appendicular X-ray and non-contrast brain computed tomography (CT) in the Emergency Department (ED).
Key Findings: A narrative review explored relevant literature and key regulatory policy. Ten documents were identified, and three main themes were developed relating to radiographer roles in X-ray request justification, dose optimisation and preliminary image evaluation (PIE). Radiographers were equally aware of justification and optimisation before and after the introduction of a Medical Code of Practice. The collective PIE accuracy of radiographers remained unaffected by changes in mode of PIE delivery and regulatory factors but varied based on the anatomical region.
Conclusion: While current Australian regulations mandate radiographer request justification, dose optimisation and PIE, the degree of compliance by Australian radiographers remains uncertain. Current literature provides evidence that radiographers can improve patient care and safety through justification, optimisation, and PIE delivery. Change in workplace practice, supported by key stakeholders including radiologists, is essential to integrate radiographers' functions into routine ED clinical practice. Further research is required to audit radiographers' regulatory compliance to improve patient safety.
Implications For Practice: Patient safety in ED can be improved with timely and accurate diagnosis provided by radiographers. Radiographers have a professional obligation to adhere to the capabilities and standards for safe medical radiation practice defined by Australian regulations. Therefore, radiographers must justify the X-ray request, optimise the radiation dose where appropriate and communicate urgent or unexpected findings to the referrer.
abstract_id: PUBMED:17367043
The roles of safety and compliance in determining effectiveness of topical therapy for psoriasis. Topical therapies are the mainstays of treatment for most patients with psoriasis because they relieve symptoms and reduce the size and severity of lesions. The effectiveness of a therapeutic intervention is a function of drug efficacy (determined by randomized clinical trial results) and patient safety and compliance. Alterations in any parameter can have a substantial influence on clinical outcomes. However, topical agents can be associated with unwanted and potentially toxic side effects that make physicians reluctant to prescribe them, and patients intentionally discontinue treatment with these topical agents. To maximize effectiveness and improve patient safety, physicians may prescribe medications in combination, sequential, or rotational therapeutic regimens. This treatment strategy has the potential to improve the overall efficacy and safety of topical therapy; however, the effectiveness of this method may be compromised because the complexity of the therapeutic regimen may decrease patient compliance. Newer topical therapies that have a convenient once-daily dosing schedule are needed and will have important implications for patient compliance.
abstract_id: PUBMED:32245711
A systematic review of effectiveness of interventions applicable to radiotherapy that are administered to improve patient comfort, increase patient compliance, and reduce patient distress or anxiety. Objectives: The aim of this review was to search existing literature to identify comfort interventions that can be used to assist an adult patient to undergo complex radiotherapy requiring positional stability for periods greater than 10 min. The objectives of this review were to; 1) identify comfort interventions used for clinical procedures that involve sustained inactivity similar to radiotherapy; 2) define characteristics of comfort interventions for future practice; and 3) determine the effectiveness of identified comfort interventions. The Preferred Reporting Items for Systematic Reviews and meta-analyses statement and the Template-for-Intervention-Description-and Replication guide were used.
Key Findings: The literature search was performed using PICO criteria with five databases (AMED, CINAHL EMBASE, MEDLINE, PsycINFO) identifying 5269 titles. After screening, 46 randomised controlled trials met the inclusion criteria. Thirteen interventions were reported and were grouped into four categories: Audio-visual, Psychological, Physical, and Other interventions (education/information and aromatherapy). The majority of aromatherapy, one audio-visual and one educational intervention were judged to be clinically significant for improving patient comfort based on anxiety outcome measures (effect size ≥ 0.4, mean change is greater than the Minimal-Important-Difference and low-risk-of-bias). Medium to large effect sizes were reported in many interventions where differences did not exceed the Minimal-Important-Difference for the measure. These interventions were deemed worthy of further investigation.
Conclusion: Several interventions were identified that may improve comfort during radiotherapy assisting patients to sustain and endure the same position over time. This is crucial for the continual growth of complex radiotherapy requiring a need for comfort to ensure stability for targeted treatment.
Implications For Practice: Further investigation of comfort interventions is warranted, including tailoring interventions to patient choice and determining if multiple interventions can be used concurrently to improve effectiveness.
abstract_id: PUBMED:36147088
Surgical safety checklist compliance: The clinical audit. Introduction: The surgical safety checklist consists of three components: sign-in, performed before the induction of anesthesia; time-out, performed before skin incision; and sign-out, performed immediately after skin closure or before the patient leaves the operating theatre. This study aims to assess compliance with the World Health Organization (WHO) Surgical Safety Checklist (SSC) and explore the barriers facing in properly implementing the surgical safety checklist in operation theatres of a tertiary care hospital.
Methodology: The observational clinical audit was conducted in Surgical Unit I, Benazir Bhutto Hospital, Rawalpindi, Pakistan. Compliance with the surgical safety checklist was observed before and after the educational intervention. After completion of the clinical audit, operating theatre staff were asked about the barriers to compliance with the surgical safety checklist using an interview sheet. Means and standard deviations were calculated for quantitative variables, whereas frequencies and percentages were calculated for categorical variables, using SPSS version 25.0.
Results: Compliance with all the steps of the surgical safety checklist was improved after an educational intervention, with the highest improvement in compliance (66.7%) observed with the Sign-out step "Count of sponges and needles & instruments complete?" Moreover, filling of the patient board and documentation of procedure in the patient file were also improved. Lack of awareness and training to follow the surgical safety checklist was the commonest barrier to compliance with the surgical safety checklist.
Conclusion: Implementing the surgical safety checklist will not only upgrade the patient safety measures but also integrate teamwork skills and improve the local departmental culture.
abstract_id: PUBMED:30908139
Monitoring and Developing a Volunteer Patient Navigation Intervention to Improve Mammography Compliance in a Safety Net Hospital. Purpose: Although mammography screening is crucial for cancer detection, screening rates have been declining, particularly in patients of low socioeconomic status and minorities. We sought to evaluate and improve the compliance rates at our safety net hospital through a prospective randomized controlled trial of a volunteer-run patient navigation intervention.
Methods: Baseline 90-day institutional mammography compliance rates were evaluated for patients who received a physician order for screening mammograms over a 1-month period. This analysis aided in the creation of a prospective randomized controlled trial of a volunteer-run patient navigation intervention to improve compliance, with 49 total participants. The primary outcome was 14-day mammography compliance rates. Secondary analysis examined the efficacy of the intervention with respect to patient demographics, prior mammography compliance, family history of cancer, beliefs on mammography, and past medical history.
Results: Analysis of baseline institutional compliance revealed a 47.87% compliance rate, with the majority of compliance occurring within 14 days of order placement. The patient navigation intervention significantly improved compliance by 34% (42% in the control group, 76% in the intervention group). Additional findings included significantly improved compliance in patients who believed they had a low susceptibility to cancer, those who understood the benefits of mammography and early diagnosis, those who had a prior mammogram, those who were employed, and those with a family history of cancer.
Conclusion: A system to monitor compliance and intervene using patient navigation significantly improved mammography compliance of patients in a safety net hospital. The relatively straightforward design of this volunteer-based intervention makes it affordable, easily replicable, and perhaps beneficial at other institutions.
abstract_id: PUBMED:37744414
When Illegitimate Tasks Threaten Patient Safety Culture: A Cross-Sectional Survey in a Tertiary Hospital. Objectives: The current study investigates the prevalence of illegitimate tasks in a hospital setting and their association with patient safety culture outcomes, which has not been previously investigated. Methods: We conducted a cross-sectional survey in a tertiary referral hospital. Patient safety culture outcomes were measured using the Hospital Survey on Patient Safety Culture questionnaire; the primary outcome measures were a low safety rating for the respondent's unit and whether the respondent had completed one or more safety event reports in the last 12 months. Analyses were adjusted for hospital department and staff member characteristics relating to work and health. Results: A total of 2,276 respondents answered the survey (participation rate: 35.0%). Overall, 26.2% of respondents perceived illegitimate tasks to occur frequently, 8.1% reported a low level of safety in their unit, and 60.3% reported having completed one or more safety event reports. In multivariable analyses, perception of a higher frequency of illegitimate tasks was associated with a higher risk of reporting a low safety rating and with a higher chance of having completed event reports. Conclusion: The prevalence of perceived illegitimate tasks was rather high. A programme aiming to reduce illegitimate tasks could provide support for a causal effect of these tasks on safety culture outcomes.
Answer: Yes, compliance to patient safety tasks does improve and sustain when radiotherapy treatment processes are standardized. The introduction of standard operating procedures (SOPs) based on lean principles led to increased compliance to specific safety-related tasks, and these improvements were sustained 1.5 years after the process redesign. The study found that compliance to four out of eleven tasks increased immediately after the introduction of SOPs, and improvements on three of these tasks were sustained over time. Additionally, the perceived importance of tasks was found to strongly correlate with compliance rates, suggesting that raising the perception of task importance is crucial for increasing and sustaining compliance. The redesign also resulted in an increased willingness to report incidents, which creates opportunities for further improvement in patient safety in radiotherapy treatment (PUBMED:24958638).
Moreover, another study showed that compliance varied significantly across organizational units, with an overall compliance of 59% to the most important tasks for patient safety. This variation suggested the need for standard operating procedures to improve the safety of radiotherapy (PUBMED:20410047).
These findings are consistent with the broader literature on patient safety and compliance in healthcare settings, where standardization of processes and the implementation of checklists and guidelines have been shown to improve compliance and patient safety outcomes across various medical disciplines (PUBMED:36147088, PUBMED:30908139). |
Instruction: Clinical features and radiological findings in large vessel vasculitis: are Takayasu arteritis and giant cell arteritis 2 different diseases or a single entity?
Abstracts:
abstract_id: PUBMED:25399386
Clinical features and radiological findings in large vessel vasculitis: are Takayasu arteritis and giant cell arteritis 2 different diseases or a single entity? Objective: Takayasu arteritis (TAK) and giant cell arteritis (GCA) are 2 major variants of large vessel vasculitis (LVV). The frequent involvement of large vessels in GCA has raised the possibility that TAK and GCA should be regarded as 1 disease. By detailed phenotyping of a single-center cohort, we aimed to define the differences between TAK and GCA.
Methods: Forty-five patients (23 TAK, 22 GCA) were identified. Baseline characteristics, clinical symptoms, laboratory data, enhanced computed tomography/magnetic resonance imaging, treatments, and clinical courses were retrospectively assessed with descriptive statistics. In addition, latent class analysis of the 45 patients was performed to explore phenotypic differences.
Results: Patients with GCA had more frequent headache (p < 0.01), higher C-reactive protein levels (p = 0.01), and higher erythrocyte sedimentation rates (p = 0.03) than did patients with TAK at diagnosis. With the exception of subdiaphragmatic lesions, the distributions of vessel lesions were not different between TAK and GCA. However, focusing on subclavian and carotid arteries, long tapered-type stenotic lesions were more frequent in GCA than in TAK (p < 0.01). The proportion of patients without relapse was higher in GCA (60%) than in TAK (22%, p = 0.01). Latent class analysis also divided patients with LVV into 2 separate groups consistent with TAK and GCA.
Conclusion: The differences observed in clinical symptoms, inflammatory markers, radiological findings, and clinical courses suggested that TAK and GCA were 2 different diseases. Latent class analysis supported these results. The shape of stenotic lesions in the subclavian and carotid arteries is a useful discriminator between TAK and GCA.
abstract_id: PUBMED:33480825
Clinical features of large vessel vasculitis (LVV): Elderly-onset versus young-onset. Objectives: We compared large vessel vasculitis (LVV) clinical features between age groups.
Methods: We retrospectively examined clinical features and therapies in 41 LVV patients at our hospital from January 2010 to March 2020. We compared two patient groups, elderly (≥50 years) and young (<50 years).
Results: Of all patients, 29 were elderly and 12 were young. In the younger group, upper extremity symptoms (p <.05), bruits (p <.01), and cardiovascular complications (p <.01) were more common. Of the elderly group, 7 (24%) met classification criteria for giant cell arteritis while none of the younger group met these criteria; however, 10 (83%) of the younger group and 3 (10%) of the elderly group met the ACR classification criteria for Takayasu arteritis (p <.01). In the elderly group, 16 patients (66%) met no criteria (p <.01). There were no significant differences in laboratory findings but imaging showed a significantly higher incidence of head and neck artery lesions in the younger group (p <.05). The younger group was more likely to receive additional tocilizumab (p <.01) and cardiovascular complications were more likely to occur in younger patients (p < .01).
Conclusion: LVV clinical features differed between elderly- and young-age-onset groups.
abstract_id: PUBMED:26272121
Use of positron emission tomography (PET) for the diagnosis of large-vessel vasculitis. The term vasculitis encompasses a heterogeneous group of diseases that share the presence of inflammatory infiltrates in the vascular wall. The diagnosis of large-vessel vasculitis is often a challenge because the presenting clinical features are nonspecific in many cases and they are often shared by different types of autoimmune and inflammatory diseases including other systemic vasculitides. Moreover, the pathogenesis of large-vessel vasculitis is not fully understood. Nevertheless, the advent of new imaging techniques has constituted a major breakthrough to establish an early diagnosis and a promising tool to monitor the follow-up of patients with largevessel vasculitis. This is the case of the molecular imaging with the combination of positron emission tomography with computed tomography (PET/CT) using different radiotracers, especially the (18)F-fluordeoxyglucose ((18)F-FDG). In this review we have focused on the contribution of (18)F-FDG PET in the diagnosis of large-vessel vasculitis.
abstract_id: PUBMED:36983569
Clinical Features and Outcomes of Japanese Patients with Giant Cell Arteritis: A Comparison with Takayasu Arteritis. Background: Giant cell arteritis (GCA) and Takayasu arteritis (TA) are distinct types of large-vessel vasculitis; however, the clinical features of the diseases have some similarities. Limited data are available regarding Japanese patients with GCA and TA. The present study aimed to compare the clinical features and outcomes of Japanese patients with GCA and TA and the effects of large vessel involvement (LVI).
Methods: We performed a retrospective cohort study of the patients with GCA (n = 15) and TA (n = 30) who visited our department from April 2012 to June 2022. Signs and symptoms attributed to the disease, treatment, clinical outcomes, and mortality were recorded using a standardized database.
Results: The median age of onset was significantly higher in the GCA group: 24 years (range, 16-72 years) in the TA group versus 77 years (range, 57-89 years) in the GCA group (p < 0.001). There were no significant differences in survival rates or the cumulative rates of cardiovascular events between the GCA and TA groups. However, relapse-free survival rates were significantly higher in patients with GCA than in patients with TA. Seven of the 15 patients with GCA had large vessel involvement, which did not affect the survival rates. Prednisolone (PSL) doses were significantly decreased after induction therapy in both groups, and the rates of achieving steroid tapering (PSL < 5.0 mg/day) were significantly higher in patients with GCA compared with those in patients with TA.
Conclusions: Our study demonstrated no significant difference in the survival rates of Japanese patients with GCA and TA. The relapse-free survival rates were significantly higher in the GCA group than in the TA group. LVI may not be associated with disease relapse or survival rate in Japanese patients with GCA.
abstract_id: PUBMED:33511455
Patient Reported Outcomes in Large Vessel Vasculitides. Purpose Of Review: The goal of this paper is to review current and future uses of patient-reported outcomes in large vessel vasculitis. The large vessel vasculitides comprise Giant Cell Arteritis and Takayasu arteritis; both are types of systemic vasculitis which affect the larger blood vessels. Patient-reported outcomes (PROs) capture the impact of these diseases on health-related quality of life.
Recent Findings: Generic PROs such as the SF-36 are currently used to compare HRQOL of people with GCA and TAK within clinical trials and observational studies and to make comparisons with the general population and HRQoL in other diseases. The development of a disease-specific PRO for GCA is currently underway. Beyond clinical trials, there is much interest in the use of PROs within routine clinical care, particularly E-PROs for remote use. Further work will be needed to complete the development of disease-specific PROs for people with large vessel vasculitis and to establish feasibility, acceptability, and utility of E-PROs.
abstract_id: PUBMED:23891316
Nonurgent aortic disease: clinical-radiological diagnosis of aortitis. Aortitis is a pathological term designating inflammation of the aortic wall, regardless of its cause. The clinical presentation of aortitis is nonspecific and variable. Symptoms include abdominal pain, fever, and weight loss; acute phase reactants may also be elevated. Aortitis can be caused by a wide spectrum of entities, including from infectious processes to autoimmune diseases (Takayasu arteritis and giant cell arteritis are among the most common of these causing aortitis), and the prognosis and treatment of these entities vary widely. Various imaging techniques can be used to evaluate the lumen and wall of the aorta (such as multidetector computed tomography, magnetic resonance imaging, angiography, or PET-CT). This review focuses on the most common diseases that cause aortitis and on the clinical and radiological findings that are most useful for diagnosing and treating this condition appropriately.
abstract_id: PUBMED:36286971
Non-infectious diseases of the aorta and large arteries This article describes the various forms of inflammatory lesions of the aorta and large arteries, including chronic periaortitis, and considers the relevant diagnostic methods. Large-vessel vasculitides represent the most common entities; however, there is also an association with other rheumatological or inflammatory diseases, as well as drug-induced or paraneoplastic entities. Instrumental imaging modalities play an important role in the diagnosis.
abstract_id: PUBMED:29624864
Takayasu arteritis and giant cell arteritis: are they a spectrum of the same disease? Giant cell arteritis (GCA) and Takayasu arteritis (TAK) are forms of large-vessel vasculitis that affect the aorta and its branches. There is ongoing debate about whether they are within a spectrum of the same disease or different diseases. Shared commonalities include clinical features, evidence of systemic inflammation, granulomatous inflammation on biopsy, the role of T-helper (Th)-1 and Th17 cells in the pathogenesis, and abnormalities of the aorta and its branches on imaging. However, there are also several differences in geographic distribution, genetics, inflammatory cells and responses to treatment. This review highlights the similarities and differences in the epidemiology, pathogenesis, clinical manifestations, imaging findings and treatment responses in these conditions. Current data support the view that they are two distinct conditions despite the numerous similarities.
abstract_id: PUBMED:23636438
Inflammatory aortic diseases Inflammatory diseases of the aorta fall under the umbrella term "aortitis" and encompass a broad spectrum of autoimmune and infectious disorders. Giant cell arteritis and Takayasu arteritis represent the most common causative primary large vessel vasculitides. Isolated aortitis is a further important entity, whilst IgG4-associated systemic disease today represents yet another disease entity belonging to this group. Clinical presentation is highly variable, ranging from asymptomatic incidental findings to acute aortic syndrome due to rupture. Diagnostic imaging techniques form the cornerstone of the diagnostic workup. These techniques should also be used in screening for diseases with frequent aortic involvement. The treatment approach depends on the underlying disease and on the extent of aortic damage secondary to vascular inflammation. Against this background, differentiation between autoimmune and infectious aortitis is therefore necessary due to the contrasting treatment strategies. The present article is intended to provide the internist with an overview of this heterogeneous disease entity.
abstract_id: PUBMED:33067687
Morbidity and Mortality of Large-Vessel Vasculitides. Purpose Of Review: Giant cell arteritis (GCA) and Takayasu arteritis (TAK) comprise the primary systemic large-vessel vasculitides. In these conditions, arterial stenosis, occlusion, aneurysm, and dissection can lead to severe disease-related consequences. This review focuses on disease-related manifestations of GCA and TAK, emphasizing the impact of these findings on long-term morbidity and mortality.
Recent Findings: Vision loss remains a main contributor to morbidity in GCA. Non-invasive imaging allows for recognition of aortic disease in GCA, but monitoring and intervention guidelines require further development. TAK represents a severe disease of early onset with a high risk of morbidity due to aortic, pulmonary, cardiovascular, and neurologic involvement. Overall, patients with GCA have mortality rates similar to comparators, but mortality in TAK is notably higher than in the general population. A multidisciplinary approach of expert subspecialists is required to assist with the complex care of patients with GCA and TAK in order to appropriately surveil, identify, and address the multi-faceted co-morbidities of these diseases.
Answer: Takayasu arteritis (TAK) and giant cell arteritis (GCA) are both forms of large vessel vasculitis (LVV), but there is ongoing debate about whether they are two distinct diseases or represent a spectrum of the same disease. Clinical features, radiological findings, and outcomes have been studied to understand the similarities and differences between these two conditions.
Patients with GCA often present with more frequent headaches, higher C-reactive protein levels, and higher erythrocyte sedimentation rates than those with TAK at diagnosis. Radiologically, long tapered-type stenotic lesions in the subclavian and carotid arteries are more common in GCA than in TAK. Additionally, the proportion of patients without relapse is higher in GCA compared to TAK, suggesting differences in the clinical course of the diseases (PUBMED:25399386).
Comparisons between elderly-onset and young-onset LVV have shown that clinical features differ between the two age groups. In the younger group, upper extremity symptoms, bruits, and cardiovascular complications were more common. The younger group was also more likely to meet the classification criteria for Takayasu arteritis, whereas a portion of the elderly group met criteria for giant cell arteritis (PUBMED:33480825).
Studies have also indicated that there are no significant differences in survival rates or cumulative rates of cardiovascular events between GCA and TAK groups. However, relapse-free survival rates were significantly higher in patients with GCA than in those with TAK. Large vessel involvement in GCA did not affect survival rates (PUBMED:36983569).
Despite these differences, both conditions share commonalities such as evidence of systemic inflammation, granulomatous inflammation on biopsy, and abnormalities of the aorta and its branches on imaging. They also have similarities in the role of T-helper cells in pathogenesis. However, differences in geographic distribution, genetics, inflammatory cells, and responses to treatment support the view that they are two distinct conditions (PUBMED:29624864).
In conclusion, while TAK and GCA share some clinical features and radiological findings as LVVs, the differences observed in symptoms, inflammatory markers, radiological findings, clinical courses, and responses to treatment suggest that they are two different diseases rather than a single entity (PUBMED:25399386; PUBMED:29624864). |
Instruction: Calcific degeneration of the aortic valve in old age: is the development of flow obstruction predictable?
Abstracts:
abstract_id: PUBMED:8772627
Calcific degeneration of the aortic valve in old age: is the development of flow obstruction predictable? Objective: Aortic valve calcification and stenosis become increasingly common with advancing age. This work aimed at assessing whether a time-dependent reduction of aortic valve area is detectable in an unselected elderly population and whether the rate of reduction can be predicted from clinical or biochemical characteristics.
Design: A population-based prospective echocardiographic follow-up study.
Setting: A university hospital.
Subjects: In 1990, randomly selected persons born in 1904, 1909 and 1914 (total n = 501) underwent a Doppler echocardiographic study of aortic valve and biochemical tests of glucose, lipid and calcium metabolism. In 1993, echocardiography was repeated in 333 survivors of the original cohorts. These individuals constitute the present study population.
Main Outcome Measures: Three-year changes in the aortic valve area and velocity ratio (peak outflow tract velocity/peak aortic jet velocity) determined by Doppler echocardiography.
Results: Aortic valve area decreased from a mean of 1.95 cm2 (95% confidence interval of mean, 1.88-2.03 cm2) to 1.78 cm2 (1.71-1.85 cm2) within 3 years (P < 0.001). Concomitantly, the velocity ratio decreased from 0.75 (0.73-0.77) to 0.68 (0.67-0.70) (P < 0.001). The changes in aortic valve area and velocity ratio were unrelated to age, sex, presence of hypertension, coronary artery disease or diabetes, and to all assessed biochemical characteristics. A weak positive statistical association was found between the decrease in aortic valve area and the body mass index at entry (r = 0.16, P < 0.01).
Conclusions: A time-dependent reduction of the aortic valve flow orifice can be demonstrated in persons representing the general elderly population. The deterioration of aortic valve function within a span of 3 years is neither clinically nor biochemically predictable. A longer follow-up may be necessary to identify the risk factors of aortic valve stenosis in old age.
abstract_id: PUBMED:33922670
Degeneration of Aortic Valves in a Bioreactor System with Pulsatile Flow. Calcific aortic valve disease is the most common valvular heart disease in industrialized countries. Pulsatile pressure, shear and bending stress promote the initiation and progression of aortic valve degeneration. The aim of this work is to establish an ex vivo model to study the processes involved. Ovine aortic roots bearing aortic valve leaflets were cultivated in an elaborated bioreactor system with pulsatile flow, physiological temperature, and controlled pressure and pH values. Standard and pro-degenerative treatment were studied regarding the impact on morphology, calcification, and gene expression. In particular, differentiation, matrix remodeling, and degeneration were also compared to a static cultivation model. Bioreactor cultivation led to shrinking and thickening of the valve leaflets compared to native leaflets, while gross morphology and the presence of valvular interstitial cells were preserved. Degenerative conditions induced considerable leaflet calcification. In comparison to static cultivation, collagen gene expression was stable under bioreactor cultivation, whereas expression of hypoxia-related markers was increased. Osteopontin gene expression was differentially altered compared to protein expression, indicating an enhanced protein turnover. The present ex vivo model is an adequate and effective system to analyze aortic valve degeneration under controlled physiological conditions without the need for additional growth factors.
abstract_id: PUBMED:34687365
Impact of calcific aortic valve disease on valve mechanics. The aortic valve is a highly dynamic structure characterized by a transvalvular flow that is unsteady, pulsatile, and characterized by episodes of forward and reverse flow patterns. Calcific aortic valve disease (CAVD), which results in compromised valve function and increased pressure overload on the ventricle, potentially leading to heart failure if untreated, is the most predominant valve disease. CAVD is a multi-factorial disease involving molecular, tissue and mechanical interactions. In this review, we aim at recapitulating the biomechanical loads on the aortic valve, summarizing the current and most recent research in the field in vitro, in silico, and in vivo, and offering a clinical perspective on current strategies adopted to mitigate or approach CAVD.
abstract_id: PUBMED:33158204
Current Evidence and Future Perspectives on Pharmacological Treatment of Calcific Aortic Valve Stenosis. Calcific aortic valve stenosis (CAVS), the most common heart valve disease, is characterized by the slow progressive fibro-calcific remodeling of the valve leaflets, leading to progressive obstruction to the blood flow. CAVS is an increasing health care burden and the development of an effective medical treatment is a major medical need. To date, no effective pharmacological therapies have proven to halt or delay its progression to the severe symptomatic stage and aortic valve replacement represents the only available option to improve clinical outcomes and to increase survival. In the present report, the current knowledge and latest advances in the medical management of patients with CAVS are summarized, placing emphasis on lipid-lowering agents, vasoactive drugs, and anti-calcific treatments. In addition, novel potential therapeutic targets recently identified and currently under investigation are reported.
abstract_id: PUBMED:34513952
Decreased Glucagon-Like Peptide-1 Is Associated With Calcific Aortic Valve Disease: GLP-1 Suppresses the Calcification of Aortic Valve Interstitial Cells. Objectives: This study explores the concentration and role of glucagon-like peptide-1 (GLP-1) in calcific aortic valve disease (CAVD). Background: Calcific aortic valve disease is a chronic disease presenting with aortic valve degeneration and mineralization. We hypothesized that the level of GLP-1 is associated with CAVD and that it participates in the calcification of aortic valve interstitial cells (AVICs). Methods: We compared the concentration of GLP-1 between 11 calcific and 12 normal aortic valve tissues by immunohistochemical (IHC) analysis. ELISA was used to measure GLP-1 in serum of the Control (n = 197) and CAVD groups (n = 200). The effect of GLP-1 on the calcification of AVICs and the regulation of calcific gene expression were also characterized. Results: The GLP-1 concentration in the calcific aortic valves was 39% less than that in the control non-calcified aortic valves. Its concentration in serum was 19.3% lower in CAVD patients. Multivariable regression analysis demonstrated that GLP-1 level was independently associated with CAVD risk. In vitro, GLP-1 antagonized AVIC calcification in a dose- and time-dependent manner and it down-regulated RUNX2, MSX2, BMP2, and BMP4 expression but up-regulated SOX9 expression. Conclusions: A reduction in GLP-1 was associated with CAVD, and GLP-1 participated in the mineralization of AVICs by regulating specific calcific genes. GLP-1 warrants consideration as a novel treatment target for CAVD.
abstract_id: PUBMED:33050133
Ultrastructural Pathology of Atherosclerosis, Calcific Aortic Valve Disease, and Bioprosthetic Heart Valve Degeneration: Commonalities and Differences. Atherosclerosis, calcific aortic valve disease (CAVD), and bioprosthetic heart valve degeneration (alternatively termed structural valve deterioration, SVD) represent three diseases affecting distinct components of the circulatory system and their substitutes, yet sharing multiple risk factors and commonly leading to extraskeletal calcification. Whereas the histopathology of the mentioned disorders is well-described, their ultrastructural pathology is largely obscure due to the lack of appropriate investigation techniques. Employing an original method for sample preparation and the electron microscopy visualisation of calcified cardiovascular tissues, here we revisited the ultrastructural features of lipid retention, macrophage infiltration, intraplaque/intraleaflet haemorrhage, and calcification which are common or unique for the indicated types of cardiovascular disease. Atherosclerotic plaques were notable for the massive accumulation of lipids in the extracellular matrix (ECM), abundant macrophage content, and pronounced neovascularisation associated with blood leakage and calcium deposition. In contrast, CAVD and SVD generally did not require vasculo- or angiogenesis to occur, instead relying on fatigue-induced ECM degradation and the concurrent migration of immune cells. Unlike native tissues, bioprosthetic heart valves contained numerous specialised macrophages and were not capable of regeneration, which underscores ECM integrity as a pivotal factor for SVD prevention. While atherosclerosis, CAVD, and SVD show similar pathogenesis patterns, these disorders demonstrate considerable ultrastructural differences.
abstract_id: PUBMED:26386747
Osseous and chondromatous metaplasia in calcific aortic valve stenosis. Background: Aortic valve replacement for calcific aortic valve stenosis is one of the more common cardiac surgical procedures. However, the underlying pathophysiology of calcific aortic valve stenosis is poorly understood. We therefore investigated the histologic findings of aortic valves excised for calcific aortic valve stenosis and correlated these findings with their associated clinical features.
Results And Methods: We performed a retrospective analysis on 6685 native aortic valves excised for calcific stenosis and 312 prosthetic tissue aortic valves with calcific degeneration at a single institution between 1987 and 2013. Patient demographics were correlated with valvular histologic features diagnosed on formalin-fixed, decalcified, and paraffin embedded hematoxylin and eosin stained sections. Of the analyzed aortic valves, 5200 (77.8%) were tricuspid, 1473 (22%) were bicuspid, 11 (0.2%) were unicuspid, and 1 was quadricuspid. The overall prevalence of osseous and/or chondromatous metaplasia was 15.6%. Compared to tricuspid valves, bicuspid valves had a higher prevalence of metaplasia (30.1% vs. 11.5%) and had an earlier mean age of excision (60.2 vs. 75.1 years old). In addition, the frequency of osseous metaplasia and/or chondromatous metaplasia increased with age at time of excision of bicuspid aortic valves, while tricuspid aortic valves showed the same incidence regardless of patient age. Males had a higher prevalence of metaplasia in both bicuspid (33.5% vs. 22.3%) and tricuspid (13.8% vs. 8.6%) aortic valves compared to females. Osseous metaplasia and/or chondromatous metaplasia was also more common in patients with bicuspid aortic valves and concurrent chronic kidney disease or atherosclerosis than in those without (33.6% vs. 28.3%). No osseous or chondromatous metaplasia was observed within the cusps of any of the prosthetic tissue valves.
Conclusions: Osseous and chondromatous metaplasia are common findings in native aortic valves but do not occur in prosthetic tissue aortic valves. Bicuspid valves appear to have an inherent proclivity for metaplasia, as demonstrated by their higher rates of osseous metaplasia and/or chondromatous metaplasia both overall and at earlier age compared to tricuspid and prosthetic tissue aortic valves. This predilection could be due to aberrant hemodynamic forces on bicuspid valves, as well as intrinsic genetic changes associated with bicuspid valve formation. Aortic valve interstitial cells may play a central role in this process. Calcification of prosthetic tissue valves is most likely a primarily dystrophic phenomenon.
abstract_id: PUBMED:32642204
Are the dynamic changes of the aortic root determinant for thrombosis or leaflet degeneration after transcatheter aortic valve replacement? The role of the aortic root is to convert the accumulated elastic energy during systole into kinetic flow energy during diastole, in order to improve blood distribution in the coronary tree. Therefore, the sinuses of Valsalva of the aortic root are not predisposed to accept any bulky material, especially in the case of uncrushed solid calcific agglomerates. This concept underlines the differences between surgical aortic valve replacement, in which decalcification is a main part of the procedure, and transcatheter aortic valve replacement (TAVR). Cyclic changes in the shape and size of the aortic root influence blood flow in the Valsalva sinuses. Recent papers have investigated the dynamic changes of the aortic root and whether those differences might be correlated with clinical effects, and this paper aims to summarize part of this flourishing literature. Post-TAVR aortic root remodeling, dynamic flow and TAVR complications might have a fluid-dynamic background, and clinically observed side effects such as thrombosis or leaflet degeneration should be further investigated in basic research. Also, aortic root changes could impact valve type and size selection, affecting the decision of over-sizing or under-sizing in order to prevent valve embolization or coronary ostia obstruction.
abstract_id: PUBMED:34239905
NOTCH Signaling in Aortic Valve Development and Calcific Aortic Valve Disease. NOTCH intercellular signaling mediates the communications between adjacent cells involved in multiple biological processes essential for tissue morphogenesis and homeostasis. The NOTCH1 mutations are the first identified human genetic variants that cause congenital bicuspid aortic valve (BAV) and calcific aortic valve disease (CAVD). Genetic variants affecting other genes in the NOTCH signaling pathway may also contribute to the development of BAV and the pathogenesis of CAVD. While CAVD occurs commonly in the elderly population with tri-leaflet aortic valve, patients with BAV have a high risk of developing CAVD at a young age. This observation indicates an important role of NOTCH signaling in the postnatal homeostasis of the aortic valve, in addition to its prenatal functions during aortic valve development. Over the last decade, animal studies, especially with the mouse models, have revealed detailed information in the developmental etiology of congenital aortic valve defects. In this review, we will discuss the molecular and cellular aspects of aortic valve development and examine the embryonic pathogenesis of BAV. We will focus our discussions on the NOTCH signaling during the endocardial-to-mesenchymal transformation (EMT) and the post-EMT remodeling of the aortic valve. We will further examine the involvement of the NOTCH mutations in the postnatal development of CAVD. We will emphasize the deleterious impact of the embryonic valve defects on the homeostatic mechanisms of the adult aortic valve for the purpose of identifying the potential therapeutic targets for disease intervention.
abstract_id: PUBMED:2611797
The mechanism of the obstruction in calcific aortic stenosis with bicuspid valve: a reason for failure of balloon aortic valvuloplasty in the elderly. Percutaneous aortic balloon valvuloplasty failed to relieve the obstruction in 2 elderly patients with calcific aortic stenosis. Intraoperative and pathologic examination showed bicuspid aortic valve with symmetric cusps, straight and fibrotic cusp edges and fractured calcific nodules of the aortic valve. Failure of balloon valvuloplasty in these patients, in spite of successful fractures of calcific nodules, was due to an inability to influence the spring-like action of the thickened edges of the valve, which represents a specific additional cause of obstruction in the calcific bicuspid aortic valve of the elderly.
Answer: The development of flow obstruction due to calcific degeneration of the aortic valve in old age does not appear to be predictable based on clinical or biochemical characteristics. A population-based prospective echocardiographic follow-up study found that a time-dependent reduction of the aortic valve flow orifice can be demonstrated in persons representing the general elderly population. However, the deterioration of aortic valve function within a span of 3 years was neither clinically nor biochemically predictable, with changes in aortic valve area and velocity ratio being unrelated to age, sex, presence of hypertension, coronary artery disease or diabetes, and to all assessed biochemical characteristics. A weak positive statistical association was found between the decrease in aortic valve area and the body mass index at entry, but overall, a longer follow-up may be necessary to identify the risk factors of aortic valve stenosis in old age (PUBMED:8772627).
Therefore, based on the evidence from this study, it is not possible to predict the development of flow obstruction due to calcific degeneration of the aortic valve in the elderly based on the parameters investigated. |
Instruction: Is testing for inherited coagulation inhibitor deficiencies in young stroke patients worthwhile?
Abstracts:
abstract_id: PUBMED:11154808
Is testing for inherited coagulation inhibitor deficiencies in young stroke patients worthwhile? Objective: To test the hypothesis that in patients under age 50, with a first, arterial, ischemic cerebral infarct, whose family history and medical history do not suggest an inherited coagulation inhibitor deficiency, the yield of a laboratory search for these disorders will be low.
Materials And Methods: In 55 such patients under age 50, we systematically searched for deficiencies of protein C, protein S, and antithrombin III.
Results: No abnormalities of protein C or antithrombin III were found. One patient had a deficiency of protein S, which was most likely acquired rather than inherited.
Conclusions: In patients who lack clinical features of a prothrombotic state, the yield of testing for protein C, S and AT III deficiency is likely to be low.
abstract_id: PUBMED:31271700
Elevated thrombin generation in patients with congenital disorder of glycosylation and combined coagulation factor deficiencies. Background: Congenital disorders of glycosylation are rare inherited diseases affecting many different proteins. The lack of glycosylation notably affects the hemostatic system and leads to deficiencies of both procoagulant and anticoagulant factors.
Objective: To assess the hemostatic balance in patients with multiple coagulation disorders by using a thrombin generation assay.
Method: We performed conventional coagulation assays and a thrombin generation assay on samples from patients with congenital disorder of glycosylation. The thrombin generation assay was performed before and after activation of the protein C system by the addition of soluble thrombomodulin.
Results: A total of 35 patients were included: 71% and 57% had low antithrombin and factor XI levels, respectively. Protein C and protein S levels were abnormally low in 29% and 26% of the patients, respectively, whereas only 11% displayed low factor IX levels. Under baseline conditions, the thrombin generation assay revealed a significantly higher endogenous thrombin potential and thrombin peak in patients, relative to controls. After spiking with thrombomodulin, we observed impaired involvement of the protein C system. Hence, 54% of patients displayed a hypercoagulant phenotype in vitro. All the patients with a history of stroke-like episodes or thrombosis displayed this hypercoagulant phenotype.
Conclusion: A thrombin generation assay revealed a hypercoagulant in vitro phenotype under baseline condition; this was accentuated by impaired involvement of the protein C system. This procoagulant phenotype may thus reflect the risk of severe vascular complications. Further research will have to determine whether the thrombin generation assay is predictive of vascular events.
abstract_id: PUBMED:20854623
A review of hereditary and acquired coagulation disorders in the aetiology of ischaemic stroke. The diagnostic workup in patients with ischaemic stroke often includes testing for prothrombotic conditions. However, the clinical relevance of coagulation abnormalities in ischaemic stroke is uncertain. Therefore, we reviewed what is presently known about the association between inherited and acquired coagulation disorders and ischaemic stroke, with a special emphasis on the methodological aspects. Good-quality data in this field are scarce, and most studies fall short on epidemiological criteria for causal inference. While inherited coagulation disorders are recognised risk factors for venous thrombosis, there is no substantial evidence for an association with arterial ischaemic stroke. Possible exceptions are the prothrombin G20210A mutation in adults and protein C deficiency in children. There is proof of an association between the antiphospholipid syndrome and ischaemic stroke, but the clinical significance of isolated mildly elevated antiphospholipid antibody titres is unclear. Evidence also suggests significant associations of increased homocysteine and fibrinogen concentrations with ischaemic stroke, but whether these associations are causal is still debated. Data on other acquired coagulation abnormalities are insufficient to allow conclusions regarding causality. For most coagulation disorders, a causal relation with ischaemic stroke has not been definitely established. Hence, at present, there is no valid indication for testing all patients with ischaemic stroke for these conditions. Large prospective population-based studies allowing the evaluation of interactive and subgroup effects are required to appreciate the role of coagulation disorders in the pathophysiology of arterial ischaemic stroke and to guide the management of individual patients.
abstract_id: PUBMED:26585761
Thrombophilia testing in young patients with ischemic stroke. Introduction: The possible significance of thrombophilia in ischemic stroke remains controversial. We aimed to study inherited and acquired thrombophilias as risk factors for ischemic stroke, transient ischemic attack (TIA) and amaurosis fugax in young patients.
Materials And Methods: We included patients aged 18 to 50 years with ischemic stroke, TIA or amaurosis fugax referred to thrombophilia investigation at Aarhus University Hospital, Denmark from 1 January 2004 to 31 December 2012 (N=685). Clinical information was obtained from the Danish Stroke Registry and medical records. Thrombophilia investigation results were obtained from the laboratory information system. Absolute thrombophilia prevalences and associated odds ratios (OR) with 95% confidence intervals (95% CI) were reported for ischemic stroke (N=377) and TIA or amaurosis fugax (N=308). Thrombophilia prevalences for the general population were obtained from published data.
Results: No strong associations were found between thrombophilia and ischemic stroke, but patients with persistent presence of lupus anticoagulant (3%) had an OR of 2.66 (95% CI 0.84-9.15) for ischemic stroke. A significantly higher risk of TIA/amaurosis fugax was found for factor V Leiden heterozygotes (12%) (OR: 1.99 (95% CI 1.14-3.28)). No other inherited or acquired thrombophilia was associated with ischemic stroke, TIA or amaurosis fugax.
Conclusions: In young patients, thrombophilia did not confer an increased risk of ischemic stroke. Only factor V Leiden heterozygote patients had an increased risk of TIA/amaurosis fugax, and persistent presence of lupus anticoagulant was likely associated with ischemic stroke. We suggest restricting testing to investigation of the persistent presence of lupus anticoagulant.
abstract_id: PUBMED:11721097
Natural coagulation inhibitor proteins in young patients with cerebral ischemia. Disturbances of the coagulation and fibrinolytic pathways were studied in 53 young patients with cerebral ischemia. Upon admission, 26 of 53 patients had an abnormality in at least one of antithrombin III, protein C or protein S activity, or in the activated protein C (APC) ratio. Three months after the first examination, the majority of the previously detected abnormalities had returned to normal values, and the most frequent alterations were a decrease in protein S activity (3 patients) and APC resistance (3 patients). Conditions resulting in impaired fibrinolysis were frequently detected upon admission. Elevation of plasminogen activator inhibitor-1, lipoprotein (a), and alpha-2-antiplasmin was present in 23, 10, and 4 cases, respectively. It is concluded that abnormalities of the coagulation and fibrinolytic systems are prevalent in the acute phase of cerebral ischemia; however, the results may be significantly influenced by the disease process or the acute phase effect.
abstract_id: PUBMED:24963788
Analysis of the influence of dabigatran on coagulation factors and inhibitors. Introduction: Dabigatran is an orally administered direct thrombin inhibitor used to prevent stroke associated with atrial fibrillation. Although dabigatran causes prolonged activated partial thromboplastin time (APTT), the effect of dabigatran on each coagulation factor and coagulation factor inhibitor remains to be investigated. Our aim was to analyze the influence of dabigatran on coagulation factors and coagulation factor inhibitors.
Methods: We administered dabigatran to 40 patients. In 26 of these 40, we analyzed the activity of several coagulation factors and their inhibitors. We used Fisher's exact test to determine statistical significance.
Results: The activities of many coagulation factors changed during dabigatran therapy. Factor II levels decreased in all patients showing prolongation of prothrombin time (PT) and APTT. The antifactor VIII inhibitor was positive in the majority of patients with prolonged PT and APTT, while activities of protein C, protein S, and the antifactor IX inhibitor were not associated with PT and APTT prolongation.
Conclusion: Dabigatran affects the activities of many coagulation factors, including factors II, V, VIII, and IX, as well as the antifactor VIII inhibitor.
abstract_id: PUBMED:33469905
Specific Point-of-Care Testing of Coagulation in Patients Treated with Dabigatran. Background And Purpose: Accurate and rapid assessment of coagulation status is necessary to guide thrombolysis or reversal of anticoagulation in stroke patients, but commercially available point-of-care (POC) assays are not suited for coagulation testing in patients treated with direct oral anticoagulants (DOACs). We aimed to evaluate the direct thrombin monitoring (DTM) test card by Helena Laboratories (Texas, United States) for anti-IIa-specific POC coagulation testing, hypothesizing that its POC-ecarin clotting time (POC-ECT) accurately reflects dabigatran plasma concentrations.
Methods: A prospective single-center diagnostic study (ClinicalTrials.gov-identifier: NCT02825394) was conducted enrolling patients receiving a first dose of dabigatran and patients already on dabigatran treatment. Blood samples were collected before drug intake and 0.5, 1, 2, 8, and 12 hours after intake. POC-ECT was performed using whole blood (WB), citrated blood (CB), and citrated plasma (CP). Dabigatran plasma concentrations were determined by mass spectrometry.
Results: In total, 240 blood samples from 40 patients contained 0 to 275 ng/mL of dabigatran. POC-ECT with WB/CB/CP ranged from 20 to 186/184/316 seconds. Pearson's correlation coefficient showed a strong correlation between dabigatran concentrations and POC-ECT with WB/CB/CP (R2 = 0.78/0.90/0.92). Dabigatran concentrations >30 and >50 ng/mL (thresholds for thrombolysis, surgery, and reversal therapy according to clinical guidelines) were detected by POC-ECT with WB/CB/CP (>36/35/45 and >43/45/59 seconds) with 95/97/97 and 96/98/97% sensitivity, and 81/87/94 and 74/60/91% specificity.
Conclusion: This first study evaluating DOAC-specific POC coagulation testing revealed an excellent correlation of POC-ECT with actual dabigatran concentrations. Detecting clinically relevant dabigatran levels with high sensitivity/specificity, the DTM assay represents a suitable diagnostic tool in acute stroke, hemorrhage, and urgent surgery.
abstract_id: PUBMED:17433903
Inherited thrombophilia in arterial disease: a selective review. Thrombophilia may be defined as an acquired or congenital abnormality of hemostasis predisposing to thrombosis. Because arterial thrombosis is usually linked with classical risk factors such as smoking, hypertension, dyslipidemia, or diabetes, a thrombophilia workup is usually not considered in case of arterial thrombosis. The most accepted inherited hemostatic abnormalities associated with venous thromboembolism are factor V Leiden (FVL) and factor II (FII) G20210A mutations, as well as deficiencies in antithrombin (AT), protein C (PC), and protein S (PS). This review focuses on the link between these abnormalities and arterial thrombosis. Overall, the association between these genetic disorders and the three main arterial complications (myocardial infarction [MI], ischemic stroke [IS], and peripheral arterial disease [PAD]) is modest. Routine screening for these disorders is therefore not warranted in most cases of arterial complications. However, when such an arterial event occurs in a young person, inherited abnormalities of hemostasis seem to play a role, particularly when associated with smoking or oral contraceptive use. These abnormalities also seem to play a role in the risk of premature occlusion after revascularization procedures. Therefore thrombophilia tests may be informative in a very restricted population with arterial events. Anticoagulants rather than antiplatelet therapy may be preferable for these patients, although this remains to be proven.
abstract_id: PUBMED:27899756
Coagulation Testing in Acute Ischemic Stroke Patients Taking Non-Vitamin K Antagonist Oral Anticoagulants. Background And Purpose: In patients who present with acute ischemic stroke while on treatment with non-vitamin K antagonist oral anticoagulants (NOACs), coagulation testing is necessary to confirm the eligibility for thrombolytic therapy. We evaluated the current use of coagulation testing in routine clinical practice in patients who were on NOAC treatment at the time of acute ischemic stroke.
Methods: Prospective multicenter observational RASUNOA registry (Registry of Acute Stroke Under New Oral Anticoagulants; February 2012-2015). Results of locally performed nonspecific (international normalized ratio, activated partial thromboplastin time, and thrombin time) and specific (antifactor Xa tests, hemoclot assay) coagulation tests were documented. The implications of test results for thrombolysis decision-making were explored.
Results: In the 290 patients enrolled, nonspecific coagulation tests were performed in ≥95% and specific coagulation tests in 26.9% of patients. Normal values of activated partial thromboplastin time and international normalized ratio did not reliably rule out peak drug levels at the time of the diagnostic tests (false-negative rates 11%-44% [95% confidence interval 1%-69%]). Twelve percent of patients apparently failed to take the prescribed NOAC prior to the acute event. Only 5.7% (9/159) of patients in the 4.5-hour time window received thrombolysis, and NOAC treatment was documented as main reason for not administering thrombolysis in 52.7% (79/150) of patients.
Conclusions: NOAC treatment currently poses a significant barrier to thrombolysis in ischemic stroke. Because nonspecific coagulation test results within normal range have a high false-negative rate for detection of relevant drug concentrations, rapid drug-specific tests for thrombolysis decision-making should be established.
Clinical Trial Registration: URL: http://www.clinicaltrials.gov. Unique identifier: NCT01850797.
abstract_id: PUBMED:20662756
Screening for coagulation disorders in patients with ischemic stroke. The role of coagulation disorders in the pathogenesis of (recurrent) ischemic stroke is uncertain. Therefore, the clinical utility of screening patients with ischemic stroke for these conditions and the therapeutic implications of a detected coagulation disorder in a patient who experienced ischemic stroke are uncertain. We reviewed the currently available data on the relationship between various inherited and acquired coagulation abnormalities (factor V Leiden and prothrombin G20210A mutations, deficiencies of protein C, protein S and anti-thrombin, hyperhomocysteinemia, the antiphospholipid syndrome and increased levels of fibrinogen) and ischemic stroke. Based on the existing evidence we discuss the usefulness of screening stroke patients for prothrombotic conditions and current recommendations regarding the optimal management of ischemic stroke patients in whom a coagulation disorder is found.
Answer: Testing for inherited coagulation inhibitor deficiencies in young stroke patients may not be universally worthwhile. A study that systematically searched for deficiencies of protein C, protein S, and antithrombin III in patients under age 50 with a first arterial ischemic cerebral infarct found no abnormalities of protein C or antithrombin III and only one case of protein S deficiency, which was likely acquired rather than inherited. This suggests that in patients without clinical features of a prothrombotic state, the yield of testing for these deficiencies is likely to be low (PUBMED:11154808).
However, other studies have shown that certain thrombophilias may be associated with ischemic stroke, transient ischemic attack (TIA), and amaurosis fugax in young patients. For instance, factor V Leiden heterozygote was found to be associated with a higher risk of TIA/amaurosis fugax, and persistent presence of lupus anticoagulant was likely associated with ischemic stroke (PUBMED:26585761). Additionally, congenital disorders of glycosylation, which affect the hemostatic system, have been associated with a hypercoagulant in vitro phenotype and severe vascular complications, including stroke-like episodes or thrombosis (PUBMED:31271700).
A review of hereditary and acquired coagulation disorders in the etiology of ischemic stroke highlighted that while inherited coagulation disorders are recognized risk factors for venous thrombosis, there is no substantial evidence for an association with arterial ischemic stroke, with possible exceptions being the prothrombin G20210A mutation in adults and protein C deficiency in children (PUBMED:20854623).
In conclusion, the decision to test for inherited coagulation inhibitor deficiencies in young stroke patients should be individualized, taking into account the patient's clinical features, family history, and the presence of other risk factors. Routine screening for these conditions in all young stroke patients may not be justified based on the current evidence, but it may be considered in selected cases, especially when an arterial event occurs in a young person or in the presence of other risk factors (PUBMED:17433903). |
Instruction: Should thyroid-stimulating hormone goals be reviewed in patients with type 1 diabetes mellitus?
Abstracts:
abstract_id: PUBMED:6151903
Thyroid-stimulating immunoglobulins in insulin-dependent diabetes mellitus. Increased frequencies of thyroid diseases and thyroid microsomal antibodies have been observed in insulin-dependent diabetes mellitus. However, the exact prevalence of thyroid-stimulating immunoglobulins has not been established. In the present study these antibodies were measured by both a radioreceptor and an adenylate-cyclase stimulation assay. In forty-six patients with insulin-dependent diabetes mellitus without endogenous insulin production (C-peptide concentration less than or equal to 0.06 nmol l(-1)) the receptor assay was positive in ten and the stimulation assay in fifteen patients. The immunoglobulins of four patients inhibited the adenylate cyclase, and one of these was positive in the receptor assay. In nine patients with post-prandial C-peptide 0.07-0.19 nmol l(-1), five had adenylate-cyclase-stimulating antibodies, while none were positive in the receptor assay. Thyroid hormone and thyrotropin concentrations were not different in the forty-six patients without endogenous insulin production with thyroid-stimulating immunoglobulins compared with patients without these antibodies. Patients with thyroid-stimulating immunoglobulins required a daily median amount of 0.71 IE of insulin kg-1 compared to a median of 0.57 IE kg-1 in patients without these antibodies (P less than 0.03), despite a similar degree of diabetic regulation. The level of tri-iodothyronine was correlated with the antibody level in patients with adenylate-cyclase-stimulating antibodies. While the prognostic and possibly pathogenetic importance of these antibodies in Graves' disease has been established, their significance in insulin-dependent diabetes mellitus remains to be demonstrated.
abstract_id: PUBMED:1363083
Measurement of thyroid-stimulating immunoglobulins by incorporation of tritiated-adenine into intact FRTL-5 cells: a viable alternative to radioimmunoassay for the measurement of cAMP. Objective: To evaluate the utility of 3H-adenine incorporation into intact rat thyroid epithelial cells (FRTL-5) as an alternative to radioimmunoassay for the measurement of cAMP following stimulation of these cells by serum thyroid-stimulating immunoglobulins from patients with Graves' disease.
Design: We determined the cAMP produced by FRTL-5 cells following incubation with serum from patients with a spectrum of autoimmune thyroid and other diseases using the 3H-adenine assay.
Patients: We studied 27 patients with untreated Graves' disease, 10 with Graves' disease complicated by ophthalmopathy (all on antithyroid medication), 11 with Hashimoto's thyroiditis, five with multinodular goitre, one with thyroid carcinoma, 23 with type 1 diabetes mellitus, 19 with other autoimmune diseases and 10 controls.
Measurements: The 3H-cAMP produced in cells incubated with either bovine TSH (bTSH) or polyethylene glycol-precipitated serum immunoglobulins, was separated by sequential chromatography on Dowex and alumina columns, and counted. The thyroid-stimulating immunoglobulins index (3H-cAMP patient immunoglobulins/3H-cAMP control immunoglobulins) was calculated for each serum and considered positive if greater than 1.5 (+ve thyroid-stimulating immunoglobulins index, i.e. > 2 standard deviations above control). The thyroid-stimulating immunoglobulins index was correlated with measurement of thyrotrophin binding inhibitory immunoglobulins (TBII) by radioreceptor assay.
Results: The 3H-adenine assay has a sensitivity of 10(-11) M bTSH with maximal stimulation at 10(-9) M bTSH (30-fold). Twenty-five of 27 patients (92%) with untreated Graves' disease and four of 10 patients with Graves' disease complicated by ophthalmopathy had +ve thyroid-stimulating immunoglobulin indices. The thyroid-stimulating immunoglobulins index in patients with untreated Graves' disease correlated with their TBII assay result (r = 0.63, P < 0.001). In addition, the index was negative in patients with Hashimoto's thyroiditis, multinodular goitre, thyroid carcinoma, and type 1 diabetes mellitus. Of the patients with other autoimmune diseases only one (a patient with systemic lupus erythematosis) had a +ve thyroid-stimulating immunoglobulin index. Direct comparison of cAMP measurement by 3H-adenine incorporation and commercial radioimmunoassay showed an equal sensitivity to both bTSH and Graves' immunoglobulins. After cell preparation, results are obtained more quickly with the 3H-adenine assay than with a cAMP radioimmunoassay (5 hours compared to 2 days), and far more cheaply than by commercial radioimmunoassays.
Conclusions: Measurement of thyroid-stimulating immunoglobulins using the incorporation of 3H-adenine into cAMP in FRTL-5 cells is sensitive, reproducible, rapid and specific. These features make this assay a viable alternative to RIA for the measurement of thyroid-stimulating immunoglobulins in patients with Graves' disease.
abstract_id: PUBMED:24961827
Should thyroid-stimulating hormone goals be reviewed in patients with type 1 diabetes mellitus? Results from the Brazilian Type 1 Diabetes Study Group. Aims: To investigate if thyroid-stimulating hormone (TSH) levels are associated with any differences in glycaemic control or diabetes-related complications in individuals with Type 1 diabetes.
Methods: This observational, cross-sectional and multicentre study included patients with Type 1 diabetes for ≥ 5 years, with a recent TSH measurement and without a known previous thyroid disease. Patients were divided into three groups according to TSH levels: 0.4-2.5 mU/l; 2.5-4.4 mU/l; and ≥ 4.5 mU/l.
Results: We included 1205 individuals with a mean ± sd age of 23.8 ± 11.3 years. Seven patients had TSH levels <0.4 mU/l and were excluded from the comparison between groups. HbA1c levels, systolic and diastolic blood pressure, LDL cholesterol and disease duration were similar in all groups (P = 0.893, P = 0.548, P = 0.461, P = 0.575 and P = 0.764, respectively). The rates of diabetic retinopathy and GFR < 60 mL/min/1.73 m(2) differed between groups (P = 0.006 and P < 0.001, respectively) and were lower in those with lower TSH levels. Multivariate analysis confirmed these associations. The frequencies of retinopathy and GFR < 60 mL/min/1.73 m(2) were higher not only in patients with TSH ≥ 4.5 mU/l (odds ratio 1.878 and 2.271, respectively) but also in those with TSH levels of 2.5-4.4 mU/l (odds ratio 1.493 and 2.286, respectively), when compared with patients with TSH levels of 0.4-2.5 mU/l.
Conclusions: TSH levels of 0.4-2.5 mU/l are associated with a lower risk of diabetic retinopathy and renal failure in individuals with Type 1 diabetes, independently of glycaemic control and duration of the disease.
abstract_id: PUBMED:31627531
The prevalence and pattern of autoimmune thyroid disease in young patients with type 1 diabetes mellitus This study, whose purpose was to examine the prevalence and pattern of autoimmune disease of the thyroid gland (TG) in young patients with type 1 diabetes mellitus (DM1), involved 288 individuals with DM1 aged 5.5 to 30 years; the average duration of DM1 was 5.5 ± 4.7 years. In all the patients, thyroid ultrasonography was performed, thyroid antibodies (Abs) [thyroid peroxidase antibodies (TPO-Abs) and thyroglobulin antibodies (TG-Abs)] were determined, and thyroid function was evaluated by measuring the level of thyroid-stimulating hormone (TSH) and the free fractions of thyroid hormones. The detection rates of TPO-Abs and TG-Abs were 22.2 and 20.5%, respectively, which was substantially greater than in apparently healthy individuals matched by age and gender. The frequency of positive thyroid Abs was significantly higher in females. Age at the time of examination and the duration of DM1 were not found to have an impact on the detection rate of thyroid Abs. Ultrasound signs of autoimmune thyroid disease were revealed in 19.1% of the cases. 10.0% of the patients with DM1 were found to have some form of thyroid dysfunction, the most common of which was subclinical hypothyroidism (6.6%). A comprehensive thyroid assessment in the young patients with DM1 demonstrated the typical signs of autoimmune thyroid disease in 14.2% of the examinees. In the patients with autoimmune thyroid disease, the thyroid was significantly larger and the levels of TSH, TPO-Abs, and TG-Abs were higher than in the patients without autoimmune thyroid disease and those at risk for the latter. The proportion of females was significantly higher among the patients with concomitant autoimmune thyroid disease than among those without signs of this condition. It is concluded that the high incidence of autoimmune thyroid disease in young patients with DM1 permits the authors to recommend screening for its early detection, involving measurement of the serum level of TPO-Abs and thyroid ultrasonography in all females with newly diagnosed DM1.
abstract_id: PUBMED:25285279
Thyroid abnormalities in Egyptian children and adolescents with type 1 diabetes mellitus: A single center study from Upper Egypt. Background: The aim of this study was to detect the prevalence of thyroid abnormalities among children and adolescents with type 1 diabetes mellitus (T1DM) in Upper Egypt and its relationship with disease-related variables.
Design: Cross-sectional controlled study.
Patients And Methods: The study included 94 children and adolescents with T1DM (Group 1) attending for regular follow-up in the diabetes clinic of Assiut Children University Hospital, Assiut, Egypt were enrolled in the study and 60 healthy subjects matching in age and sex were taken as a control (Group 2). History taking, clinical examination, measurement of thyroid stimulating hormone (TSH), free thyroxine (FT4) and free triiodothyronine, anti-thyroid peroxidase (anti-TPO) and anti-thyroglobulin (anti-Tg) antibodies levels as well as HbA1c were measured.
Results: Mean TSH levels were significantly higher in Group 1 when compared to controls (P < 0.01). Six children (6.3%) were found to have subclinical hypothyroidism in Group 1 compared with two children (2.1%) in the control group (P < 0.001). Two children (2.1%) were found to have clinical hypothyroidism in Group 1 compared with none in the control group. Positive levels of anti-TPOAb and anti-TgAb were found in 9 (9.5%) and 6 (6.3%) in Group 1 compared with 2 (3.3%) and 1 (1.6%) of controls, respectively (P < 0.01). Cases with hypothyroidism were significantly older, had longer duration of DM, higher body mass index and higher HbA1c compared with those without hypothyroidism. TSH had significant positive correlations with age (r = 0.71, P < 0.001), diabetes duration (r = 0.770, P < 0.001), anti-TPO level (r = 0.678, P < 0.01) and HbA1c level (r = -0.644, P < 0.01), and a significant negative correlation with FT4 (r = -0.576, P = 0.01).
Conclusion: The present study reported a high prevalence of thyroid abnormalities in children and adolescents with type 1 diabetes in Upper Egypt. The study recommended yearly evaluation of thyroid function tests and thyroid antibodies in all children and adolescents with type 1 diabetes commencing from the onset of diabetes.
abstract_id: PUBMED:28217497
Thyroid profile and autoantibodies in Type 1 diabetes subjects: A perspective from Eastern India. Context: There has been a rise in the incidence of type 1 diabetes mellitus (T1DM) in India. The prevalence of thyroid autoantibodies and thyroid dysfunction is common in T1DM.
Aims: The aim of this study is to determine the incidence of thyroid dysfunction and thyroid autoantibodies in T1DM subjects, without any history of thyroid disease, and the prevalence of glutamic acid decarboxylase (GAD) antibody, Islet antigen-2 antibody (IA2), thyroid peroxidase (TPO), and thyroglobulin autoantibodies (Tg-AB) in T1DM subjects.
Settings And Design: This was a cross-sectional clinical-based study.
Subjects And Methods: Fifty subjects (29 males, 31 females) with T1DM and without any history of thyroid dysfunction were included in the study. All subjects were tested for GAD antibody, IA2 antibody, TPO antibody, thyroglobulin antibody, free thyroxine, and thyroid-stimulating hormone.
Statistical Analysis Used: A Chi-square/pooled Chi-square test was used to assess the trends in the prevalence of hypothyroidism. A two-tailed P < 0.05 was considered statistically significant.
Results: The mean age of the subjects was 23.50 years. 9.8% of subjects were below the age of 12 years, 27.45% were aged 12-18 years, 37.25% were aged 19-30 years, and 25.49% were above 30 years. 78% were autoantibody-positive for GAD, 30% for IA-2, 24% for TPO, and 16% for Tg-AB. A total of 6.0% of T1DM subjects had evidence of clinical hypothyroidism, but the prevalence of subclinical hypothyroidism (SCH) varied from 32% to 68.0% when we considered the different definitions of SCH advocated by different guidelines. All subjects with overt hypothyroidism had positive GAD and thyroid autoantibodies. One (2%) subject had clinical hyperthyroidism with strongly positive GAD, TPO, and Tg-AB.
Conclusions: We found a high prevalence of GAD, IA2, TPO, and Tg-AB in our T1DM subjects. A substantial proportion of our subjects had undiagnosed thyroid dysfunction with a preponderance of subclinical hypothyroidism. All T1DM subjects with overt hypothyroidism or hyperthyroidism had positive GAD and thyroid autoantibodies. The high prevalence of undiagnosed thyroid dysfunction highlights the importance of regular thyroid screening in T1DM subjects.
abstract_id: PUBMED:12060066
Predictivity of thyroid autoantibodies for the development of thyroid disorders in children and adolescents with Type 1 diabetes. Aims: To investigate the prevalence of thyroid autoantibodies and their significance for the development of thyroid disorders in children and adolescents with Type 1 diabetes.
Methods: Antibodies to thyroglobulin (anti-TG) and thyroperoxidase (anti-TPO) were measured in 216 patients (113 boys; median age 12.9 years (range 1-22 years)) with Type 1 diabetes (diabetes duration 2.5 years (0-14 years)) in a cross-sectional study. Sixteen patients with significantly elevated anti-TPO titres were followed longitudinally (6.0 years (4-13 years)) including the measurement of anti-TPO, anti-TG, T(3), T(4), thyroid-stimulating hormone (TSH) and ultrasound assessment.
Results: Twenty-two patients (10.0%) had significantly elevated titres of anti-TPO, 19 (8.7%) of anti-TG and 13 (5.9%) of both autoantibodies. Girls had more frequently elevated anti-TPO antibodies than boys (P < 0.05). Eight of 16 patients (50%) developed thyroid disorders defined by a TSH elevation (> or = 4.5 microU/ml) and/or sonographic thyroid abnormalities during a median time of 3.5 years (2-6 years) after first detection of anti-TPO positivity. They were characterized by higher levels of anti-TPO (P = 0.001) and a more frequent coexistence of anti-TG antibodies (P = 0.002) than those with no development of thyroid disorder even after an observation period of 5.5 years (5-10 years).
Conclusions: Because 50% of children with diabetes and significant titres of anti-TPO develop thyroid problems within 3-4 years, examinations of thyroid antibodies should be performed yearly. In cases of significant antibody titres, thyroid function tests and ultrasound assessment are recommended in order to minimize the risk of undiagnosed hypothyroidism in these patients.
abstract_id: PUBMED:19058589
A comparative study of thyroid hormone levels in diabetic and non-diabetic patients. Diabetic patients have a higher prevalence of thyroid disorders than the general population, which may influence diabetic management. In this study, we investigated thyroid hormone levels in uncontrolled diabetic patients. This comparative study was conducted at the Bangladesh Institute of Research and Rehabilitation in Diabetes, Endocrine and Metabolic Disorders (BIRDEM). Fifty-two diabetic patients were consecutively selected from diabetic patients attending the out-patient department of BIRDEM. Fifty control subjects were selected from non-diabetic patients who attended the out-patient department of BIRDEM for routine check-ups as advised by their attending physicians. The subjects in both groups were above 30 years of age. The concentrations of thyroid-stimulating hormone (TSH), free triiodothyronine (FT3) and free thyroxine (FT4) were evaluated using a Microparticle Enzyme Immunoassay (MEIA) procedure. Patients with type 2 diabetes had significantly lower serum FT3 levels (p < 0.001) compared to the control group. There were no significant differences in serum FT4 (p = 0.339) and TSH (p = 0.216) levels between the control and study subjects. All the diabetic patients had high fasting blood glucose levels (12.15 +/- 2.12). We conclude that FT3 levels were altered in these patients with uncontrolled diabetes.
abstract_id: PUBMED:6431066
Thyroid hormone abnormalities at diagnosis of insulin-dependent diabetes mellitus in children. Comprehensive evaluation of thyroid hormone indices was performed in 58 children with insulin-dependent diabetes mellitus (IDDM) at the time of diagnosis and prior to insulin therapy. Two patients were found to have primary hypothyroidism, with markedly elevated TSH and very low T4, free T4, T3, and reverse T3 concentrations. The remaining 56 patients had the transient alterations in thyroid hormone indices that are characteristic of "euthyroid sick" or "low T3" syndrome. Mean TSH and reverse T3 values were significantly higher and the mean T3, T4, and free T4 levels were significantly lower than those observed in the control population. Ten of the diabetic patients had elevated TSH concentrations and normal or low free T4 values; eight had normal TSH levels and low T4 and free T4 values. The remainder of the group had thyroid indices compatible with abnormal peripheral metabolism of thyroid hormones. Elevated titers of antimicrosomal antibodies were found in 16% of the children with IDDM. We conclude that abnormal peripheral metabolism and altered hypothalamic-pituitary function are responsible for the transient changes in thyroid hormone indices in patients with untreated IDDM. The most reliable indicators of concomitant primary hypothyroidism in untreated IDDM are markedly elevated TSH and low reverse T3 values.
abstract_id: PUBMED:28781255
Pharmacological approaches for correction of thyroid dysfunctions in diabetes mellitus. Thyroid diseases are closely associated with the development of type 1 and type 2 diabetes mellitus (DM), and developing effective approaches for their treatment is therefore one of the pressing problems of endocrinology. Traditionally, thyroid hormones (TH) are used to correct the function of the thyroid system. However, they carry many side effects, such as adverse effects on the cardiovascular system and the ability of TH to enhance insulin resistance and to disturb the insulin-producing function of the pancreas, thereby exacerbating diabetic pathology. Therefore, TH analogues selective for certain types of TH receptors, which lack these side effects, are being developed. Peptide and low-molecular-weight regulators of the thyroid-stimulating hormone receptor, which regulate the activity of the thyroid axis at the stage of TH synthesis and secretion in thyrocytes, are also being created. Systemic and intranasal administration of insulin, metformin therapy and drugs with antioxidant activity are effective for the treatment of thyroid pathology in type 1 and type 2 DM. In this review, the literature and the results of our own investigations on pharmacological approaches for the treatment and prevention of thyroid diseases in patients with type 1 and type 2 DM are summarized and analyzed.
Answer: Yes, thyroid stimulating hormone (TSH) goals should potentially be reviewed in patients with type 1 diabetes mellitus (T1DM). A study investigating the association of TSH levels with glycaemic control and diabetes-related complications in individuals with T1DM found that TSH levels of 0.4-2.5 mU/l are associated with a lower risk of diabetic retinopathy and renal failure, independently of glycaemic control and duration of the disease (PUBMED:24961827). This suggests that maintaining TSH levels within a specific range may have a beneficial impact on the risk of developing certain complications in T1DM patients.
Additionally, the prevalence of thyroid autoantibodies and thyroid dysfunction is common in T1DM, and the presence of thyroid autoantibodies is predictive of the development of thyroid disorders in this population (PUBMED:12060066). Given the high incidence of autoimmune thyroid disease in young patients with T1DM, it is recommended to screen for early detection, which involves the measurement of serum levels of thyroid antibodies and thyroid ultrasonography, especially in females with newly detected T1DM (PUBMED:31627531).
Furthermore, studies have shown that diabetic patients have a higher prevalence of thyroid disorders than the general population, which may influence diabetic management, and that FT3 levels are altered in patients with uncontrolled diabetes (PUBMED:19058589). Transient thyroid hormone abnormalities of the "euthyroid sick" type, with occasional unrecognized primary hypothyroidism, have also been documented at diagnosis of untreated insulin-dependent diabetes in children, underlining the need for careful monitoring of thyroid function in this group (PUBMED:6431066).
In light of these findings, it appears that TSH goals and thyroid function monitoring in patients with T1DM should be carefully considered and potentially revised to optimize patient outcomes and prevent complications associated with thyroid dysfunction in this population. |
Instruction: Is paediatric assessment of motor development of very preterm and low-birthweight children appropriate?
Abstracts:
abstract_id: PUBMED:16982490
Is paediatric assessment of motor development of very preterm and low-birthweight children appropriate? Aim: To determine whether paediatricians who, in regular clinical practice, examine very preterm and very-low-birthweight children at 5 y of age detect neurological impairments and functional motor problems in these children.
Methods: We compared a paediatric judgement, a standardized neurological examination (Touwen examination) and a screening of motor development (Denver Developmental Screening Test; DDST) with the Movement ABC in 396 5-y-old very preterm and low-birthweight children.
Results: The Movement ABC detected clinically important motor disorders in 20.5% and borderline disturbances in 22.5% of the children. Compared to the Movement ABC, the sensitivity of the paediatric judgement was 0.19, Touwen examination 0.62 and DDST 0.52; the negative predictive values were 0.61, 0.74 and 0.69, respectively.
Conclusion: Paediatric assessment of motor development in 5-y-old very preterm and low-birthweight children generally is not sensitive enough to detect functional motor problems. The Movement ABC should be added to the assessment of the motor development of very preterm and low-birthweight children at 5 y of age.
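To make the sensitivity and negative predictive value figures above concrete, the sketch below recovers them from a 2x2 screening table with the Movement ABC as the reference standard. The counts are hypothetical, chosen only so that the paediatric-judgement figures (sensitivity 0.19, NPV 0.61) come out approximately right; the paper's actual cross-tabulation is not reproduced in the abstract.

```python
def sensitivity_npv(tp, fn, tn, fp):
    """Sensitivity and negative predictive value from 2x2 screening counts."""
    sensitivity = tp / (tp + fn)  # impaired children correctly flagged
    npv = tn / (tn + fn)          # "normal" verdicts that are truly normal
    return sensitivity, npv

# Hypothetical counts for paediatric judgement vs. Movement ABC in 396 children,
# assuming ~43% (clinically important plus borderline problems) count as impaired.
sens, npv = sensitivity_npv(tp=32, fn=138, tn=216, fp=10)
print(f"sensitivity = {sens:.2f}, NPV = {npv:.2f}")  # ~0.19 and ~0.61
```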
abstract_id: PUBMED:35860683
Psychomotor development in very and extremely low-birth-weight preterm children: Could it be predicted by early motor milestones and perinatal complications? Preterm-born children are at risk of slower psychomotor development. This risk may be associated with low birth weight and other perinatal factors and morbidities. We aimed to assess psychomotor development in school-aged preterm children, and to determine whether some early motor and perinatal variables could be related to and/or predict the later motor achievements. Parents of 54 very low-birth-weight preterm, 24 extremely low-birth-weight preterm and 96 control children completed the Movement Assessment Battery for Children (MABC-2-C) checklist and were interviewed about the motor milestones of their children. Significant differences were found between the preterm and control groups in the MABC-2-C results. MABC-2-C outcomes were significantly predicted by the age of crawling, the use of steroids, mechanical ventilation and intraventricular hemorrhage (IVH). The use of screening tools may allow the rapid identification of psychomotor development delays. The presence of some perinatal risk factors and some motor milestone attainments could be related to motor development in the later childhood of preterm children.
abstract_id: PUBMED:28642071
The link between motor and cognitive development in children born preterm and/or with low birth weight: A review of current evidence. The current review focuses on evidence for a link between early motor development and later cognitive skills in children born preterm or with Low Birth Weight (LBW). Studies with term-born children consistently show such a link. Motor and cognitive impairments or delays are often seen in children born preterm or with LBW throughout childhood, and studies have established a cross-sectional association between the two. However, it is not yet clear if, and if so how, motor and cognitive skills are longitudinally interrelated in these children. Longitudinal studies in this population that included measures of motor development during the first year of life and cognitive measures at later measurement points were included. The 17 included studies generally show a link between the level and/or quality of motor development during the first year of life and later cognitive skills in children born preterm and/or with LBW. However, given the small number of studies and the possibility that early interaction between motor and cognitive skills affects this relation, more work is clearly needed.
abstract_id: PUBMED:23870751
Perceptual-motor abilities in pre-school preterm children. Background: Several studies report a high percentage of premature infants presenting perceptual motor difficulties at school age. The new version of the Movement Assessment Battery for Children allows the assessment of perceptual-motor abilities in children from the age of 3years.
Aims: To evaluate early perceptual-motor abilities in prematurely born children below the age of 4 years.
Study Design: The Movement Assessment Battery for Children 2nd edition was administered to 105 low-risk prematurely born children (<32 weeks gestation) and to a control group of 105 term-born children matched for age and sex. All children were assessed between the ages of 3 years and 3 years 11 months.
Results: 63 children (60%) had total scores above the 15th percentile, 15 (14.3%) had scores between the 5th and the 15th percentile, and 13 (12.4%) had scores below the 5th percentile. The remaining 14 children (13.3%) refused to perform or to complete the test. The difference between the preterm and control groups was significant for total scores, Manual Dexterity and Aiming and Catching scores. In the preterm group there was a correlation between age at testing, total scores and Aiming and Catching subscores. The Movement ABC-2 subscores were significantly lower in children born below 29 weeks.
Conclusion: Perceptual-motor difficulties can already be detected on an assessment performed before the age of 4 years. Prematurely born children assessed between 3 years and 3 years 3 months appeared to have more difficulties in performing the test than older children or their age-matched term-born peers. These findings support the possibility of delayed maturation in the younger age group.
abstract_id: PUBMED:25841102
Motor performance, postural stability and behaviour of non-disabled extremely preterm or extremely low birth weight children at four to five years of age. Background: Extremely preterm or extremely low birth weight (ELBW) children who are non-disabled and otherwise healthy are at risk of neurodevelopmental impairments. Further understanding of these impairments is needed before commencement of formal education to optimise participation levels at a critical time point for these children.
Aims: To explore motor co-ordination, postural stability, limb strength and behaviour of non-disabled four to five year old children with a history of extreme prematurity or ELBW.
Study Design: Prospective-descriptive-cohort-study.
Subjects: 50 children born at less than 28 weeks gestation or who had a birth weight less than 1000g with minimal/mild motor impairments and no significant neurological/cognitive impairments.
Outcome Measures: Movement Assessment Battery for Children second-edition (MABC-2), single leg stance test (SLS), lateral reach test, standing long jump test and Child Behaviour Checklist for preschool children (CBCL).
Results: The mean percentile rank of the extremely preterm or ELBW sample on MABC-2 was 31% (SD 23%). SLS right (mean ± SD; 4.6 ± 2.5s) and lateral reach to the right (10.0 ± 3.9 cm) were slightly stronger than SLS left (4.4 ± 3.3s) and lateral reach left (9.9 ± 3.5 cm). The average for standing long jump was 71.6 cm (SD 21.0 cm). All participants were classified as 'normal' on CBCL syndrome scale scores, internalizing and externalizing syndrome T scores and total problem T score.
Conclusions: This sample of non-disabled extremely preterm or ELBW children performed in the lower range of normal. These children continue to be at risk of impairments, therefore, ongoing monitoring and tailored intervention may optimise development.
abstract_id: PUBMED:29724577
Spontaneous movements of preterm infants are associated with the outcome of gross motor development. Aims: We conducted a longitudinal cohort study to analyze the relationship between the outcome of gross motor development in preterm infants and factors that might affect their development.
Methods: Preterm infants with a birth weight of <1500 g were recruited. We measured spontaneous antigravity limbs movements by 3D motion capture system at 3 months corrected age. Gross motor developmental outcomes at 6 and 12 months corrected age were evaluated using the Alberta Infant Motor Scale (AIMS). Statistical analysis was carried out by canonical correlation analysis.
Results: Eighteen preterm infants were included. In the analysis at 6 months corrected age, spontaneous movement had a major effect on the Prone and Sitting subscales of the AIMS. In the analysis at 12 months corrected age, spontaneous movement had a major effect on the Sitting and Standing subscales of the AIMS.
Conclusions: In preterm infants, better antigravity spontaneous movements at 3 months corrected age were significantly correlated with better gross motor development at 6 or 12 months corrected age.
abstract_id: PUBMED:35921693
Cognitive and motor development in preterm children from 6 to 36 months of age: Trajectories, risk factors and predictability. Background: Although numerous studies have examined the development of preterm children born very low birth weight (VLBW, birth body weight < 1500 g), variations of developmental progress within individuals have rarely been explored. The aim of this research was to examine the cognitive and motor trajectories in preterm children born VLBW at early ages and to assess the risk factors and predictability of these trajectories.
Method: Five hundred and eighty preterm infants born VLBW from three cohort studies (2003 to 2014) were prospectively assessed their mental and motor development using the Bayley Scales at 6, 12, 24, and 36 months, and cognitive, motor and behavioral outcomes using the Movement Assessment Battery for Children and the Child Behavior Checklist for Ages 1.5-5 at 4 years of age.
Results: Preterm children born VLBW manifested three cognitive patterns (stably normal [64.0 %], deteriorating [31.4 %], and persistently delayed [4.6 %]) and four motor patterns (above average [6.3 %], stably normal [60.0 %], deteriorating [28.5 %], and persistently delayed [5.2 %]) during 6-36 months. Low birth body weight, stage III-IV retinopathy of prematurity and low parental socio-economic status were associated with the deteriorating patterns; prolonged hospitalization and major brain damage were additionally associated with the persistently delayed patterns. Furthermore, the cognitive and motor deteriorating pattern was each predictive of cognitive and motor impairment at 4 years of age; whereas, the persistently delayed patterns were predictive of multiple impairments.
Conclusion And Implications: Preterm children born VLBW display heterogeneous trajectories in early cognitive and motor development that predict subsequent developmental and behavioral outcomes.
abstract_id: PUBMED:37553581
Early neurological and motor function in infants born moderate to late preterm or small for gestational age at term: a prospective cohort study. Background: There are inconsistent findings regarding neurological and motor development in infants born moderate to late preterm and infants born small for gestational age at term. The primary aim of this study was to compare neurological and motor function between preterm, term SGA and term AGA infants aged three to seven months corrected age using several common assessment tools. The secondary aim was to investigate their motor function at two years.
Methods: In this prospective cohort study, we included 43 infants born moderate to late preterm with gestational age 32-36 + 6 weeks, 39 infants born small for gestational age (SGA) at term with a birthweight ≤ 10th centile for gestational age, and 170 infants born at term with appropriate weight for gestational age (AGA). Neurological and motor function were assessed once in infancy between three to seven months corrected age by using four standardised assessment tools: Hammersmith Infant Neurological Examination (HINE), Test of Infant Motor Performance, General Movements Assessment and Alberta Infant Motor Scale. The Ages and Stages Questionnaire (ASQ-2) was used at two years.
Results: At three to seven months corrected age, mean age-corrected HINE scores were 61.8 (95% confidence interval (CI): 60.5 to 63.1) in the preterm group compared with 63.3 (95% CI: 62.6 to 63.9) in the term AGA group. Preterm infants had 5.8-fold (95% CI: 2.4 to 15.4) higher odds of HINE scores below the 10th percentile. The other test scores did not differ between the groups. At two years, the preterm group had 17-fold (95% CI: 1.9 to 160) higher odds of gross motor scores below the cut-off on the ASQ-2 compared with the term AGA group.
Conclusions: The present study found subtle differences in neurological function between preterm and term AGA infants in infancy. At two years, preterm children had poorer gross motor function. The findings indicate that moderate prematurity in otherwise healthy infants pose a risk for neurological deficits not only during the first year, but also at two years of age when compared with term AGA children.
abstract_id: PUBMED:24690584
Preterm children have unfavorable motor, cognitive, and functional performance when compared to term children of preschool age. Objective: to compare the motor coordination, cognitive, and functional development of preterm and term children at the age of 4 years.
Methods: this was a cross-sectional study of 124 four-year-old children, distributed in two different groups, according to gestational age and birth weight, paired by gender, age, and socioeconomic level. All children were evaluated by the Movement Assessment Battery for Children - second edition (MABC-2), the Pediatric Evaluation of Disability Inventory (PEDI), and the Columbia Mental Maturity Scale (CMMS).
Results: preterm children had worse performance in all tests, and 29.1% of the preterm and 6.5% of term groups had scores on the MABC-2 indicative of motor coordination disorder (p=0.002). In the CMMS (p=0.034), the median of the standardized score for the preterm group was 99.0 (± 13.75) and 103.0 (± 12.25) for the term group; on the PEDI, preterm children showed more limited skill repertoire (p=0.001) and required more assistance from the caregiver (p=0.010) than term children.
Conclusion: this study reinforced the evidence that preterm children from different socioeconomic backgrounds are more likely to have motor, cognitive, and functional development impairment, detectable before school age, than their term peers.
abstract_id: PUBMED:19934425
Motor development in very preterm and very low-birth-weight children from birth to adolescence: a meta-analysis. Context: Infants who are very preterm (born < or = 32 weeks of gestation) and very low birth weight (VLBW) (weighing < or = 1500 g) are at risk for poor developmental outcomes. There is increasing evidence that very preterm birth and VLBW have a considerable effect on motor development, although findings are inconsistent.
Objective: To investigate the relationship between very preterm birth and VLBW and motor development.
Data Sources: The computerized databases EMBASE, PubMed, and Web of Knowledge were used to search for English-language peer-reviewed articles published between January 1992 and August 2009.
Study Selection: Studies were included if they reported motor scores of very preterm and VLBW children without congenital anomalies using 1 of 3 established and widely used motor tests: the Bayley Scales of Infant Development II (BSID-II), the Movement Assessment Battery for Children (MABC), and the Bruininks-Oseretsky Test of Motor Proficiency (BOTMP). Forty-one articles were identified, encompassing 9653 children.
Results: In comparison with term-born peers, very preterm and VLBW children obtained significantly lower scores on all 3 motor tests: BSID-II: d = -0.88 (95% confidence interval [CI], -0.96 to -0.80; P < .001), MABC: d = -0.65 (95% CI, -0.70 to -0.60; P < .001), and BOTMP: d = -0.57 (95% CI, -0.68 to -0.46; P < .001). Whereas motor outcomes on the BSID-II show a catch-up effect in the first years of development (r = 0.50, P = .01), the results on the MABC demonstrate a nonsignificantly greater deficit with increasing age during elementary school and early adolescence (r = -0.59, P = .07).
Conclusion: Being born preterm or VLBW is associated with significant motor impairment persisting throughout childhood.
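The meta-analysis expresses its results as standardized mean differences (Cohen's d). As a reminder of how such a d is computed, here is a minimal sketch using the usual pooled-standard-deviation formula; the group means, SDs and sample sizes are made up, since the individual study data are not given in the abstract, and are chosen only so the result lands near the pooled BSID-II estimate of -0.88.

```python
import math

def cohens_d(mean_a, sd_a, n_a, mean_b, sd_b, n_b):
    """Standardized mean difference (Cohen's d) using the pooled standard deviation."""
    pooled_var = ((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2) / (n_a + n_b - 2)
    return (mean_a - mean_b) / math.sqrt(pooled_var)

# Hypothetical BSID-II motor scores: a preterm/VLBW group scoring below term-born controls.
d = cohens_d(mean_a=86.0, sd_a=16.0, n_a=120, mean_b=100.0, sd_b=15.0, n_b=120)
print(f"d = {d:.2f}")  # roughly -0.9, i.e. a deficit of the size reported for the BSID-II
```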
Answer: The pediatric assessment of motor development in very preterm and low-birthweight children may not be sensitive enough to detect functional motor problems. A study comparing pediatric judgment, a standardized neurological examination (Touwen examination), and a screening of motor development (Denver Developmental Screening Test; DDST) with the Movement ABC in 396 5-year-old very preterm and low-birthweight children found that the Movement ABC detected clinically important motor disorders in 20.5% and borderline disturbances in 22.5% of the children. However, the sensitivity of the pediatric judgment was only 0.19, Touwen examination 0.62, and DDST 0.52 when compared to the Movement ABC. The negative predictive values were 0.61, 0.74, and 0.69, respectively. These findings suggest that the Movement ABC should be added to the assessment of motor development in very preterm and low-birthweight children at 5 years of age (PUBMED:16982490).
Moreover, other studies have shown that preterm-born children are at risk of slower psychomotor development, which may be associated with low birth weight and other perinatal factors and morbidities. The Movement Assessment Battery for Children (MABC-2-C) outcomes were significantly predicted by the age of crawling, the use of steroids, mechanical ventilation, and intraventricular hemorrhage (IVH), indicating that the presence of some perinatal risk factors and some motor milestone attainments could be related to motor development in later childhood of preterm children (PUBMED:35860683).
Therefore, while pediatric assessments are commonly used, they may not always be appropriate or sufficient on their own to detect motor development issues in very preterm and low-birthweight children. The use of more sensitive tools like the Movement ABC is recommended to improve the detection of motor problems in this population. |
Instruction: Do youths gamble?
Abstracts:
abstract_id: PUBMED:25174282
Validity of standard gamble estimated quality of life in acute venous thrombosis. Introduction: The standard gamble is considered the 'gold standard' technique for measuring quality of life. We recently used the standard gamble to estimate quality of life in acute venous thrombosis, and found unexpected variability in the responses. The current study aimed to explore the reasons for variability by comparing the standard gamble technique in patients with acute venous thrombosis to other quality of life measurement tools.
Materials And Methods: Thrombosis clinic patients treated for venous thrombosis were eligible to participate. Patients evaluated their current health state by performing a standard gamble interview, reporting on a visual analogue scale, completing the SF-36 and disease specific questionnaires (PEmb-Qol and VEINES-QOL/Sym). Validity was assessed by correlating the standard gamble utilities with the other methods. Test-retest reliability, responsiveness and acceptability were also assessed.
Results: Forty-four patients were interviewed, with 16 attending for a repeat interview. The median standard gamble utility was 0.97 (0.84-1.0), SF-6D 0.64 (0.59 - 0.80) and visual analogue score 70 (60 - 80). Participants with pulmonary embolism had lower standard gamble estimates than those with deep vein thrombosis. There was good discriminant validity in that the standard gamble estimates were not associated with risk taking behavior, negative outlook, sex or education. Test-retest reliability with the standard gamble was moderate and there was evidence of a ceiling effect.
Conclusions: Standard gamble utilities are higher than other methods of measuring quality of life in venous thrombosis. The choice of utility values adopted in studies will impact on future economic studies.
abstract_id: PUBMED:21743690
Psychiatric morbidity in college students and illiterate youths. The profile of psychiatric morbidity in university students in a general hospital psychiatric clinic was studied and compared with age matched illiterate youths. Students represented 5.1% of the clinic population and illiterate youths represented 3.1%. The majority of ill students were males, unmarried and from a rural area. In both groups 75% of cases sought medical help on their own, but 42% of students solicited psychiatric help directly, in contrast to 11% of illiterate youths. Students reported relatively high role specific stress factors. In contrast to results of student surveys and university health centers, we found an equal representation of psychoses and non-psychoses, a lower representation of problems of under achievement and no representation of alcohol or drug abuse.
abstract_id: PUBMED:35592770
M-commerce adoption among youths in Malaysia: Dataset article. The COVID-19 pandemic, which took the world by storm, changed behaviour towards m-commerce as movement restrictions were enforced across the world. This dataset documents the factors that Malaysian youths (aged 15 to 24 years) consider in their intention to adopt m-commerce. Collected from October to November 2020, a total of 396 usable responses were finalized. The questionnaire consists of individual demographic variables and factors that influence the intention of youths to adopt m-commerce in Malaysia. The dataset of demographic and m-commerce-related variables can be used to further explore the correlations among, and descriptions of, the variables. The dataset is valuable for m-commerce service providers and future literature seeking to understand the behaviour of youths and hence increase the adoption rate of m-commerce among them.
abstract_id: PUBMED:22666725
Determinants of quality of life of youths in an English-speaking Caribbean nation. Background: Studies on quality of life (QoL) on youths are limited and have not examined determinants of QoL for this cohort.
Aims: The current study seeks to examine the QoL of Jamaican youths and to build a model that identifies factors that explain QoL.
Materials And Method: During the period June to August 2006, the Centre of Leadership and Governance, Department of Government, at the University of the West Indies, Mona Campus, conducted a stratified random probability survey of 1,338 respondents. Data were collected using a 166-item questionnaire. Of the sampled population (N=1,338), the proportion of those respondents age 18 to 25 years was 27% (N=364) and this constitutes the sample for the current study. The data were stored and retrieved in the Statistical Package for the Social Sciences (SPSS 12). Descriptive statistics were used to analyse the data, and logistic regression was used to establish the model.
Results: The quality of life of Jamaican youths was determined by 4 factors which explain 20% of the variability in quality of life. The parents' economic wellbeing has the most influence on the quality of life of Jamaican youths (OR=1.348; 95% CI: 1.35, 3.04), followed by moderate religiosity (OR=3.594; 95% CI: 1.47, 8.82), the extent of the welfarism of the state (OR=5.273; 95% CI: 1.04, 1.69) and gender (OR = 1.329, 95% CI = 1.04, 1.69).
Conclusion: The current work has offered us an understanding of the determinants of QoL of youths and how interventions can be planned for in the future.
abstract_id: PUBMED:37243064
COVID-19 Vaccination Hesitancy among Youths in Soweto, South Africa. In combatting coronavirus disease 2019 (COVID-19), immunization is the most prominent strategy. However, vaccine hesitancy, meaning delay in accepting or refusal of inoculation regardless of availability, has been identified as a major threat to global health. Attitudes and perceptions play a pivotal role in vaccine acceptability. Meanwhile, uptake in South Africa's rollout has been particularly disappointing among youths. For that reason, we explored attitudes and perceptions of COVID-19 in 380 youths in Soweto and Thembelihle, South Africa, between April and June 2022. A staggering hesitancy rate of 79.2 percent was recorded (301/380). We found negative attitudes and confounded perceptions of COVID-19 to be fueled by medical mistrust and misinformation, with online channels serving as the main sources of non-factual and counterfactual claims, stemming mostly from unregulated social media popular with youths. Understanding the underpinnings of vaccine hesitancy, and enhancing the means of curbing it, will be paramount in boosting uptake in South Africa's immunization program, particularly among youths.
abstract_id: PUBMED:32566702
Dataset to develop a self-report measure of emotional instability and behavioral difficulties for Malaysian youths. The article presents reliability statistics relating to the development of an emotional instability and behavioral difficulties scale for youths in a Malaysian context. The data were obtained from youth participants in Kuala Lumpur and the Klang Valley, Selangor, Malaysia. The dataset comprises four subscales describing emotional instability and behavioral difficulties. The data were analyzed using Cronbach's alpha, McDonald's ω, and Guttman's λ6 to examine internal consistency. The data showed that this new scale can be used to measure three subscales of emotional instability and one subscale of behavioral difficulties among youths in a Malaysian context.
abstract_id: PUBMED:29880468
Smartphone Apps for Mindfulness Interventions for Suicidality in Asian Youths: Literature Review. Background: The advent of mobile technology has ushered in an era in which smartphone apps can be used as interventions for suicidality.
Objective: We aimed to review recent research that is relevant to smartphone apps that can be used for mindfulness interventions for suicidality in Asian youths.
Methods: The inclusion criteria for this review were: papers published in peer-reviewed journals from 2007 to 2017 with usage of search terms (namely "smartphone application" and "mindfulness") and screened by an experienced Asian clinician to be of clinical utility for mindfulness interventions for suicidality with Asian youths.
Results: The initial search of databases yielded 375 results. Fourteen full text papers that fit the inclusion criteria were assessed for eligibility and 10 papers were included in the current review.
Conclusions: This review highlighted the paucity of evidence-based and empirically validated research into effective smartphone apps that can be used for mindfulness interventions for suicidality with Asian youths.
abstract_id: PUBMED:37519836
Engaging local youths in humanitarian response is not a matter of if but how. Despite being critical responders in humanitarian crises, local youths are continually left out of the humanitarian action agenda. This paper used a qualitative methodology to investigate local youths' role in humanitarian response and their impacts and assessed how humanitarian actors influence the effectiveness of youth engagement. The data were collected through semi-structured interviews with local youths who participated in the Ebola response in Sierra Leone. Findings showed that young people are significantly contributing to crisis response. However, they lack an enabling environment and support system to convert their skills into valuable humanitarian resources efficiently. Therefore, despite the rhetoric that many reports and policies reflect, this study establishes that the realities of youth engagement in humanitarian activities are often misunderstood and controlled for the self-interest of different actors other than youths themselves. It advocates for a renewed focus and support for young people's skills as paramount for effective humanitarian response and building back resilient communities after emergencies. Moreover, engaging local youths in tackling crises empowers them with transferable skills and stimulates their passion for participating in development issues within their communities.
abstract_id: PUBMED:37197008
Psychosocial Predictors of Suicidal Thoughts and Behaviors in Mexican-Origin Youths: An 8-Year Prospective Cohort Study. Suicide is the second leading cause of death for youths in the United States. More Latino adolescents report suicidal thoughts and/or behaviors (STBs) than youths of most other ethnic communities. Yet few studies have examined multiple psychosocial predictors of STBs in Latino youths using multiyear longitudinal designs. In this study, we evaluated the progression of STBs in 674 Mexican-origin youths (50% female) from fifth grade (10 years old) to 12th grade (17 years old) and identified psychosocial predictors of changes in STBs across this period. Latent growth curve models revealed that being female and later-generation status were associated with increasing prevalence in STBs across adolescence. Family conflict and peer conflict predicted increased STBs, whereas greater familism predicted less STBs. Thus, interpersonal relationships and cultural values contribute to the development of STBs in Mexican-origin youths and may be key levers for decreasing suicidality in this understudied but rapidly growing portion of the U.S. adolescent population.
abstract_id: PUBMED:34484002
Are Malaysian Youths Overdependent on the Internet?: A Narrative Review. Overdependence on the internet is a grave concern that has enveloped Malaysian youths and could lead to a variety of sequelae. This narrative review aims to determine the definition of internet overdependence and its associated factors, as well as the potential preventive and treatment strategies for internet overdependence. From the literature, internet overdependence is regarded as a 3-factor model encompassing salience, self-control failure, and serious consequences. Sociodemographic factors such as age, gender, and ethnicity, as well as psychosocial factors such as depression, anxiety, stress, and loneliness, were found to be associated with internet overdependence among Malaysian youths. A multimodal treatment approach is recommended, implementing various types of treatments and integrating disciplines such as pharmacology, psychotherapy, and family counseling. Despite the various terminologies in use and the unclear conceptualization of its nomenclature, overdependence on the internet is prevalent among youths in Malaysia. Future research should focus on establishing a clear definition of the terminology and attaining more robust evidence on treatment strategies.
Answer: The provided abstracts do not directly address the question of whether youths gamble. The abstracts focus on various aspects of youth behavior and quality of life, including the validity of standard gamble in estimating quality of life in acute venous thrombosis (PUBMED:25174282), psychiatric morbidity (PUBMED:21743690), m-commerce adoption (PUBMED:35592770), determinants of quality of life (PUBMED:22666725), COVID-19 vaccination hesitancy (PUBMED:37243064), emotional instability and behavioral difficulties (PUBMED:32566702), mindfulness interventions for suicidality (PUBMED:29880468), engagement in humanitarian response (PUBMED:37519836), predictors of suicidal thoughts and behaviors (PUBMED:37197008), and internet overdependence (PUBMED:34484002). None of these abstracts provide information on youth gambling behavior. Therefore, based on the provided abstracts, it is not possible to answer the question about youth gambling. |
Instruction: Is early analgesia associated with delayed treatment of appendicitis?
Abstracts:
abstract_id: PUBMED:18272097
Is early analgesia associated with delayed treatment of appendicitis? Purpose: We sought to investigate the relationship between delay in treatment of appendicitis and early use of analgesia.
Basic Procedures: We designed a matched case-control study, with patients having delayed treatment of appendicitis as the cases and patients with no delay in treatment of appendicitis as controls matched for age, sex, Alvarado score, and date of diagnosis. Of 957 patients with appendicitis, there were 103 delayed cases. Matching patients were identified yielding 103 controls.
Main Findings: In comparing cases and controls for early opiate use (26/103 cases, 24/103 controls), there was no association with delayed treatment (odds ratio, 1.11; P = .745; 95% confidence interval, 0.59-3.89). When comparing cases and controls for early NSAID use (29/103 cases, 17/103 controls), an association was found with delayed treatment (odds ratio, 1.98; P = .045; 95% confidence interval, 1.01-3.89).
Conclusion: For early analgesia in appendicitis, we did not find an association between opiate analgesia and delayed treatment, but there did appear to be an association with nonsteroidal anti-inflammatory analgesia.
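The odds ratios in the findings above can be reproduced from the exposure counts quoted there. The sketch below does this for the point estimates, treating the data as a plain unmatched 2x2 table, which is an assumption: the authors matched cases and controls and may have analysed the data differently.

```python
def odds_ratio(exposed_cases, total_cases, exposed_controls, total_controls):
    """Unadjusted odds ratio from case-control exposure counts."""
    a, b = exposed_cases, total_cases - exposed_cases             # cases: exposed / unexposed
    c, d = exposed_controls, total_controls - exposed_controls    # controls: exposed / unexposed
    return (a * d) / (b * c)

# Counts reported in the abstract: 103 delayed cases and 103 matched controls.
print(f"early opiates: OR = {odds_ratio(26, 103, 24, 103):.2f}")  # ~1.11
print(f"early NSAIDs:  OR = {odds_ratio(29, 103, 17, 103):.2f}")  # ~1.98
```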
abstract_id: PUBMED:23973418
Safety and impact on diagnostic accuracy of early analgesia in suspected acute appendicitis: a meta-analysis. Background: The safety of early analgesia in patients suspected to have acute appendicitis (AA) is still controversial.
Methods: Double blind randomized clinical trials comparing patients receiving or not receiving opiates for early analgesia in suspected AA were selected for meta-analysis according to PRISMA guidelines. Primary outcomes were the number of patients with AA confirmed by histology and the number of patients undergoing surgical intervention. Secondary outcomes were missed diagnoses, false positive AA and complication rate. Effect sizes were calculated using a Mantel-Haenszel fixed effects model.
Results: Previously published papers mostly analyzed surrogate end-points such as physician's confidence about the diagnosis or the alteration of clinical signs, subjective parameters dependent on personal perception. Our article focused on clinical outcome and specifically investigated those potentially related to AA instead of unspecified abdominal pain. Opiate administration did not have an impact on the number of histologically proven AA (OR = 1.196 [0.875-1.635]; P = 0.261). Differences in appendectomy rates were only slightly above the threshold for statistical significance (OR = 1.350 [0.966-1.887]; P = 0.079), suggesting that analgesia might influence the treatment approach. On the other hand missed diagnoses (OR = 0.509 [0.087-2.990]; P = 0.455) and false positive AA (OR = 1.071 [0.596-1.923]; P = 0.818) ascertained by histologic examination were unaffected, so diagnostic accuracy was retained. Safety was not compromised by opiates, as the difference in complication rates did not reach statistical significance (OR = 0.615 [0.217-1.748]; P = 0.372).
Conclusion: Early analgesia with opiates in suspected AA might influence the approach to treatment, but does not appear to alter diagnostic accuracy or surgical outcome. To support our findings, further trials on larger sample sizes from different age groups and both genders are needed.
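The meta-analysis pools study-level 2x2 tables with a Mantel-Haenszel fixed-effects model. The sketch below shows that pooling formula on two invented trials; the event counts and arm sizes are hypothetical, since the per-trial data are not reproduced in the abstract.

```python
def mantel_haenszel_or(tables):
    """Mantel-Haenszel pooled odds ratio over (a, b, c, d) 2x2 tables:
    a/b = events/non-events in the opiate arm, c/d = events/non-events in the control arm."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return num / den

# Hypothetical trials: histologically confirmed appendicitis with opiates vs. placebo.
trials = [(30, 20, 28, 22),   # trial 1: a, b, c, d
          (45, 35, 40, 40)]   # trial 2
print(f"pooled OR = {mantel_haenszel_or(trials):.2f}")
```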
abstract_id: PUBMED:29153247
Early versus delayed appendectomy: A comparison of outcomes. Background: The optimal timing for performing appendectomy in adults remains controversial.
Method: A one-year retrospective review of adult patients with acute appendicitis who underwent appendectomy. The cohort was divided by time-to-intervention into two groups: patients who underwent appendectomy within 8 h (group 1), and those who had surgery after 8 h (group 2). Outcome measures including perioperative morbidity and mortality, post-operative length of stay, and the 30-day readmission rate were compared between the two groups.
Results: A total of 116 patients who underwent appendectomy met the inclusion criteria: 75 patients (65%) in group 1, and 41 (35%) in group 2. There were no differences between group 1 & 2 in perioperative complications (6.7% vs. 9.8%, P = 0.483), postoperative length of stay (median [IQR]; 19.5 [11.5-40.5] vs. 20.0 [11.25-58.5] hours, P = 0.632), or 30-day readmission rate (2.7% vs. 4.9%, P = 0.543). There were no deaths in either group.
Conclusion: Delayed appendectomy performed more than 8 h was not associated with increased perioperative complications, postoperative length of stay, 30-day readmission rate, or mortality.
abstract_id: PUBMED:28574593
Early versus delayed appendicectomy for appendiceal phlegmon or abscess. Background: Appendiceal phlegmon and abscess account for 2% to 10% of acute appendicitis. People with appendiceal phlegmon or abscess usually need an appendicectomy to relieve their symptoms and avoid complications. The timing of appendicectomy for appendiceal phlegmon or abscess is controversial.
Objectives: To assess the effects of early versus delayed appendicectomy for appendiceal phlegmon or abscess, in terms of overall morbidity and mortality.
Search Methods: We searched the Cochrane Library (CENTRAL; 2016, Issue 7), MEDLINE Ovid (1950 to 23 August 2016), Embase Ovid (1974 to 23 August 2016), Science Citation Index Expanded (1900 to 23 August 2016), and the Chinese Biomedical Literature Database (CBM) (1978 to 23 August 2016). We also searched the World Health Organization (WHO) International Clinical Trials Registry Platform search portal (23 August 2016) and ClinicalTrials.gov (23 August 2016) for ongoing trials.
Selection Criteria: We included all individual and cluster-randomised controlled trials, irrespective of language, publication status, or age of participants, comparing early versus delayed appendicectomy in people with appendiceal phlegmon or abscess.
Data Collection And Analysis: Two review authors independently identified the trials for inclusion, collected the data, and assessed the risk of bias. We performed meta-analyses using Review Manager 5. We calculated the risk ratio (RR) for dichotomous outcomes and the mean difference (MD) for continuous outcomes with 95% confidence intervals (CI).
Main Results: We included two randomised controlled trials with a total of 80 participants in this review. 1. Early versus delayed open appendicectomy for appendiceal phlegmon: Forty participants (paediatric and adult) with appendiceal phlegmon were randomised either to early appendicectomy (appendicectomy as soon as the appendiceal mass resolved within the same admission) (n = 20), or to delayed appendicectomy (initial conservative treatment followed by interval appendicectomy six weeks later) (n = 20). The trial was at high risk of bias. There was no mortality in either group. There is insufficient evidence to determine the effect of using either early or delayed open appendicectomy on overall morbidity (RR 13.00; 95% CI 0.78 to 216.39; very low-quality evidence), the proportion of participants who developed wound infection (RR 9.00; 95% CI 0.52 to 156.91; very low-quality evidence) or faecal fistula (RR 3.00; 95% CI 0.13 to 69.52; very low-quality evidence). The evidence for increased length of hospital stay and time away from normal activities in the early appendicectomy group (MD 6.70 days; 95% CI 2.76 to 10.64, and MD 5.00 days; 95% CI 1.52 to 8.48, respectively) is of very low quality. The trial reported neither quality of life nor pain outcomes. 2. Early versus delayed laparoscopic appendicectomy for appendiceal abscess: Forty paediatric participants with appendiceal abscess were randomised either to early appendicectomy (emergent laparoscopic appendicectomy) (n = 20) or to delayed appendicectomy (initial conservative treatment followed by interval laparoscopic appendicectomy 10 weeks later) (n = 20). The trial was at high risk of bias. The trial did not report on overall morbidity or complications. There was no mortality in either group. We do not have sufficient evidence to determine the effects of using either early or delayed laparoscopic appendicectomy on length of hospital stay (MD -0.20 days; 95% CI -3.54 to 3.14; very low-quality evidence). Health-related quality of life was measured with the Pediatric Quality of Life Scale-Version 4.0 questionnaire (a scale of 0 to 100, with higher values indicating a better quality of life). The health-related quality of life score measured at 12 weeks after appendicectomy was higher in the early appendicectomy group than in the delayed appendicectomy group (MD 12.40 points; 95% CI 9.78 to 15.02), but the quality of evidence was very low. This trial reported neither pain nor time away from normal activities.
Authors' Conclusions: It is unclear whether early appendicectomy prevents complications compared to delayed appendicectomy for people with appendiceal phlegmon or abscess. The evidence indicating increased length of hospital stay and time away from normal activities in people with early open appendicectomy is of very low quality. The evidence for better health-related quality of life following early laparoscopic appendicectomy compared with delayed appendicectomy is based on very low quality evidence. For both comparisons addressed in this review, data are sparse, and we cannot rule out significant benefits or harms of early versus delayed appendicectomy. Further trials on this topic are urgently needed and should specify a set of criteria for use of antibiotics, percutaneous drainage of the appendiceal abscess prior to surgery and resolution of the appendiceal phlegmon or abscess. Future trials should include outcomes such as time away from normal activities, quality of life and the length of hospital stay.
abstract_id: PUBMED:36652227
Patient and Hospital Characteristics Associated With Delayed Diagnosis of Appendicitis. Importance: Racial disparities in timely diagnosis and treatment of surgical conditions exist; however, it is poorly understood whether there are hospital structural measures or patient-level characteristics that modify this phenomenon.
Objective: To assess whether patient race and ethnicity are associated with delayed appendicitis diagnosis and postoperative 30-day hospital use and whether there are patient- or systems-level factors that modify this association.
Design, Setting, And Participants: This population-based, retrospective cohort study used data from the Healthcare Cost and Utilization Project's state inpatient and emergency department (ED) databases from 4 states (Florida, Maryland, New York, and Wisconsin) for patients aged 18 to 64 years who underwent appendectomy from January 7, 2016, to December 1, 2017. Data were analyzed from January 1, 2016, to December 31, 2017.
Exposure: Delayed diagnosis of appendicitis, defined as an initial ED presentation with an abdominal diagnosis other than appendicitis followed by re-presentation within a week for appendectomy.
Main Outcomes And Measures: A mixed-effects multivariable Poisson regression model was used to estimate the association of delayed diagnosis of appendicitis with race and ethnicity while controlling for patient and hospital variables. A second mixed-effects multivariable Poisson regression model quantified the association of delayed diagnosis of appendicitis with postoperative 30-day hospital use.
Results: Of 80 312 patients who received an appendectomy during the study period (median age, 38 years [IQR, 27-50 years]; 50.8% female), 2013 (2.5%) experienced delayed diagnosis. In the entire cohort, 2.9% of patients were Asian or Pacific Islander, 18.8% were Hispanic, 10.9% were non-Hispanic Black, 60.8% were non-Hispanic White, and 6.6% were other race and ethnicity; most were privately insured (60.2%). Non-Hispanic Black patients had a 1.41 (95% CI, 1.21-1.63) times higher adjusted rate of delayed diagnosis compared with non-Hispanic White patients. Patients at hospitals with a more than 50% Black or Hispanic population had a 0.73 (95% CI, 0.59-0.91) decreased adjusted rate of delayed appendicitis diagnosis compared with hospitals with a less than 25% Black or Hispanic population. Conversely, patients at hospitals with more than 50% of discharges of Medicaid patients had a 3.51 (95% CI, 1.69-7.28) higher adjusted rate of delayed diagnosis compared with hospitals with less than 10% of discharges of Medicaid patients. Additional factors associated with delayed diagnosis included female sex, higher levels of patient comorbidity, and living in a low-income zip code. Delayed diagnosis was associated with a 1.38 (95% CI, 1.36-1.61) increased adjusted rate of postoperative 30-day hospital use.
Conclusions And Relevance: In this cohort study, non-Hispanic Black patients had higher rates of delayed appendicitis diagnosis and 30-day hospital use than White patients. Patients presenting to hospitals with a greater than 50% Black and Hispanic population were less likely to experience delayed diagnosis, suggesting that seeking care at a hospital that serves a diverse patient population may help mitigate the increased rate of delayed diagnosis observed for non-Hispanic Black patients.
abstract_id: PUBMED:36482753
Clinician factors associated with delayed diagnosis of appendicitis. Objectives: To evaluate the association of clinician demographics and practice patterns with delayed diagnosis of appendicitis.
Methods: We included children with appendicitis at 13 regional emergency departments (EDs). We screened patients with a previous ED visit within 7 days for delayed diagnosis by chart review. We evaluated the association of clinician characteristics using logistic regression with random intercepts for site and clinician and delay as the outcome.
Results: Among 7,452 children with appendicitis, 105 (1.4%) had delayed diagnosis. Clinicians in the lowest quartile of obtaining blood in their general practice were more likely to have delayed diagnosis (odds ratio 4.9 compared to highest quartile, 95% confidence interval 1.8, 13.8). Clinicians' imaging rates, specialty, sex, and experience were not associated with delayed diagnosis.
Conclusions: Clinicians who used more blood tests in their general practice had a lower risk of delayed diagnosis of appendicitis, possible evidence that lower risk tolerance has benefits.
abstract_id: PUBMED:27721841
Risk factors of delayed diagnosis of acute appendicitis in children: for early detection of acute appendicitis. Purpose: This study examined the risk factors of a delayed diagnosis of acute appendicitis in children undergoing an appendectomy.
Methods: This retrospective study involved children aged below 18 years, who underwent an appendectomy. After dividing them into a delayed diagnosis group and nondelayed diagnosis group according to the time interval between the initial hospital visit and final diagnosis, the risk factors of delayed diagnosis were identified using logistic regression analysis.
Results: Among 712 patients, 105 patients (14.7%) were classified in the delayed diagnosis group; 92 patients (12.9%) were diagnosed using ultrasonography (US), and both US and computed tomography were performed in 38 patients (5.3%). More patients in the delayed diagnosis group underwent US (P=0.03). Spring season and prior local clinic visit were significantly associated with a delayed diagnosis. Fever and diarrhea were more common in the delayed diagnosis group (fever: odds ratio [OR], 1.37; 95% confidence interval [CI], 1.05-1.81; diarrhea: OR, 1.94; 95% CI, 1.08-3.46; P<0.05). These patients showed symptoms for a longer duration (OR, 2.59; 95% CI, 1.78-3.78; P<0.05), and the admission course (OR, 1.26; 95% CI, 1.11-1.44; P<0.05) and C-reactive protein (CRP) levels (OR, 1.47; 95% CI, 1.19-1.82; P<0.05) were associated with the delayed diagnosis.
Conclusion: To decrease the rate of delayed diagnoses of acute appendicitis, symptoms such as fever and diarrhea, seasonal variations, admission course, and CRP levels should be considered and children with a longer duration of symptoms should be closely monitored.
abstract_id: PUBMED:17872606
Does analgesia mask diagnosis of appendicitis among children? Question: Can analgesia be given safely to patients with suspected appendicitis prior to surgical evaluation without masking physical signs and symptoms?
Answer: Withholding analgesia from patients with acute abdominal pain and suspected appendicitis is common. This practice, however, is not supported by published literature. Although a few trials have noted some changes in abdominal examination with analgesia, this has not been associated with any changes in patient outcome. If patients are in pain, analgesia is warranted. Larger multicentre trials are needed to establish practice guidelines.
abstract_id: PUBMED:26804807
Reported provision of analgesia to patients with acute abdominal pain in Canadian paediatric emergency departments. Objectives: Evidence exists that analgesics are underutilized, delayed, and insufficiently dosed for emergency department (ED) patients with acute abdominal pain. For physicians practicing in a Canadian paediatric ED setting, we (1) explored theoretical practice variation in the provision of analgesia to children with acute abdominal pain; (2) identified reasons for withholding analgesia; and (3) evaluated the relationship between providing analgesia and surgical consultation.
Methods: Physician members of Paediatric Emergency Research Canada (PERC) were prospectively surveyed and presented with three scenarios of undifferentiated acute abdominal pain to assess management. A modified Dillman's Tailored Design method was used to distribute the survey from June to July 2014.
Results: Overall response rate was 74.5% (149/200); 51.7% of respondents were female and mean age was 44 (SD 8.4) years. The reported rates of providing analgesia for case scenarios representative of renal colic, appendicitis, and intussusception, were 100%, 92.1%, and 83.4%, respectively, while rates of providing intravenous opioids were 85.2%, 58.6%, and 12.4%, respectively. In all 60 responses where the respondent indicated they would obtain a surgical consultation, analgesia would be provided. In the 35 responses where analgesia would be withheld, 21 (60%) believed pain was not severe enough, while 5 (14.3%) indicated it would obscure a surgical condition.
Conclusions: Pediatric emergency physicians self-reported rates of providing analgesia for acute abdominal pain scenarios were higher than previously reported, and appeared unrelated to request for surgical consultation. However, an unwillingness to provide opioid analgesia, belief that analgesia can obscure a surgical condition, and failure to take self-reported pain at face value remain, suggesting that the need exists for further knowledge translation efforts.
abstract_id: PUBMED:31657320
Should All Complicated Appendicitis Be Treated the Same? The Answer Is No. Delayed presentation of acute appendicitis is associated with increased complications. We hypothesized that the outcomes of appendectomy in delayed presentations of acute appendicitis (>72 hours of pain) were dependent on radiologic findings rather than late presentation. We reviewed records from 2009 to 2015 and analyzed delayed presentations of acute appendicitis. We divided patients into three groups based on specific CT findings: uncomplicated appendicitis (UA), phlegmon or abscess (PA), and other perforated appendicitis (PERF, signs of perforation without abscess or phlegmon). One hundred thirty-eight patients were included in this study (58 in the UA, 67 in the PA, and 13 in the PERF groups). Overall, 78 (57%) patients underwent early appendectomy (EA) and 60 (43%) underwent initial conservative management. The incidence of adverse events was lower in EA than that in initial conservative management (17% vs 42%, P = 0.005). EA in the UA group was associated with shorter hospitalization (3.2 vs 5.6 days, P < 0.001) and less adverse events (6% vs 29%, P < 0.05). Severe adverse events (two colectomies and one fecal fistula) were observed in the PA group. In conclusion, in these late presentations of appendicitis, complicated appendicitis was common. EA was safe in selected patients, however, and associated with decreased adverse events.
Answer: Early analgesia is not consistently associated with delayed treatment of appendicitis. A matched case-control study found no association between early opiate use and delayed treatment of appendicitis, but there was an association between early nonsteroidal anti-inflammatory drug (NSAID) use and delayed treatment (PUBMED:18272097). However, a meta-analysis indicated that opiate administration did not impact the number of histologically proven acute appendicitis cases or diagnostic accuracy, suggesting that early analgesia with opiates might influence the treatment approach but does not appear to alter diagnostic accuracy or surgical outcome (PUBMED:23973418).
Moreover, studies on the timing of appendectomy show that delayed appendectomy performed more than 8 hours after diagnosis was not associated with increased perioperative complications, postoperative length of stay, 30-day readmission rate, or mortality (PUBMED:29153247). Similarly, the timing of appendicectomy for appendiceal phlegmon or abscess is controversial, and it is unclear whether early appendicectomy prevents complications compared to delayed appendicectomy (PUBMED:28574593).
In terms of patient and hospital characteristics, non-Hispanic Black patients had higher rates of delayed appendicitis diagnosis and 30-day hospital use than White patients, and seeking care at a hospital that serves a diverse patient population may help mitigate the increased rate of delayed diagnosis observed for non-Hispanic Black patients (PUBMED:36652227). Clinician factors such as lower use of blood tests in general practice were associated with a higher risk of delayed diagnosis of appendicitis (PUBMED:36482753). Risk factors for delayed diagnosis in children include symptoms such as fever and diarrhea, seasonal variations, admission course, and C-reactive protein levels (PUBMED:27721841).
Overall, while certain types of analgesia may be associated with delayed treatment, the evidence suggests that early analgesia, particularly with opiates, does not necessarily lead to delayed treatment or diagnosis of appendicitis. Factors such as clinician practice patterns, patient characteristics, and hospital demographics may also play a role in the timing of diagnosis and treatment. |
Instruction: Pain treatment facilities: do we need quantity or quality?
Abstracts:
abstract_id: PUBMED:24828413
Pain treatment facilities: do we need quantity or quality? Rationale, Aims And Objectives: Chronic pain patients referred to a pain treatment facility have no guarantee that they will receive a proper diagnostic procedure or treatment. To obtain information about organizational aspects of pain treatment facilities and the content of their daily pain practice, we performed a questionnaire survey. The aim of the study was to evaluate the number of pain treatment facilities, the content of organized specialized pain care and adherence to the criteria of the internationally accepted guidelines for pain treatment services.
Method: The University Pain Centre Maastricht in the Department of Anaesthesiology and Pain Management at Maastricht University Medical Centre developed a questionnaire survey based on the Recommendations for Pain Treatment Services of the International Association for the Study of Pain (IASP). The questionnaire was sent to the medical boards of all hospitals in the Netherlands (n=94).
Results: The response rate was 86% (n=81). Of all hospitals, 88.9% (n=72) reported the provision of organized specialized pain care, which was provided by a pain management team in 86.1% (n=62) and by an individual specialist in 13.9% (n=10). Insight was obtained from pain treatment facilities in five different domains: the organizational structure of pain management, composition of the pain team, pain team practice, patient characteristics, and research and education facilities.
Conclusion: Although 88.9% of all hospitals stated that organized specialized pain care was provided, only a few hospitals could adhere to the criteria for pain treatment services of the IASP. The outcome of the questionnaire survey may help to define quality improvement standards for pain treatment facilities.
abstract_id: PUBMED:34523041
The relation between sleep quality, sleep quantity, and gastrointestinal problems among colorectal cancer survivors: result from the PROFILES registry. Purpose: Common residual symptoms among survivors of colorectal cancer (CRC) are sleep difficulties and gastrointestinal symptoms. Among patients with various gastrointestinal (inflammatory) diseases, sleep quality has been related to gastrointestinal symptoms. For CRC survivors, this relation is unclear; therefore, we examined the association between sleep quality and quantity with gastrointestinal symptoms among CRC survivors.
Methods: CRC survivors registered in the Netherlands Cancer Registry-Southern Region diagnosed between 2000 and 2009 received a survey on sleep quality and quantity (Pittsburgh Sleep Quality Index) and gastrointestinal symptoms (European Organisation for Research and Treatment of Cancer, Quality of Life Questionnaire-Colorectal 38, EORTC QLQ-CR38) in 2014 (≥ 4 years after diagnosis). Secondary cross-sectional data analyses related sleep quality and quantity separately with gastrointestinal symptoms by means of logistic regression analyses.
Results: In total, 1233 CRC survivors were included, of which 15% reported poor sleep quality. The least often reported gastrointestinal symptom was pain in the buttocks (15.1%) and most often reported was bloating (29.2%). CRC survivors with poor sleep quality were more likely to report gastrointestinal symptoms (p's < 0.01). Survivors who slept < 6 h were more likely to report symptoms of bloating or flatulence, whereas survivors who slept 6-7 h reported more problems with indigestion.
Conclusions: Worse sleep quality and short sleep duration were associated with higher occurrence of gastrointestinal symptoms.
Implications For Cancer Survivors: Understanding the interplay between sleep quality and gastrointestinal symptoms and underlying mechanisms adds to better aftercare and perhaps reduction of residual gastrointestinal symptoms in CRC survivors by improving sleep quality.
abstract_id: PUBMED:19936804
Is insufficient quantity and quality of sleep a risk factor for neck, shoulder and low back pain? A longitudinal study among adolescents. The quantity and quality of adolescents' sleep may have changed due to new technologies. At the same time, the prevalence of neck, shoulder and low back pain has increased. However, only a few studies have investigated insufficient quantity and quality of sleep as possible risk factors for musculoskeletal pain among adolescents. The aim of the study was to assess whether insufficient quantity and quality of sleep are risk factors for neck (NP), shoulder (SP) and low back pain (LBP). A 2-year follow-up survey among adolescents aged 15-19 years was (2001-2003) carried out in a subcohort of the Northern Finland Birth Cohort 1986 (n = 1,773). The outcome measures were 6-month period prevalences of NP, SP and LBP. The quantity and quality of sleep were categorized into sufficient, intermediate or insufficient, based on average hours spent sleeping, and whether or not the subject suffered from nightmares, tiredness and sleeping problems. The odds ratios (OR) and 95% confidence intervals (CI) for having musculoskeletal pain were obtained through logistic regression analysis, adjusted for previously suggested risk factors and finally adjusted for specific pain status at 16 years. The 6-month period prevalences of neck, shoulder and low back pain were higher at the age of 18 than at 16 years. Insufficient quantity or quality of sleep at 16 years predicted NP in both girls (OR 4.4; CI 2.2-9.0) and boys (2.2; 1.2-4.1). Similarly, insufficient sleep at 16 years predicted LBP in both girls (2.9; 1.7-5.2) and boys (2.4; 1.3-4.5), but SP only in girls (2.3; 1.2-4.4). After adjustment for pain status, insufficient sleep at 16 years predicted significantly only NP (3.2; 1.5-6.7) and LBP (2.4; 1.3-4.3) in girls. Insufficient sleep quantity or quality was an independent risk factor for NP and LBP among girls. Future studies should test whether interventions aimed at improving sleep characteristics are effective in the prevention and treatment of musculoskeletal pain.
abstract_id: PUBMED:36777425
Living With Inflammatory Bowel Disease: Online Surveys Evaluating Patient Perspectives on Treatment Satisfaction and Health-Related Quality of Life. Background: The quality of life of persons living with inflammatory bowel disease (IBD) is impacted by the physical and psychosocial burdens of disease, as well as by their satisfaction with the quality of care they receive. We sought to better understand (1) the drivers of satisfaction with treatment, including treatment goals, treatment selection, and attributes of patient/health care professional (HCP) interactions, and (2) how IBD symptoms affect aspects of daily life and overall quality of life.
Methods: Two online questionnaires were accessed via MyCrohnsAndColitsTeam.com. The Treatment Survey assessed desired treatment outcomes, past and present therapies, and experiences with the patient's primary treating HCP. The Quality of Life survey assessed respondents' most problematic IBD symptoms and their influence on family and social life, work, and education. Respondents had Crohn's disease (CD) or ulcerative colitis (UC), were 19 years or older, and resided in the United States. All responses were anonymous.
Results: The Treatment Experience survey was completed by 502 people (296 CD, 206 UC), and the Quality of Life survey was completed by 302 people (177 CD, 125 UC). Reduced pain, diarrhea, disease progression, and fatigue were the most desired goals of treatment. Biologics and 5-aminosalicylates were reported as a current or past treatment by the greatest proportion of patients with CD and UC, respectively. A numerically lower proportion of respondents with UC than CD reported use of biologic or small molecule therapy; conversely, a numerically greater proportion of respondents with UC than CD reported these drugs to be very or extremely effective. The HCP was key in the decision to switch to, and in the selection of, biologic or small molecule therapy. Overall satisfaction with an HCP was greatly driven by the quality and quantity of the communication and of the time spent with the HCP. Troublesome abdominal symptoms most impacted aspects of social and family life. Emotional challenges associated with IBD were experienced by most respondents.
Conclusions: Treatment goals of respondents seem to align with HCPs overall treatment goals, including control of gastrointestinal symptoms and prevention of disease progression. Persons with UC might be offered biologic and small molecule therapies less often, despite reported high efficacy by users. Feeling heard and understood by the HCP are key drivers of treatment satisfaction. Quality communication in the patient/HCP relationship enables a better understanding of the patients' goals, disease burden, and emotional needs, which are all key factors to consider when developing a personalized and comprehensive treatment plan and optimizing quality of life.
abstract_id: PUBMED:34939973
Are Changes in Sleep Quality/Quantity or Baseline Sleep Parameters Related to Changes in Clinical Outcomes in Patients With Nonspecific Chronic Low Back Pain?: A Systematic Review. Objectives: Sleep disturbance is prevalent among patients with chronic low back pain (CLBP). This systematic review aimed to summarize the evidence regarding the: (1) temporal relations between changes in sleep quality/quantity and the corresponding changes in pain and/or disability; and (2) role of baseline sleep quality/quantity in predicting future pain and/or disability in patients with CLBP.
Methods: Four databases were searched from their inception to February 2021. Two reviewers independently screened the abstract and full text, extracted data, assessed the methodological quality of the included studies, and evaluated the quality of evidence of the findings using the Grading of Recommendations Assessment Development and Evaluation (GRADE).
Results: Of 1995 identified references, 6 articles involving 1641 participants with CLBP were included. Moderate-quality evidence substantiated that improvements in self-reported sleep quality and total sleep time were significantly correlated with the corresponding LBP reduction. Low-quality evidence showed that self-reported improvements in sleep quality were related to the corresponding improvements in CLBP-related disability. There was conflicting evidence regarding the relation between baseline sleep quality/quantity and future pain/disability in patients with CLBP.
Discussion: This is the first systematic review to accentuate that improved self-reported sleep quality/quantity may be associated with improved pain/disability, although it remains unclear whether baseline sleep quality/quantity is a prognostic factor for CLBP. These findings highlight the importance of understanding the mechanisms underlying the relation between sleep and CLBP, which may inform the necessity of assessing or treating sleep disturbance in people with CLBP.
abstract_id: PUBMED:19109795
Epidemiology of lung cancer prognosis: quantity and quality of life. Primary lung cancer is very heterogeneous in its clinical presentation, histopathology, and treatment response; and like other diseases, the prognosis consists of two essential facets: survival and quality of life (QOL). Lung cancer survival is mostly determined by disease stage and treatment modality, and the 5-year survival rate has been in a plateau of 15% for three decades. QOL is focused on life aspects that are affected by health conditions and medical interventions; the balance of physical functioning and suffering from treatment side effects has long been a concern of care providers as well as patients. Obviously needed are easily measurable biologic markers to stratify patients before treatment for optimal results in survival and QOL and to monitor treatment responses and toxicities. Targeted therapies toward the mechanisms of tumor development, growth, and metastasis are promising and actively translated into clinical practice. Long-term lung cancer (LTLC) survivors are people who are alive 5 years after the diagnosis. Knowledge about the health and QOL in LTLC survivors is limited because outcome research in lung cancer has been focused mainly on short-term survival. The independent or combined effects of lung cancer treatment, aging, smoking and drinking, comorbid conditions, and psychosocial factors likely cause late effects, including organ malfunction, chronic fatigue, pain, or premature death among lung cancer survivors. New knowledge to be gained should help lung cancer survivors, their healthcare providers, and their caregivers by providing evidence for establishing clinical recommendations to enhance their long-term survival and health-related QOL.
abstract_id: PUBMED:30635699
Quality indicators in the treatment of hemorrhoids Background: A quality indicator is a quantitative measure that can be used to monitor and evaluate the quality of certain operative procedures that may influence the result of a therapy. An indicator is not a direct measure of quality, it is merely a tool to evaluate the performance of procedures and can indicate potential problem areas.
Material And Methods: A literature search was performed for parameters which could be included as indicators of quality in the treatment of hemorrhoids.
Results And Conclusion: In the treatment of benign diseases, such as hemorrhoids objective indicators (e.g. recurrence or survival rates in oncological diseases) cannot be used as quality indicators. Other indicators or core outcome factors must be used. From the patient's point of view other indicators are important (such as pain, complications, continence, days off work, etc.) than those for the colorectal surgeon, health insurance and healthcare provider. The most important indicators or outcome factors for treatment of hemorrhoids are postprocedural pain, patient satisfaction, complications, residual and recurrent symptoms, pain, quality of life, costs and duration of inability to work. In terms of outcome quality various quality indicators could be identified which also play a role in the guidelines; however, in this respect valid questionnaires or scores that enable a uniform assessment exist only in a few cases. In contrast, some indicators (e. g. costs, length of hospital stay) are strongly influenced by factors such as the healthcare system making these indicators unfeasible.
abstract_id: PUBMED:35587918
Movement quantity and quality: How do they relate to pain and disability in dancers? Objective: This field-based study aimed to determine the association between pre-professional student dancers' movement quantity and quality with (i) pain severity and (ii) pain related disability.
Methods: Pre-professional female ballet and contemporary dance students (n = 52) participated in 4 time points of data collection over a 12-week university semester. At each time point dancers provided self-reported pain outcomes (Numerical Rating Scale as a measure of pain severity and Patient Specific Functional Scale as a measure of pain related disability) and wore a wearable sensor system. This system combined wearable sensors with previously developed machine learning models capable of capturing movement quantity and quality outcomes. A series of linear mixed models were applied to determine if there was an association between dancers' movement quantity and quality over the 4 time points with pain severity and pain related disability.
Results: Almost all dancers (n = 50) experienced pain, and half of the dancers experienced disabling pain (n = 26). Significant associations were evident for pain related disability and movement quantity and quality variables. Specifically, greater pain related disability was associated with more light activity, fewer leg lifts to the front, a shorter average duration of leg lifts to the front and fewer total leg lifts. Greater pain related disability was also associated with higher thigh elevation angles to the side. There was no evidence for associations between movement quantity and quality variables and pain severity.
Discussion: Despite a high prevalence of musculoskeletal pain, dancers' levels of pain severity and disability were generally low. Between-person level associations were identified between dancers' movement quantity and quality, and pain related disability. These findings may reflect dancers' adaptations to pain related disability, while they continue to dance. This proof-of-concept research provides a compelling model for future work exploring dancers' pain using field-based, serial data collection.
abstract_id: PUBMED:33444892
Clinical practice guidelines for the treatment and management of low back pain: A systematic review of quantity and quality. Background: Low back pain (LBP) is highly prevalent in the general population and is responsible for increased health-care costs, pain, impairment of activity, and if chronic, is associated with a range of comorbidities.
Objectives: The purpose of this review was to identify the quantity and assess the quality of evidence-based clinical practice guidelines (CPGs) for the treatment and/or management of LBP in adults.
Methods: MEDLINE, EMBASE, CINAHL, and the Guidelines International Network were systematically searched from 2008 to 2018 to identify LBP CPGs. Eligible CPGs were assessed in duplicate using the Appraisal of Guidelines, Research and Evaluation II (AGREE II) instrument across 6 domains: scope and purpose, stakeholder involvement, rigour of development, clarity of presentation, applicability, and editorial independence.
Results: Of 181 unique search results, 22 CPGs for the treatment and/or management of LBP were eligible. Scaled domain percentages from highest to lowest were: scope and purpose (90.0%), clarity of presentation (84.0%), stakeholder involvement (54.0%), rigour of development (51.2%), editorial independence (39.6%) and applicability (28.5%). Quality varied within and across CPGs.
Conclusions: CPGs varied in quality, with most scoring the highest in the scope and purpose and clarity of presentation domains. CPGs achieved variable and lower scores in the stakeholder involvement, rigour of development, applicability, and editorial independence domains. CPGs with higher AGREE II scores can serve as suitable evidence-based resources for clinicians involved in LBP care; CPGs with lower scores could be improved in future updates using the AGREE II instrument, among other guideline development resources, as a guide.
abstract_id: PUBMED:8953606
Quality of life outcomes: measurement and validation. New treatments for cancer often are evaluated solely on the basis of increased survival, and thus, lack valuable information about other benefits and drawbacks of these treatments. It is important to raise the issue of the quality of life as a companion to the issue of quantity of life. The trade-off is not always between toxicity and survival time; sometimes a treatment, however toxic, affords benefit not by virtue of increasing survival, but rather, by palliating tumor-induced pain or obstruction. Proper selection of measures and supplementary questions is an important first step toward a successful evaluation of QOL. Included in this article is a table that reviews many available QOL measures that have been designed for, or are frequently used with, people with cancer. One cancer-specific questionnaire, the Functional Assessment of Cancer Therapy (FACT), also is reproduced.
Answer: The question of whether pain treatment facilities require quantity or quality is addressed in the context of chronic pain management. According to the study conducted by the University Pain Centre Maastricht, while a high percentage (88.9%) of hospitals reported providing organized specialized pain care, only a few could adhere to the criteria for pain treatment services set by the International Association for the Study of Pain (IASP). This suggests that while there may be a quantity of facilities available, the quality of care, as defined by adherence to international guidelines, is lacking. The study's conclusion emphasizes the need to define quality improvement standards for pain treatment facilities, indicating that quality is a critical aspect that needs attention (PUBMED:24828413).
In the broader context of healthcare, quality indicators are used to monitor and evaluate the quality of certain operative procedures that may influence therapy results. For benign diseases like hemorrhoids, quality indicators such as postprocedural pain, patient satisfaction, complications, residual and recurrent symptoms, quality of life, costs, and duration of inability to work are considered important. This highlights the importance of quality over mere quantity in the treatment of conditions that impact patients' lives (PUBMED:30635699).
Moreover, the quality of life is a significant concern in the treatment of various conditions, including cancer and chronic pain. Studies on lung cancer prognosis, for instance, emphasize the balance between physical functioning and suffering from treatment side effects, suggesting that quality of life should be a companion consideration to quantity of life (PUBMED:19109795). Similarly, in the context of chronic low back pain, the quality of sleep has been shown to correlate with pain and disability outcomes, further underscoring the importance of quality in patient care (PUBMED:34939973).
In summary, while the availability of pain treatment facilities (quantity) is important, the studies suggest that the quality of care provided in these facilities, as well as the overall quality of life for patients, is of paramount importance. Quality improvement standards, adherence to guidelines, and the consideration of quality indicators are essential for ensuring effective pain management and treatment outcomes. |
Instruction: Is it pseudo-dementia?
Abstracts:
abstract_id: PUBMED:30991888
Regression analysis in an illness-death model with interval-censored data: A pseudo-value approach. Pseudo-values provide a method to perform regression analysis for complex quantities with right-censored data. A further complication, interval-censored data, appears when events such as dementia are studied in an epidemiological cohort. We propose an extension of the pseudo-value approach for interval-censored data based on a semi-parametric estimator computed using penalised likelihood and splines. This estimator takes interval-censoring and competing risks into account in an illness-death model. We apply the pseudo-value approach to three mean value parameters of interest in studies of dementia: the probability of staying alive and non-demented, the restricted mean survival time without dementia and the absolute risk of dementia. Simulation studies are conducted to examine properties of pseudo-values based on this semi-parametric estimator. The method is applied to the French cohort PAQUID, which included more than 3,000 non-demented subjects, followed for dementia for more than 25 years.
abstract_id: PUBMED:15037849
Pseudo-dementia as presentation of a dural arteriovenous fistula We describe a case of a 70-year-old man who presented with subacute pseudo-dementia due to a dural fistula. Neurological assessment and the reversibility of the symptoms after embolization support the originality of this observation.
abstract_id: PUBMED:26493622
Percutaneous endoscopic cecostomy (introducer method) in chronic intestinal pseudo-obstruction: Report of two cases and literature review. We report on two patients with recurrent episodes of chronic intestinal pseudo-obstruction (CIPO). A 50-year-old woman with severe multiple sclerosis and an 84-year-old man with Parkinson's disease and dementia had multiple hospital admissions because of pain and distended abdomen. Radiographic and endoscopic findings showed massive dilation of the colon without any evidence of obstruction. Conservative management resolved symptoms only for a short period of time. As these patients were poor candidates for any surgical treatment we carried out percutaneous endoscopic colostomy by placing a 20-Fr tube in the cecum with the introducer method. The procedure led to durable symptom relief without complications. We present these two cases and give a review through the existing literature of the procedure in CIPO.
abstract_id: PUBMED:24683825
Clinical study of the month: depressive pseudo-dementia We report the case of a man aged 62 suffering from a known type I bipolar disorder and referred by his attending psychiatrist because of a state of spatiotemporal disorientation, confusion and prostration evoking significant neurologic impairment. The interest of this case report is in the use of the 18-FDG PET-Scanner, which is increasingly widespread in clinical psychiatry, to support the differential diagnosis between a psycho-organic pathology like dementia or a functional psychiatric pathology like depressive pseudo-dementia (also named melancholic dementia), in which some patterns of dysfunction can now be identified by functional imaging.
abstract_id: PUBMED:35723098
Differential Diagnosis Between Alzheimer's Disease-Related Depression and Pseudo-Dementia in Depression: A New Indication for Amyloid-β Imaging? Background: Alzheimer's disease and depression can start with combined cognitive and depressive symptoms [1, 2]. Accurate differential diagnosis is desired to initiate specific treatment.
Objective: We investigated whether amyloid-β PET imaging can discriminate both entities.
Methods: This retrospective observational study included 39 patients (20 female, age = 70±11years) with both cognitive and depressive symptoms who underwent amyloid-β PET imaging and in whom clinical follow-up data was available. Amyloid-β PET was carried out applying [18F]Florbetaben or [11C]PiB. The PET images were analyzed by standardized visual and relative-quantitative evaluation. Based on clinical follow-up (median of 2.4 years [range 0.3 to 7.0 years, IQR = 3.7 years] after amyloid PET imaging which was not considered in obtaining a definite diagnosis), discrimination ability between AD-related depression and pseudo-dementia in depression/depression with other comorbidities was determined.
Results: Visually, all 10 patients with pseudo-dementia in depression and all 15 patients with other depression were rated as amyloid-β-negative; 2 of 14 patients with AD-related depression were rated amyloid-β-negative. ROC curve analysis of the unified composite standardized uptake value ratios (cSUVRs) was able to discriminate pseudo-dementia in depression from AD-related depression with high accuracy (AUC = 0.92). Optimal [18F]Florbetaben discrimination cSUVR threshold was 1.34. In congruence with the visual PET analysis, the resulting sensitivity of the relative-quantitative analysis was 86% with a specificity of 100%.
Conclusion: Amyloid-β PET can differentiate AD-related depression and pseudo-dementia in depression. Prospective clinical studies are warranted to confirm this result and to potentially broaden the spectrum of clinical applications for amyloid-β PET imaging.
abstract_id: PUBMED:1342662
SPECT and depressive pseudo-dementia. A case report The SPECT (Single Photon Emission Computed Tomography), a new advance in medical imaging, allows the measurement of cerebral blood flow and could be of interest in studying mental disorders. We report here a case of pseudo-dementia for which a SPECT was performed before and after treatment. Mrs V., a 49-year-old female, has been suffering from a dementia-like syndrome for several months. She is divorced, has two children, lives with a boyfriend, and has been working in a factory for 25 years. The first psychiatric disorders began three years ago with a gradual apragmatism and muteness. A neuroleptic treatment gave no result. One year later, without any reason, Mrs V. recovered a normal way of life. Nevertheless, from time to time, she had some periods of subexcitation. A few months later, she relapsed into her previous state of apragmatism and muteness. During a new hospitalization, neuroleptic treatment is tried again without any success. Mrs V. is then referred to us for medical screening of a dementia syndrome. In the Unit, it is difficult to communicate with her; she looks sad or amimic and has motor stereotypies (like rubbing her feet continuously against the floor). She has polydipsia and gluttony. Neurologic examination is normal, as well as EEG, CT scan, and nuclear magnetic resonance imaging. The Folstein Mini Mental State score is 9/30. (ABSTRACT TRUNCATED AT 250 WORDS)
abstract_id: PUBMED:7136480
Pseudo-Hakim-Adams syndrome. Neuropsychological study of 23 cases of misdiagnosed Hakim-Adams syndrome The authors study the characteristics of a group of 23 patients for whom an initial tentative diagnosis of Hakim-Adams syndrome (H-A syndrome) was eventually rejected. On account of several factors distinguishing these patients from the true H-A group, the authors propose using the term "pseudo Hakim-Adams syndrome". The distinguishing factors include: the grounds for admission, i.e. mental or mnesic deterioration, associated with radiological images of ventricular dilatation; a clinical picture dominated by mental disorders with only rare disorders of gait and sphincter control (several neurological defects were direct consequences of previous cerebral disease); no antecedents of spontaneous meningeal hemorrhage or meningitis in the case history; neuropsychological examination showing fewer disorders of concentration and less dyscalculia and constructive dyspraxia than in true HA, and far more atypical signs of the aphasic and anxio-depressive type. The authors think that various pathological processes may be responsible for this pseudo H-A syndrome in which a predominant mental picture is associated with ventricular dilatation.
abstract_id: PUBMED:25024563
Pseudo-dementia: A neuropsychological review. Ever since Kiloh (1961)[2] coined the term pseudo-dementia, it has been used a little loosely to describe the cognitive deficits of depression, especially those found in old age. However, several diagnostic dilemmas persist regarding the nosological status of this condition. Teasing out these individual diagnostic problems is important not only for administering appropriate therapy, but also for sparing patients unnecessary diagnostic assessments directed at other diagnoses. Thus, it is important to have a detailed knowledge of the cognitive or neuropsychological deficits in this condition. In this review, we start by addressing the important issue of diagnostic confusion between dementia and pseudo-dementia. Subsequently, we proceed by reviewing the present scientific literature on the cognitive deficits found in this clinical condition. For the sake of convenience, we will divide the cognitive deficits into: memory deficits, executive function deficits, and deficits in the speech and language domains. Finally, we will look at the progression of this condition to see which components of this condition can actually be called "pseudo".
abstract_id: PUBMED:17675934
Pseudo-dementia conversion and post-traumatic stress disorder Background: Post-traumatic stress disorder (PTSD) is often associated with other psychiatric syndromes. However, studies exploring conversion and PTSD comorbidity are scarce.
Case-report: This paper reports the case of a 45-year-old patient without medical or psychiatric history. In 2003, he suddenly started suffering from amnesia and symptoms of delirium: he was at his office with a cup of coffee but he did not remember why. Aphasia, trembling, and behavioural disorders appeared over the next hours and days. Numerous neurological examinations and laboratory tests (including cerebral imaging) were performed without evidence of any physical disease. Three psychiatric examinations were also negative, even if a possible psychogenic origin was hypothesized. Neurological or psychiatric diagnoses were discussed but without definitive conclusion. One year later, the symptoms were unchanged until the patient watched a movie ("Mystic River") that described the story of a man with sexual abuse in childhood. He suddenly remembered that he had lived through the same experience when he was 8 years old. At the end of the movie, his wife surprisingly noticed that he was walking and speaking normally. All the neurological symptoms disappeared. Unfortunately, symptoms of a severe PTSD appeared, as well as a major depressive disorder. The patient and his parents remembered that he had been more irritable, depressed and anxious at school and during the night, between 8 and 13 years of age, with a possible PTSD during this period. He always refused to talk with his parents about the traumatic event. When he was 13, the family moved house, the patient seemed to forget everything and the symptoms disappeared. About thirty years later, the symptoms were similar, with re-experiencing of the traumatic event through unwanted recollections, distressing images, nightmares, or flashbacks. He also had symptoms of hyperarousal with physiological manifestations, such as irritability, insomnia, impaired concentration, hypervigilance, and increased startle reactions. Hospitalisation became necessary because of a severe depressive disorder with suicidal ideation and a suicide attempt by hanging. After two failed treatments with SSRI antidepressants, the administration of clomipramine (200 mg/d) and combined therapy with Eye Movement Desensitization and Reprocessing (EMDR) led to a significant improvement in PTSD and depression symptoms.
Discussion: Although PTSD and conversion may share common dissociative mechanisms, the links between the two syndromes have not yet been sufficiently explored. Our clinical case specifically raises the question of the initial manifestations of pseudo-dementia (why this type of symptom, and why at this particular moment of his life, without any triggering events). Moreover, the case of this patient is particularly interesting because of the very long amnesia period between the traumatic event and the onset of PTSD.
Conclusion: The different phases of this case warrant more precise exploration of the links between PTSD and conversion, with clinical, epidemiological and cerebral imagery perspectives.
abstract_id: PUBMED:3649928
Mental confusions and pseudo-dementias N/A
Answer: Based on the provided abstracts, pseudo-dementia refers to a condition where cognitive deficits mimic those of dementia but are actually due to other causes, such as depression or other psychiatric disorders. The term has been used to describe cognitive impairments in depression, particularly in older adults, and it can present diagnostic challenges because it is important to distinguish it from true dementia for appropriate treatment (PUBMED:25024563).
Several abstracts discuss the differentiation between pseudo-dementia and other conditions. For instance, one study highlights the use of amyloid-β PET imaging to differentiate between Alzheimer's disease-related depression and pseudo-dementia in depression, suggesting that this imaging technique can help in making an accurate diagnosis (PUBMED:35723098). Another abstract describes a case where depressive pseudo-dementia was supported by patterns of dysfunction identified by functional imaging, such as 18-FDG PET-Scanner (PUBMED:24683825). Similarly, SPECT imaging has been used to measure cerebral blood flow and could be of interest in studying mental disorders, including pseudo-dementia (PUBMED:1342662).
There are also reports of pseudo-dementia in the context of other medical conditions, such as a case of subacute pseudo-dementia due to a dural fistula (PUBMED:15037849), and in chronic intestinal pseudo-obstruction where percutaneous endoscopic cecostomy was used for symptom relief (PUBMED:26493622).
Furthermore, the term "pseudo-dementia" has been used in the context of conversion and post-traumatic stress disorder, where a patient exhibited dementia-like symptoms that were later understood to be related to re-experiencing a traumatic event (PUBMED:17675934).
In summary, the term "pseudo-dementia" is used to describe a condition where cognitive deficits appear similar to dementia but are actually due to other underlying causes, such as depression or psychiatric disorders. Differentiating pseudo-dementia from true dementia is crucial for proper treatment, and various imaging techniques may aid in this differentiation. However, without specific clinical details or assessment results, it is not possible to determine whether a particular case is pseudo-dementia. |
Instruction: Is serous cystadenoma of the pancreas a model of clear-cell-associated angiogenesis and tumorigenesis?
Abstracts:
abstract_id: PUBMED:19077470
Is serous cystadenoma of the pancreas a model of clear-cell-associated angiogenesis and tumorigenesis? Background: Similar to the other von Hippel-Lindau (VHL)-related tumors such as renal cell carcinomas and capillary hemangioblastomas, serous cystadenomas (SCAs) of the pancreas are also characterized by clear cells. Over the years, we have also noticed that the tumor epithelium shows a prominent capillary network.
Methods: Eighteen cases of SCA were reviewed histologically, and immunohistochemical analysis was performed for CD31 and vascular endothelial growth factor (VEGF) as well as the molecules implicated in clear-cell tumorigenesis: GLUT-1, hypoxia-inducible factor-1 (HIF-1alpha), and carbonic anhydrase IX (CA IX).
Results: There was an extensively rich capillary network, appearing almost intraepithelially, in all cases of SCA; this was confirmed by CD31 staining, which showed, on average, 26 capillaries for every 100 epithelial cells. VEGF expression was identified in 10/18 cases. Among the clear-cell tumorigenesis markers, CA IX was detected in all cases, and GLUT-1 and HIF-1alpha in most cases.
Conclusion: As in other VHL-related clear-cell tumors, there is a prominent capillary network immediately adjacent to the epithelium of SCA, confirming that the clear-cell-angiogenesis association is also valid for this tumor type. Molecules implicated in clear-cell tumorigenesis are also consistently expressed in SCA. This may have biologic and therapeutic implications, especially considering the rapidly evolving drugs against these pathways. More importantly, SCA may also serve as a model of clear-cell-associated angiogenesis and tumorigenesis, and the information gained from this tumor type may also be applicable to other clear-cell tumors.
abstract_id: PUBMED:24468454
Colitis-associated colorectal carcinoma: epidemiology, pathogenesis and early diagnosis Colitis-associated colorectal carcinoma (CRC) accounts for about 5% of all CRC and the risk for CRC in inflammatory bowel disease (IBD) patients - according to older meta-analyses - is slightly increased when compared to normal population. Effective anti-inflammatory therapy seems to decrease this risk. Main risk factors for colitis-associated CRC are pancolitis, duration of colitis and presence of primary sclerosing cholangitis. In contrast to sporadic CRC, a characteristic adenoma-carcinoma sequence in the pathogenesis of colitis-associated CRC cannot be found. Nevertheless, numerous cell and gene defects occur. Reactive oxygen species also seem to play a pivotal role in the pathogenesis of colitis-associated CRC. Particularly patients with chronically active pancolitis should undergo regular surveillance colonoscopy, since prognosis of colitis-associated CRC is poor.
abstract_id: PUBMED:28852270
The Role of Proinflammatory Pathways in the Pathogenesis of Colitis-Associated Colorectal Cancer. Patients with inflammatory bowel disease (IBD) are at an increased risk of developing colorectal cancer (CRC). The risk factors of CRC in IBD patients include long disease duration, extensive colitis, severe histological inflammation, and coexistence with primary sclerosing cholangitis (PSC). Several molecular pathways that contribute to sporadic CRC are also involved in the pathogenesis of colitis-associated CRC. It is well established that long-standing chronic inflammation is a key predisposing factor of CRC in IBD. Proinflammatory pathways, including nuclear factor kappa B (NF-κB), IL-6/STAT3, cyclooxygenase-2 (COX-2)/PGE2, and IL-23/Th17, promote tumorigenesis by inducing the production of inflammatory mediators, upregulating the expression of antiapoptotic genes, and stimulating cell proliferation as well as angiogenesis. Better understanding of the underlying mechanisms may provide some promising targets for prevention and therapy. This review aims to elucidate the role of these signaling pathways in the pathogenesis of colitis-associated CRC using evidence-based approaches.
abstract_id: PUBMED:33570127
Overexpression of Cancer-Associated Stem Cell Gene OLFM4 in the Colonic Epithelium of Patients With Primary Sclerosing Cholangitis. Background: To examine immune-epithelial interactions and their impact on epithelial transformation in primary sclerosing cholangitis-associated ulcerative colitis (PSC-UC) using patient-derived colonic epithelial organoid cultures (EpOCs).
Methods: The EpOCs were originated from colonic biopsies from patients with PSC-UC (n = 12), patients with UC (n = 14), and control patients (n = 10) and stimulated with cytokines previously associated with intestinal inflammation (interferon (IFN) γ and interleukin (IL)-22). Markers of cytokine downstream pathways, stemness, and pluripotency were analyzed by real-time quantitative polymerase chain reaction and immunofluorescence. The OLFM4 expression in situ was assessed by RNAscope and immunohistochemistry.
Results: A distinct expression of stem cell-associated genes was observed in EpOCs derived from patients with PSC-UC, with lower expression of the classical stem-cell marker LGR5 and overexpression of OLFM4, previously associated with pluripotency and early stages of neoplastic transformation in the gastrointestinal and biliary tracts. High levels of OLFM4 were also found ex vivo in colonic biopsies from patients with PSC-UC. In addition, IFNγ stimulation resulted in the downregulation of LGR5 in EpOCs, whereas higher expression of OLFM4 was observed after IL-22 stimulation. Interestingly, expression of the IL-22 receptor, IL22RA1, was induced by IFNγ, suggesting that a complex interplay between these cytokines may contribute to carcinogenesis in PSC-UC.
Conclusions: Higher expression of OLFM4, a cancer stemness gene induced by IL-22, is present in PSC-UC, suggesting that IL-22 responses may result in alterations of the intestinal stem-cell niche in these patients.
abstract_id: PUBMED:29601088
Blocking H1/H2 histamine receptors inhibits damage/fibrosis in Mdr2-/- mice and human cholangiocarcinoma tumorigenesis. Primary sclerosing cholangitis (PSC) patients are at risk of developing cholangiocarcinoma (CCA). We have shown that (1) histamine increases biliary hyperplasia through H1/H2 histamine receptors (HRs) and (2) histamine levels increase and mast cells (MCs) infiltrate during PSC and CCA. We examined the effects of chronic treatment with H1/H2HR antagonists on PSC and CCA. Wild-type and multidrug-resistant knockout (Mdr2-/- ) mice were treated by osmotic minipumps with saline, mepyramine, or ranitidine (10 mg/kg body weight/day) or a combination of mepyramine/ranitidine for 4 weeks. Liver damage was assessed by hematoxylin and eosin. We evaluated (1) H1/H2HR expression, (2) MC presence, (3) L-histidine decarboxylase/histamine axis, (4) cholangiocyte proliferation/bile duct mass, and (5) fibrosis/hepatic stellate cell activation. Nu/nu mice were implanted with Mz-ChA-1 cells into the hind flanks and treated with saline, mepyramine, or ranitidine. Tumor growth was measured, and (1) H1/H2HR expression, (2) proliferation, (3) MC activation, (4) angiogenesis, and (5) epithelial-mesenchymal transition (EMT) were evaluated. In vitro, human hepatic stellate cells were evaluated for H1HR and H2HR expression. Cultured cholangiocytes and CCA lines were treated with saline, mepyramine, or ranitidine (25 μM) before evaluating proliferation, angiogenesis, EMT, and potential signaling mechanisms. H1/H2HR and MC presence increased in human PSC and CCA. In H1/H2HR antagonist (alone or in combination)-treated Mdr2-/- mice, liver and biliary damage and fibrosis decreased compared to saline treatment. H1/H2HR antagonists decreased tumor growth, serum histamine, angiogenesis, and EMT. In vitro, H1/H2HR blockers reduced biliary proliferation, and CCA cells had decreased proliferation, angiogenesis, EMT, and migration. Conclusion: Inhibition of H1/H2HR reverses PSC-associated damage and decreases CCA growth, angiogenesis, and EMT; because PSC patients are at risk of developing CCA, using HR blockers may be therapeutic for these diseases. (Hepatology 2018).
abstract_id: PUBMED:29705791
Biliary Tract Cancer: Implicated Immune-Mediated Pathways and Their Associated Potential Targets. There is a well-established link between biliary tract cancers (BTC) and chronic inflammatory conditions such as primary sclerosing cholangitis, chronic cholecystitis, chronic cholelithiasis, liver fluke-associated infestations, and chronic viral hepatic infections. These associated risk factors highlight the potential for development of immune-modulatory agents in this poor-prognostic disease group with limited treatment options. Clinical trials have evaluated the role of immune cells, inflammatory biomarkers, vaccines, cytokines, adoptive cell therapy, and immune checkpoint inhibitors in patients with BTC. Although these have demonstrated the importance of the immune environment in BTC, currently none of the immune-based therapies have been approved for use in this disease group. The role of immunomodulatory agents is a developing field and has yet to find its way 'from bench to bedside' in BTC.
abstract_id: PUBMED:30102768
Neoplastic Transformation of the Peribiliary Stem Cell Niche in Cholangiocarcinoma Arisen in Primary Sclerosing Cholangitis. Primary sclerosing cholangitis (PSC) is a chronic inflammatory cholangiopathy frequently complicated by cholangiocarcinoma (CCA). Massive proliferation of biliary tree stem/progenitor cells (BTSCs), expansion of peribiliary glands (PBGs), and dysplasia were observed in PSC. The aims of the present study were to evaluate the involvement of PBGs and BTSCs in CCA which emerged in PSC patients. Specimens from normal liver (n = 5), PSC (n = 20), and PSC-associated CCA (n = 20) were included. Samples were processed for histology, immunohistochemistry, and immunofluorescence. In vitro experiments were performed on human BTSCs, human mucinous primary CCA cell cultures, and human cholangiocyte cell lines (H69). Our results indicated that all CCAs emerging in PSC patients were mucin-producing tumors characterized by PBG involvement and a high expression of stem/progenitor cell markers. Ducts with neoplastic lesions showed higher inflammation, wall thickness, and PBG activation compared to nonneoplastic PSC-affected ducts. CCA showed higher microvascular density and higher expression of nuclear factor kappa B, interleukin-6, interleukin-8, transforming growth factor β, and vascular endothelial growth factor-1 compared to nonneoplastic ducts. CCA cells were characterized by a higher expression of epithelial-to-mesenchymal transition (EMT) traits and by the absence of primary cilia compared to bile ducts and PBG cells in controls and patients with PSC. Our in vitro study demonstrated that lipopolysaccharide and oxysterols (PSC-related stressors) induced the expression of EMT traits, the nuclear factor kappa B pathway, autophagy, and the loss of primary cilia in human BTSCs. Conclusion: CCA arising in patients with PSC is characterized by extensive PBG involvement and by activation of the BTSC niche in these patients, the presence of duct lesions at different stages suggests a progressive tumorigenesis.
abstract_id: PUBMED:22919240
Colorectal cancer in patients with inflammatory bowel disease: can we predict risk? The inflammatory bowel diseases (IBD), Crohn's disease (CD) and ulcerative colitis (UC), may be complicated by colorectal cancer (CRC). In a recent population-based cohort study of 47 347 Danish patients with IBD by Tine Jess and colleagues 268 patients with UC and 70 patients with CD developed CRC during 30 years of observation. The overall risk of CRC among patients with UC and CD was comparable with that of the general population. However, patients diagnosed with UC during childhood or as adolescents, patients with long duration of disease and those with concomitant primary sclerosing cholangitis were at increased risk. In this commentary, we discuss the mechanisms underlying carcinogenesis in IBD and current investigations of genetic susceptibility in IBD patients. Further advances will depend on the cooperative work by epidemiologist and molecular geneticists in order to identify genetic polymorphisms involved in IBD-associated CRC. The ultimate goal is to incorporate genotypes and clinical parameters into a predictive model that will refine the prediction of risk for CRC in colonic IBD. The challenge will be to translate these new findings into clinical practice and to determine appropriate preventive strategies in order to avoid CRC in IBD patients. The achieved knowledge may also be relevant for other inflammation-associated cancers.
abstract_id: PUBMED:26674110
Colorectal Cancer in Inflammatory Bowel Disease: Epidemiology, Pathogenesis and Surveillance. Background: Inflammatory bowel disease (IBD; including ulcerative colitis and Crohn's disease) is associated with an increased risk for colorectal cancer (CRC). Chronic mucosal inflammation is a key factor in the onset of carcinogenesis in IBD patients. Although most gene alterations that cause sporadic CRCs also occur in patients with IBD-associated CRC, some gene sequences and mutation frequencies differ between sporadic CRCs and IBD-associated CRCs.
Summary: This review explores the incidence of CRC in IBD patients, with the goal of identifying the risk and protective factors for CRC in order to facilitate dysplasia management via individualized surveillance strategies.
Key Message: The incidence of CRC is higher among IBD patients. Identifying the risk and protective factors for CRC will facilitate dysplasia management via individualized surveillance strategies.
Practical Implications: Several risk factors, including active inflammation, the coexistence of primary sclerosing cholangitis, a family history of sporadic CRC and the extent and duration of colonic disease, can lead to the development of CRC in patients with IBD. These risk factors should be utilized in individualized surveillance strategies to lower CRC incidence among IBD patients. Use of 5-aminosalicylic acid may play an important role in CRC prevention. Until newer, more reliable markers of IBD-associated CRC risk are found, dysplasia will continue to be the best marker of CRC risk in IBD. Dysplasia management continues to play a key role in preventing the progression of carcinogenesis.
abstract_id: PUBMED:28844951
Cholangiocytes in the pathogenesis of primary sclerosing cholangitis and development of cholangiocarcinoma. Primary sclerosing cholangitis (PSC) is an idiopathic cholangiopathy strongly associated with inflammatory bowel disease (IBD) and characterized by cholestasis, chronic immune infiltration and progressive fibrosis of the intrahepatic and extrahepatic bile ducts. PSC confers a high risk of cholangiocarcinoma (CCA) with PSC-CCA representing the leading cause of PSC-associated mortality. PSC-CCA is derived from cholangiocytes and associated progenitor cells - a heterogeneous group of dynamic epithelial cells lining the biliary tree that modulate the composition and volume of bile production by the liver. Infection, inflammation and cholestasis can trigger cholangiocyte activation leading to an increased expression of adhesion and antigen-presenting molecules as well as the release of various inflammatory and fibrogenic mediators. As a result, activated cholangiocytes engage in a myriad of cellular processes, including hepatocellular proliferation, apoptosis, angiogenesis and fibrosis. Cholangiocytes can also regulate the recruitment of immune cells, mesenchymal cells, and endothelial cells that participate in tissue repair and destruction in settings of persistent inflammation. In PSC, the role of cholangiocytes and the mechanisms governing their transformation to PSC-CCA are unclear however localization of disease suggests that cholangiocytes are a key target and potential regulator of hepatobiliary immunity, fibrogenesis and tumorigenesis. Herein, we summarize mechanisms of cholangiocyte activation in PSC and highlight new insights into disease pathways that may contribute to the development of PSC-CCA. This article is part of a Special Issue entitled: Cholangiocytes in Health and Disease edited by Jesus Banales, Marco Marzioni, Nicholas LaRusso and Peter Jansen.
Answer: Yes, serous cystadenoma (SCA) of the pancreas may serve as a model of clear-cell-associated angiogenesis and tumorigenesis. The study referenced in abstract PUBMED:19077470 found that SCAs of the pancreas, similar to other von Hippel-Lindau (VHL)-related tumors characterized by clear cells, exhibit a prominent capillary network immediately adjacent to the epithelium. This was confirmed by CD31 staining, which showed an extensively rich capillary network that appears almost intraepithelially in all cases of SCA. Additionally, the expression of vascular endothelial growth factor (VEGF) was identified in some of the cases. The study also found consistent expression of molecules implicated in clear-cell tumorigenesis, such as carbonic anhydrase IX (CA IX), GLUT-1, and hypoxia-inducible factor-1 (HIF-1alpha) in SCAs. These findings suggest that SCAs share molecular features with other clear-cell tumors and may have biological and therapeutic implications, especially considering the evolving drugs targeting these pathways. Therefore, SCAs may indeed serve as a model for studying clear-cell-associated angiogenesis and tumorigenesis.
Instruction: High-density lipoprotein cholesterol in coronary artery disease patients: is it as low as expected?
Abstracts:
abstract_id: PUBMED:31023096
High-density lipoprotein cholesterol and the risk of obstructive coronary artery disease beyond low-density lipoprotein cholesterol in non-diabetic individuals. Aims: The relationship between high-density lipoprotein cholesterol and the severity of coronary artery disease beyond low-density lipoprotein cholesterol, the primary target of cholesterol-lowering therapy, remains uncertain. We evaluated the association between high-density lipoprotein cholesterol and obstructive coronary artery disease using parameters of any obstructive plaque, obstructive plaque in the left main coronary artery or proximal left anterior descending artery, and obstructive plaque in multi-vessels, according to low-density lipoprotein cholesterol levels.
Methods And Results: We analyzed 5130 asymptomatic non-diabetics who underwent coronary computed tomography angiography for general health examination. Obstructive plaque was defined as a plaque with ≥50% luminal diameter stenosis. The participants were divided into three groups based on low-density lipoprotein cholesterol levels of ≤129, 130-159, and ≥160 mg/dl. The prevalence of any obstructive plaque (5.9% vs 6.4% vs 10.6%) and obstructive plaque in the left main coronary artery or proximal left anterior descending artery (2.1% vs 2.1% vs 4.3%) significantly increased with low-density lipoprotein cholesterol category (all p < 0.05). Compared with subjects with high-density lipoprotein cholesterol level ≥40 mg/dl, those with high-density lipoprotein cholesterol level <40 mg/dl had a significantly higher prevalence of any obstructive plaque (10.4% vs 5.1%), obstructive plaque in the left main coronary artery or proximal left anterior descending artery (3.6% vs 1.8%), and obstructive plaque in multi-vessels (4.3% vs 1.1%), only in the group with low-density lipoprotein cholesterol level ≤129 mg/dl (all p < 0.05). Multiple regression analysis showed that increased high-density lipoprotein cholesterol levels were associated with a reduced risk of all obstructive coronary artery disease parameters only in the group with low-density lipoprotein cholesterol level ≤129 mg/dl (all p < 0.05).
Conclusion: Increased high-density lipoprotein cholesterol levels were independently associated with a lower risk of obstructive coronary artery disease in asymptomatic non-diabetics with low low-density lipoprotein cholesterol levels.
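Worked example: in the low-LDL stratum above, the prevalence of any obstructive plaque was 10.4% when HDL-C was < 40 mg/dl versus 5.1% when HDL-C was ≥ 40 mg/dl. As a rough, unadjusted illustration of the strength of that association (editorial arithmetic on the reported prevalences, not the study's multivariable model), the implied odds ratio is about 2.2:

# Unadjusted odds ratio implied by the reported prevalences (illustration only)
p_low_hdl = 0.104    # any obstructive plaque when HDL-C < 40 mg/dl (LDL-C <= 129 group)
p_high_hdl = 0.051   # any obstructive plaque when HDL-C >= 40 mg/dl (LDL-C <= 129 group)
odds_ratio = (p_low_hdl / (1 - p_low_hdl)) / (p_high_hdl / (1 - p_high_hdl))
print(round(odds_ratio, 2))  # ~2.16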
abstract_id: PUBMED:31591337
Low High-Density Lipoprotein Cholesterol Predisposes to Coronary Artery Ectasia. Coronary Artery Ectasia (CAE) is a phenomenon characterized by locally or diffuse coronary artery dilation of one or more coronary arteries. In the present study, the prevalence of acquired coronary ectasia and coronary risk factors for CAE was analyzed in patients undergoing cardiac catheterization for suspected ischemic heart disease. We retrospectively analyzed 4000 patients undergoing coronary angiography for suspected coronary artery disease at our cardiac catheterization unit, and a total of 171 patients were selected. The study group was divided into three groups, 65 patients with CAE, 62 patients with significant obstructive coronary artery disease, and 44 patients with normal coronary angiograms as a control group. A negative correlation was observed between high-density lipoprotein cholesterol (HDL-C) and the presence of CAE (r = -0.274, p < 0.001). In addition, HDL-C (OR, 0.858; CI, 0.749-0.984; p = 0.029), low-density lipoprotein cholesterol (LDL-C)/HDL-C ratio (OR, 1.987; CI, 1.542-2.882; p = 0.034), and hemoglobin (OR, 2.060; CI, 1.114-3.809; p = 0.021) were identified as independent risk factors for the development of CAE. In fact, we observed that a one-unit increase in HDL-C corresponded to a 15% risk reduction in CAE development and that each unit increase in hemoglobin could potentially increase the CAE risk by 2-fold. Low HDL-C could significantly increase the risk of developing CAE in healthy individuals. Elevated hemoglobin could predispose to subsequent dilation and aneurysm of the coronary artery. This work suggests that disordered lipoprotein metabolism or altered hemoglobin values can predispose patients to aneurysmal coronary artery disease.
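Worked example: the "15% risk reduction per unit of HDL-C" quoted above follows directly from the reported odds ratio of 0.858, and the "2-fold" figure for hemoglobin from its odds ratio of 2.060 (reading odds ratios as approximate relative risks, which is reasonable when the outcome is uncommon):

or_hdl = 0.858               # odds ratio per 1-unit increase in HDL-C (from the abstract)
print((1 - or_hdl) * 100)    # 14.2 -> reported as an ~15% reduction in CAE risk per unit
or_hemoglobin = 2.060        # odds ratio per 1-unit increase in hemoglobin
print(or_hemoglobin)         # ~2-fold increase in CAE risk per unit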
abstract_id: PUBMED:9819099
The antiatherogenic role of high-density lipoprotein cholesterol. Landmark clinical studies in the past 5 years that demonstrated diminished mortality and first coronary events following lowering of low-density lipoprotein (LDL) cholesterol stimulated considerable interest in the medical community. Yet, high-density lipoprotein (HDL) cholesterol, which transports circulating cholesterol to the liver for clearance, clearly also exerts antiatherogenic effects. The Framingham Heart Study produced compelling epidemiologic evidence indicating that a low level of HDL cholesterol was an independent predictor of coronary artery disease (CAD). Emerging experimental and clinical findings are, collectively, now furnishing a solid scientific foundation for this relation. First, the reverse cholesterol transport pathway--including the roles of nascent (pre-beta) HDL, apolipoprotein A-I, lecithin-cholesterol acyltransferase (LCAT), cholesteryl ester transport protein, and hepatic uptake of cholesteryl ester from HDL by liver--is better understood. For example, the identification of a hepatic HDL receptor, SR-BI, suggests a mechanism of delivery of cholesteryl ester to liver that differs from the receptor-mediated uptake of LDL. Second, apolipoprotein A-I, the major protein component of HDL, and 2 enzymes on HDL, paraoxonase and platelet-activating factor acetylhydrolase appear to diminish the formation of the highly atherogenic oxidized LDL. Third, lower levels of HDL cholesterol are associated in a dose-response fashion with the severity and number of angiographically documented atherosclerotic coronary arteries. Fourth, low HDL cholesterol predicts total mortality in patients with CAD and desirable total cholesterol levels (<200 mg/dL). Fifth, low HDL cholesterol concentrations appear to be associated with increased rates of restenosis after percutaneous transluminal coronary angioplasty. In terms of elevating HDL cholesterol, cessation of cigarette smoking, reduction to ideal body weight, and regular aerobic exercise all appear important. Most medications used to treat dyslipidemias will raise HDL cholesterol levels modestly; however, niacin appears to have the greatest potential to do so, and can increase HDL cholesterol up to 30%. Recognizing these data, the most recent report of the National Cholesterol Education Program identified low HDL cholesterol as a CAD risk factor and recommended that all healthy adults be screened for both total cholesterol and HDL cholesterol levels.
abstract_id: PUBMED:36436874
Impact of High-Density Lipoprotein Function, Rather Than High-Density Lipoprotein Cholesterol Level, on Cardiovascular Disease Among Patients With Familial Hypercholesterolemia. Background: Recently, the function of high-density lipoprotein (HDL), rather than the HDL cholesterol (HDL-C) level, has been attracting more attention in risk prediction for coronary artery disease (CAD).
Methods And Results: Patients with clinically diagnosed familial hypercholesterolemia (FH; n=108; male/female, 51/57) were assessed cross-sectionally. Serum cholesterol uptake capacity (CUC) levels were determined using our original cell-free assay. Linear regression was used to determine associations between CUC and clinical variables, including low-density lipoprotein cholesterol and the carotid plaque score. Multivariable logistic regression analysis was used to test factors associated with the presence of CAD. Among the 108 FH patients, 30 had CAD. CUC levels were significantly lower among patients with than without CAD (median [interquartile range] 119 [92-139] vs. 142 [121-165] arbitrary units [AU]; P=0.0004). In addition, CUC was significantly lower in patients with Achilles tendon thickness ≥9.0 mm than in those without Achilles tendon thickening (133 [110-157] vs. 142 [123-174] AU; P=0.047). Serum CUC levels were negatively correlated with the carotid plaque score (Spearman's r=0.37; P=0.00018). Serum CUC levels were significantly associated with CAD, after adjusting for other clinical variables (odds ratio=0.86, 95% CI=0.76-0.96, P=0.033), whereas HDL-C was not.
Conclusions: HDL function, assessed by serum CUC level, rather than HDL-C level, adds risk stratification information among FH patients.
abstract_id: PUBMED:28462120
High-density lipoprotein cholesterol (HDL-C) in cardiovascular disease: effect of exercise training. Decreases in high-density lipoprotein cholesterol (HDL-C) levels are associated with an increased risk of coronary artery disease (CAD), whereas increased HDL-C levels are related to a decreased risk of CAD and myocardial infarction. Although HDL prevents the oxidation of low-density lipoprotein under normal conditions, it triggers a structural change, inhibiting antiarteriosclerotic and anti-inflammatory functions, under pathological conditions such as oxidative stress, inflammation, and diabetes. HDL can transform into various structures based on the quantitative reduction and deformation of apolipoprotein A1 and is the primary cause of increased levels of dysfunctional HDL, which can lead to an increased risk of CAD. Therefore, analyzing the structure and components of HDL rather than HDL-C after the application of an exercise training program may be useful for understanding the effects of HDL.
abstract_id: PUBMED:24014391
High-density lipoprotein cholesterol, coronary artery disease, and cardiovascular mortality. Aims: High-density lipoprotein (HDL) cholesterol is a strong predictor of cardiovascular mortality. This work aimed to investigate whether the presence of coronary artery disease (CAD) impacts on its predictive value.
Methods And Results: We studied 3141 participants (2191 males, 950 females) of the LUdwigshafen RIsk and Cardiovascular health (LURIC) study. They had a mean ± standard deviation age of 62.6 ± 10.6 years, body mass index of 27.5 ± 4.1 kg/m², and HDL cholesterol of 38.9 ± 10.8 mg/dL. The cohort consisted of 699 people without CAD, 1515 patients with stable CAD, and 927 patients with unstable CAD. The participants were prospectively followed for cardiovascular mortality over a median (inter-quartile range) period of 9.9 (8.7-10.7) years. A total of 590 participants died from cardiovascular diseases. High-density lipoprotein cholesterol by tertiles was inversely related to cardiovascular mortality in the entire cohort (P = 0.009). There was significant interaction between HDL cholesterol and CAD in predicting the outcome (P = 0.007). In stratified analyses, HDL cholesterol was strongly associated with cardiovascular mortality in people without CAD [3rd vs. 1st tertile: HR (95% CI) = 0.37 (0.18-0.74), P = 0.005], but not in patients with stable [3rd vs. 1st tertile: HR (95% CI) = 0.81 (0.61-1.09), P = 0.159] and unstable [3rd vs. 1st tertile: HR (95% CI) = 0.91 (0.59-1.41), P = 0.675] CAD. These results were replicated by analyses in 3413 participants of the AtheroGene cohort and 5738 participants of the ESTHER cohort, and by a meta-analysis comprising all three cohorts.
Conclusion: The inverse relationship of HDL cholesterol with cardiovascular mortality is weakened in patients with CAD. The usefulness of considering HDL cholesterol for cardiovascular risk stratification seems limited in such patients.
abstract_id: PUBMED:19167646
Serum levels of remnant lipoprotein cholesterol and oxidized low-density lipoprotein in patients with coronary artery disease. Background: Oxidized low-density lipoprotein (OxLDL) and remnant lipoprotein play a crucial role in the development of atherosclerosis. Recently, a novel method for measuring remnant cholesterol levels (remnant lipoproteins cholesterol homogenous assay: RemL-C) has been established. However, the correlation between OxLDL and remnant lipoprotein, including RemL-C, has not been fully investigated.
Methods: We enrolled 25 consecutive patients with documented coronary artery disease (CAD) and 20 controls. Remnant-like particle cholesterol (RLP-C) and RemL-C were used to determine the levels of remnant lipoprotein cholesterol. Serum levels of malondialdehyde-modified LDL (MDA-LDL) and OxLDL using a monoclonal antibody DLH3 (OxPC) were used to measure the concentration of circulating OxLDL.
Results: The CAD group had high levels of fasting glucose and glycosylated hemoglobin (HbA1c), and low levels of high-density lipoprotein cholesterol compared with the control group. Serum levels of total cholesterol or LDL cholesterol were not significantly different between the two groups. The levels of RemL-C (p = 0.035), MDA-LDL (p = 0.018), and MDA-LDL/LDL-C (p = 0.036) in the CAD group were significantly higher than those in the control group. The levels of RLP-C tended to be higher in the CAD group than those in the control group (p = 0.096). Positive correlations were demonstrated between remnant lipoprotein cholesterol and OxLDL (RLP-C and MDA-LDL/LDL-C, r = 0.45, p = 0.0024, RLP-C and OxPC, r = 0.51, p = 0.0005, RemL-C and MDA-LDL/LDL-C, r = 0.42, p = 0.0044, RemL-C and OxPC, r = 0.43, p = 0.0043). Similar trends were observed in non-diabetic subjects and in subjects without metabolic syndrome. Positive correlations were also observed between RLP-C and RemL-C (r = 0.94, p < 0.0001) and between MDA-LDL/LDL-C and OxPC (r = 0.40, p = 0.0074).
Conclusions: These results suggest that the association between high levels of remnant lipoprotein cholesterol and high OxLDL levels might be linked to atherogenesis in patients with CAD.
abstract_id: PUBMED:10208494
Influence of mild to moderately elevated triglycerides on low density lipoprotein subfraction concentration and composition in healthy men with low high density lipoprotein cholesterol levels. Epidemiologic studies have shown that a dyslipoproteinemia with low concentrations of high density lipoprotein (HDL) cholesterol and elevated serum triglycerides (TG) is associated with a particularly high incidence of coronary artery disease. This lipid profile is associated with increased concentrations of small, dense low density lipoprotein (LDL) particles. To evaluate the role of mild to moderately elevated TG on the LDL subfraction profile in patients with low HDL cholesterol, concentration and composition of six LDL subfractions was determined by density gradient ultracentrifugation in 41 healthy men (31 ± 9 years, body mass index (BMI) 25.1 ± 3.9 kg/m²) with equally low HDL cholesterol levels < 0.91 mmol/l but different TG levels: TG < 1.13 mmol/l, n = 16; TG = 1.13-2.26 mmol/l, n = 13; TG = 2.26-3.39 mmol/l, n = 12. Those men with moderately elevated TG levels between 2.26 and 3.39 mmol/l had significantly higher concentrations of very low density lipoprotein (VLDL), intermediate density lipoprotein (IDL), and small, dense LDL apoB and cholesterol than men with TG < 1.13 mmol/l. With increasing serum TG, the TG content per particle also increased in VLDL, IDL as well as total LDL particles while the cholesterol and phospholipid (PL) content decreased in VLDL and IDL, but not in LDL particles. LDL subfraction analysis revealed that only large, more buoyant LDL particles (d < 1.044 g/ml) but not the smaller, more dense LDL, were enriched in TG. Small, dense LDL particles were depleted of free cholesterol (FC) and PL. This study has shown that in men with low HDL cholesterol levels mild to moderately elevated serum TG strongly suggest the presence of other metabolic cardiovascular risk factors and in particular of a more atherogenic LDL subfraction profile of increased concentration of small, dense LDL particles that are depleted in surface lipids.
abstract_id: PUBMED:25838421
High-Density Lipoprotein (HDL) Phospholipid Content and Cholesterol Efflux Capacity Are Reduced in Patients With Very High HDL Cholesterol and Coronary Disease. Objective: Plasma levels of high-density lipoprotein cholesterol (HDL-C) are strongly inversely associated with coronary artery disease (CAD), and high HDL-C is generally associated with reduced risk of CAD. Extremely high HDL-C with CAD is an unusual phenotype, and we hypothesized that the HDL in such individuals may have an altered composition and reduced function when compared with controls with similarly high HDL-C and no CAD.
Approach And Results: Fifty-five subjects with very high HDL-C (mean, 86 mg/dL) and onset of CAD at the age of ≈ 60 years with no known risk factors for CAD (cases) were identified through systematic recruitment. A total of 120 control subjects without CAD, matched for race, sex, and HDL-C level (controls), were identified. In all subjects, HDL composition was analyzed and HDL cholesterol efflux capacity was assessed. HDL phospholipid composition was significantly lower in cases (92 ± 37 mg/dL) than in controls (109 ± 43 mg/dL; P=0.0095). HDL cholesterol efflux capacity was significantly lower in cases (1.96 ± 0.39) than in controls (2.11 ± 0.43; P=0.04).
Conclusions: In people with very high HDL-C, reduced HDL phospholipid content and cholesterol efflux capacity are associated with the paradoxical development of CAD.
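Worked example: the efflux-capacity comparison above (1.96 ± 0.39 in 55 cases vs 2.11 ± 0.43 in 120 controls, P = 0.04) can be approximately reproduced from the summary statistics alone. The sketch below assumes an unpaired Welch t-test on roughly normal data, which may not be exactly the analysis the authors used:

from scipy.stats import ttest_ind_from_stats

# Summary statistics taken from the abstract; the test choice is an assumption
t, p = ttest_ind_from_stats(mean1=1.96, std1=0.39, nobs1=55,
                            mean2=2.11, std2=0.43, nobs2=120,
                            equal_var=False)  # Welch's t-test
print(round(t, 2), round(p, 3))  # p lands in the vicinity of the reported 0.04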
abstract_id: PUBMED:1912914
Total cholesterol, low density lipoprotein cholesterol, and high density lipoprotein cholesterol and coronary heart disease in Scotland. Objective: To investigate long term changes in total cholesterol, high density lipoprotein cholesterol, and low density lipoprotein cholesterol concentrations and in measures of other risk factors for coronary heart disease and to assess their importance for the development of coronary heart disease in Scottish men.
Design: Longitudinal study entailing follow up in 1988-9 of men investigated during a study in 1976.
Setting: Edinburgh, Scotland.
Subjects: 107 men from Edinburgh who had taken part in a comparative study of risk factors for heart disease with Swedish men in 1976 when aged 40.
Intervention: The men were invited to attend a follow up clinic in 1988-9 for measurement of cholesterol concentrations and other risk factor measurements. Eighty three attended and 24 refused to or could not attend.
Main Outcome Measures: Changes in total cholesterol, high density lipoprotein cholesterol, and low density lipoprotein cholesterol concentrations, body weight, weight to height index, prevalence of smoking, and alcohol intake; number of coronary artery disease events.
Results: Mean serum total cholesterol concentration increased over the 12 years mainly due to an increase in the low density lipoprotein cholesterol fraction (from 3.53 (SD 0.09) to 4.56 (0.11) mmol/l) despite a reduction in high density lipoprotein cholesterol concentration. Body weight and weight to height index increased. Fewer men smoked more than 15 cigarettes/day in 1988-9 than in 1976. Blood pressure remained stable and fasting triglyceride concentrations did not change. The frequency of corneal arcus doubled. Alcohol consumption decreased significantly. Eleven men developed clinical coronary heart disease. High low density lipoprotein and low high density lipoprotein cholesterol concentrations in 1976, but not total cholesterol concentration, significantly predicted coronary heart disease (p = 0.05). Almost all of the men who developed coronary heart disease were smokers (91% v 53%, p less than 0.05).
Conclusion: Over 12 years the lipid profile deteriorated significantly in this healthy cohort of young men. Smoking, a low high density lipoprotein concentration and a raised low density lipoprotein concentration were all associated with coronary heart disease in middle aged Scottish men, whereas there was no association for total cholesterol concentration. The findings have implications for screening programmes.
Answer: The relationship between high-density lipoprotein cholesterol (HDL-C) and coronary artery disease (CAD) is complex. While HDL-C is generally considered protective against CAD, the evidence suggests that this relationship is not always straightforward.
In patients with very high HDL-C levels who also have CAD, it has been observed that the HDL may have altered composition and reduced function compared to controls with similarly high HDL-C levels but no CAD. Specifically, these patients had lower HDL phospholipid content and reduced cholesterol efflux capacity, which are associated with the paradoxical development of CAD (PUBMED:25838421). This suggests that simply having high levels of HDL-C does not always confer protection against CAD, and the functionality of HDL may be more important.
Moreover, the inverse relationship between HDL-C levels and cardiovascular mortality appears to be weakened in patients with CAD. In a study, HDL-C was found to be a strong predictor of cardiovascular mortality in people without CAD, but this association was not observed in patients with stable or unstable CAD (PUBMED:24014391). This indicates that the protective effect of HDL-C may be limited in the presence of established CAD.
Additionally, in a study of patients with familial hypercholesterolemia, a condition characterized by high low-density lipoprotein cholesterol (LDL-C) levels, it was found that HDL function, rather than HDL-C level, was more closely associated with the presence of CAD (PUBMED:36436874). This further supports the idea that the quality of HDL, rather than the quantity, is crucial in the context of CAD.
In summary, while low HDL-C levels are generally associated with an increased risk of CAD, the presence of high HDL-C does not always equate to a lower risk, especially in patients with existing CAD. The functionality of HDL, including its composition and cholesterol efflux capacity, may be a more relevant indicator of cardiovascular risk than HDL-C levels alone.
Instruction: Does intramuscular thermal feedback modulate eccrine sweating in exercising humans?
Abstracts:
abstract_id: PUBMED:24934867
Does intramuscular thermal feedback modulate eccrine sweating in exercising humans? Aim: Few investigators have considered the possibility that skeletal muscles might contain thermosensitive elements capable of modifying thermoeffector responses. In this experiment, the temporal relationships between dynamic changes in deep-body and intramuscular temperatures and eccrine sweat secretion were explored during rhythmical and reproducible variations in heat production.
Methods: Eight subjects performed semi-recumbent cycling (25 °C) at a constant load to first establish whole-body thermal and sudomotor steady states (35 min), followed by a 24-min block of sinusoidal workload variations (three, 8-min periods) and then returning to steady-state cycling (20 min). Individual oesophageal, mean skin and intramuscular (vastus lateralis) temperatures were independently cross-correlated with simultaneously measured forehead sweat rates to evaluate the possible thermal modulation of sudomotor activity.
Results: Both intramuscular and oesophageal temperatures showed strong correlations with sinusoidal variations in sweating with respective maximal cross-correlation coefficients of 0.807 (±0.044) and 0.845 (±0.035), but these were not different (P = 0.40). However, the phase delay between intramuscular temperature changes and sweat secretion was significantly shorter than the delay between oesophageal temperature and sweating [25.6 s (±12.6) vs. 46.9 s (±11.3); P = 0.03].
Conclusion: The temporal coupling of eccrine sweating to intramuscular temperature, combined with a shorter phase delay, was consistent with the presence of thermosensitive elements within skeletal muscles that appear to participate in the modulation of thermal sweating.
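Worked example: the analysis described above — cross-correlating a temperature signal with the sweat-rate signal and reading off the lag at the maximal coefficient — can be sketched with numpy. The signals below are synthetic sinusoids, not the study's data; the 8-min period comes from the protocol, while the sampling interval, amplitudes and the ~26-s delay are arbitrary choices for illustration:

import numpy as np

dt = 1.0                        # sampling interval in seconds (assumed)
t = np.arange(0, 24 * 60, dt)   # 24 min of sinusoidal workload variation
period = 8 * 60                 # 8-min period, as in the protocol

temp = np.sin(2 * np.pi * t / period)          # intramuscular temperature (arbitrary units)
sweat = np.sin(2 * np.pi * (t - 26) / period)  # sweat rate lagging temperature by 26 s

# Cross-correlate the mean-removed signals and locate the lag of maximal correlation
xc = np.correlate(sweat - sweat.mean(), temp - temp.mean(), mode="full")
lags = np.arange(-(len(t) - 1), len(t)) * dt
print(lags[np.argmax(xc)])  # ~26: sweat secretion follows temperature with this phase delay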
abstract_id: PUBMED:20036977
Mechanisms and controllers of eccrine sweating in humans. Human body temperature is regulated within a very narrow range. When exposed to hyperthermic conditions, via environmental factors and/or increased metabolism, heat dissipation becomes vital for survival. In humans, the primary mechanism of heat dissipation, particularly when ambient temperature is higher than skin temperature, is evaporative heat loss secondary to sweat secretion from eccrine glands. While the primary controller of sweating is the integration between internal and skin temperatures, a number of non-thermal factors modulate the sweating response. In addition to summarizing the current understanding of the neural pathways from the brain to the sweat gland, as well as responses at the sweat gland, this review will highlight findings pertaining to studies of proposed non-thermal modifiers of sweating, namely, exercise, baroreceptor loading state, and body fluid status. Information from these studies not only provides important insight pertaining to the basic mechanisms of sweating, but also perhaps could be useful towards a greater understanding of potential mechanisms and consequences of disease states as well as aging in altering sweating responses and thus temperature regulation.
abstract_id: PUBMED:24956027
A non-contact technique for measuring eccrine sweat gland activity using passive thermal imaging. An approach for monitoring eccrine sweat gland activity using high resolution Mid-Wave Infrared (MWIR) imaging (3-5 μm wave band) is described. This technique is non-contact, passive, and provides high temporal and spatial resolution. Pore activity was monitored on the face and on the volar surfaces of the distal and medial phalanges of the index and middle fingers while participants performed a series of six deep inhalation and exhalation exercises. Two metrics called the Pore Activation Index (PAI) and Pore Count (PC) were defined as size-weighted and unweighted measures of active sweat gland counts respectively. PAI transient responses on the finger tips were found to be positively correlated to Skin Conductance Responses (SCRs). PAI responses were also observed on the face, although the finger sites appeared to be more responsive. Results indicate that thermal imaging of the pore response may provide a useful, non-contact, correlate measure for electrodermal responses recorded from related sites.
abstract_id: PUBMED:29544622
The evolution of eccrine sweat glands in human and nonhuman primates. Sweating is an unusual thermoregulatory strategy for most mammals, yet is critical for humans. This trait is commonly hypothesized to result from human ancestors moving from a forest to a warmer and drier open environment. As soft tissue traits do not typically fossilize, this idea has been difficult to test. Therefore, we used a comparative approach to examine 15 eccrine gland traits from 35 primate species. For each trait we measured phylogenetic signal, tested three evolutionary models to explain trait variation, and used phylogenetic models to examine how traits varied in response to climate variables. Phylogenetic signal in traits varied substantially, with the two traits exhibiting the highest values being gland distribution on the body and percent eccrine vs. apocrine glands on the body. Variation in most traits was best explained by an Ornstein-Uhlenbeck model suggesting the importance of natural selection. Two traits were strongly predicted by climate. First, species with high eccrine gland glycogen content were associated with habitats exhibiting warm temperatures and low rainfall. Second, species with increased capillarization were associated with high temperature. Glycogen is a primary energy substrate powering sweat production and sodium reabsorption in the eccrine gland, and increased capillarization permits greater oxygen, glucose and electrolyte delivery. Thus, our results are evidence of natural selection for increased sweating capacity in primate species with body surface eccrine glands living in hot and dry climates. We suggest that selection for increased glycogen content and capillarization may have been part of initial increases in hominin thermoregulatory sweating capacity.
abstract_id: PUBMED:16614366
Neural control and mechanisms of eccrine sweating during heat stress and exercise. In humans, evaporative heat loss from eccrine sweat glands is critical for thermoregulation during exercise and/or exposure to hot environmental conditions, particularly when environmental temperature is greater than skin temperature. Since the time of the ancient Greeks, the significance of sweating has been recognized, whereas our understanding of the mechanisms and controllers of sweating has largely developed during the past century. This review initially focuses on the basic mechanisms of eccrine sweat secretion during heat stress and/or exercise along with a review of the primary controllers of thermoregulatory sweating (i.e., internal and skin temperatures). This is followed by a review of key nonthermal factors associated with prolonged heat stress and exercise that have been proposed to modulate the sweating response. Finally, mechanisms pertaining to the effects of heat acclimation and microgravity exposure are presented.
abstract_id: PUBMED:36896681
Variation in human functional eccrine gland density and its implications for the evolution of human sweating. Objectives: We aim to test three questions regarding human eccrine sweat gland density, which is highly derived yet poorly understood. First, is variation in functional eccrine gland density ("FED") explained by childhood climate, suggesting phenotypic plasticity? Second, is variation in FED explained by genetic similarity (a proxy for "geographic ancestry"), implying divergent evolutionary pathways in this trait of ancestral populations? Third, what is the relationship between FED and sweat production?
Materials And Methods: To test questions one and two, we measured FED in 68 volunteers aged 18-39 with varied childhood climate regimes and geographic ancestries. To test question three, we compared sweat production to FED in our n = 68 sample. In addition, we examined the relationship between FED and whole-body sweat loss during cycling in warm conditions using a sample of eight heat-acclimated endurance athletes.
Results: Interindividual variation in six-site FED was more than twofold, ranging from 60.9 to 132.7 glands/cm2. Variation in FED was best explained by body surface area and limb circumferences (negative associations) and poorly explained by childhood climatic conditions and genetic similarity. Pilocarpine-induced sweat production was unrelated to FED while whole-body sweat loss during cycling was significantly, though modestly, associated with FED.
Discussion: We hypothesize that gland-level phenotypic plasticity, rather than changes in eccrine gland density, was sufficient to permit thermal adaptation to novel environments as humans colonized the globe. Future research should measure effects of FED in dehydrated states and the relationship between FED and salt loss, and control for effects of microclimate to rule out phenotypic plasticity effects.
abstract_id: PUBMED:31128655
Thermogenic and psychogenic sweating in humans: Identifying eccrine glandular recruitment patterns from glabrous and non-glabrous skin surfaces. In this experiment, psychogenic (mental arithmetic), thermogenic (mean body temperature elevation of 0.6 °C) and combined thermo-psychogenic treatments were used to explore eccrine sweat-gland recruitment from glabrous (volar hand and forehead) and non-glabrous skin surfaces (chest). It was hypothesised that each treatment would activate the same glands, and that glandular activity would be intermittent. Nine individuals participated in a single trial with normothermic and mildly hyperthermic phases. When normothermic, a 10-min arithmetical challenge was administered, during which sudomotor activity was recorded. Following passive heating and thermal clamping, sweating responses were again evaluated (10 min). A second arithmetical challenge (10 min) was administered during clamped hyperthermia, with its sudorific impact recorded. The activity of individual sweat glands was recorded at 60-s intervals, using precisely positioned, and uniformly applied, starch-iodide papers. Those imprints were digitised and analysed. Peak activity typically occurred during the thermo-psychogenic treatment, revealing physiologically active densities of 128 (volar hand), 165 (forehead) and 77 glands.cm-2 (chest). Except for the hand (46%), glands uniquely activated by one treatment were consistently <10% of the total glands identified. Glandular activations were most commonly of an intermittent nature, particularly during the thermogenic treatment. Accordingly, we accepted the hypothesis that psychogenic, thermogenic and thermo-psychogenic stimuli activate the same sweat glands in both the glabrous and non-glabrous regions. In addition, this investigation has provided detailed descriptions of the intermittent nature of sweat-gland activity, revealing that a consistent proportion of the physiologically active glands are recruited during these thermal and non-thermal stimuli.
abstract_id: PUBMED:12558618
Axillary hyperhidrosis: eccrine or apocrine? We review the literature regarding axillary hyperhidrosis, discuss normal sweat gland function and postulate on the respective roles of the eccrine, apocrine and apo-eccrine glands in the pathophysiology of excessive axillary sweating.
abstract_id: PUBMED:28648603
Effects of Brn2 overexpression on eccrine sweat gland development in the mouse paw. Eccrine sweat glands regulate body temperature by secreting water and electrolytes. In humans, eccrine sweat glands are ubiquitous in the skin, except in the lips and external genitalia. In mice, eccrine sweat glands are present only in the paw pad. Brn2 is a protein belonging to a large family of transcription factors. A few studies have examined Brn2 in melanoma cells and epidermal keratinocytes. This study investigated changes in the skin in the K5-Brn2 transgenic mouse, which overexpresses Brn2 and contains the keratin 5 promotor. Interestingly, the volume of eccrine sweat glands was reduced markedly in the K5-Brn2 transgenic mouse compared with the wild-type, while the expression of aquaporin 5, important molecule in sweat secretion, was increased in each sweat gland cell, probably to compensate for the reduction in gland development. However, sweating response to a pilocarpine injection in the hind paw was significantly decreased in the K5-Brn2 transgenic mouse compared with the wild-type. The paw epidermis was thicker in the K5-Brn2 transgenic mouse compared with the wild-type. Taken together, eccrine sweat gland development and sweat secretion were suppressed markedly in the K5-Brn2 transgenic mouse. These results may be associated with dominant development of the epidermis by Brn2 overexpression in the paw skin.
abstract_id: PUBMED:1778649
The evolution of sweat glands. Mammals have two kinds of sweat glands, apocrine and eccrine, which provide for thermal cooling. In this paper we describe the distribution and characteristics of these glands in selected mammals, especially primates, and reject the suggested development of the eccrine gland from the apocrine gland during the Tertiary geological period. The evidence strongly suggests that the two glands, depending on the presence or absence of fur, have equal and similar functions among mammals; apocrine glands are not primitive. However, there is a unique and remarkable thermal eccrine system in humans; we suggest that this system evolved in concert with bipedalism and a smooth hairless skin.
Answer: Yes, intramuscular thermal feedback does modulate eccrine sweating in exercising humans. A study explored the temporal relationships between dynamic changes in deep-body and intramuscular temperatures and eccrine sweat secretion during exercise. The results showed that both intramuscular and oesophageal temperatures were strongly correlated with variations in sweating, but the phase delay between intramuscular temperature changes and sweat secretion was significantly shorter than the delay between oesophageal temperature and sweating. This suggests the presence of thermosensitive elements within skeletal muscles that participate in the modulation of thermal sweating (PUBMED:24934867).
Instruction: Is the Use of a Support Bridge Beneficial for Preventing Stomal Retraction After Loop Ileostomy?
Abstracts:
abstract_id: PUBMED:26135822
Is the Use of a Support Bridge Beneficial for Preventing Stomal Retraction After Loop Ileostomy? A Prospective Nonrandomized Study. Purpose: A temporary defunctioning loop ileostomy is frequently created during low colorectal or coloanal anastomosis to prevent peritoneal sepsis associated with anastomotic leakage. We investigated whether routine support bridge placement prevents stoma retraction after the formation of a loop ileostomy.
Design: Prospective, nonrandomized trial.
Subjects And Setting: The study sample comprised 32 consecutive patients who underwent defunctioning loop ileostomy at an academic tertiary care center in Seoul, Korea, from February to September 2010.
Methods: Patients were nonrandomly allocated to "no bridge," "short-term bridge" (1 week), and "long-term bridge" (3 weeks) groups based on the surgeon's clinical judgment. Group differences in stoma height changes over time were analyzed.
Results: Subjects' mean age was 59.5 (range: 43-82) years, and the male-to-female ratio was 2.2:1.0. The mean heights of the stoma on postoperative day 2 and postoperative month 3, respectively, were 1.07 ± 0.16 cm (mean ± SD) and 0.81 ± 0.17 cm in the no-bridge group, 1.70 ± 0.29 cm and 1.21 ± 0.18 cm in the short-term bridge group, and 1.18 ± 0.16 cm and 1.01 ± 0.20 cm in the long-term bridge group. The changes in the stoma height 3 months after the surgery showed no statistically significant differences among the groups (P = .430). Stoma Quality of Life scores at 3 weeks (47.4 vs 46.1; P = .730) were similar for patients with and without bridges. However, a significantly greater number of patients with bridges reported difficulty with pouch changes compared to those without bridges (72.7% vs 14.3%; P = .002).
Conclusions: Routine use of support bridges during loop ileostomy is unnecessary and inconvenient to patients. If a support bridge must be used, it can be removed early.
abstract_id: PUBMED:16784467
Ileostomy rod--is it a bridge too far? Objective: Defunctioning loop ileostomies are used commonly to protect low colorectal anastomoses and thereby reducing the serious complications of leakage. However, they are associated with specific complications such as retraction. Traditionally, a supporting rod is placed as a bridge to support both limbs of the stoma in the hope of reducing the incidence of stomal retraction. There is little evidence in the published literature to support this practice. The aim of this study was to determine whether using an ileostomy rod would reduce the incidence of stomal retraction.
Method: A prospective, randomised controlled trial was performed in 60 consecutive patients who required a defunctioning loop ileostomy. Patients were allocated to either a 'bridge' or 'bridge-less' protocol. All the patients were assessed by dedicated stoma nurses for at least 3 months and until their stomas were closed. Their postoperative symptoms, including stoma activity and retraction rate, were recorded.
Results: Between May 2001 and June 2004, 57 patients completed the study (28 bridge; 29 bridge-less). There were no significant differences in the retraction rate between the groups. No clinical anastomotic leakage was recorded and none of the patients required early closure.
Conclusions: If a loop ileostomy is constructed properly, stomal retraction is uncommon and routine use of a bridge is unnecessary.
abstract_id: PUBMED:27657823
Stomaplasty with panniculectomy in an obese patient with stomal retraction: A case report. Introduction: Stomal retraction is a common complication following stoma formation. A repeat surgical procedure for stomal revision is an invasive treatment that is often required as a result.
Case Presentation: An 81-year-old woman with obstructive rectal carcinoma and perforative peritonitis underwent an emergent anterior resection and colostomy (Hartmann's operation). After the operation, the patient changed the stoma pouch every day because of stomal retraction and leakage. Thirty-eight days after the operation, we performed a stomaplasty with pannicuectomy. Following this procedure, the patient changed the stoma pouch twice weekly.
Discussion: Stomal retraction is caused by the thick subcutaneous fat and abnormal skin folds in obese patients, as well as the excess tension that is the result of inadequate mobilization. Treatment of stomal retraction typically requires an intraperitoneal stoma revision. Our method of panniculectomy with skin excision but without stomal revision does not involve an incision around the stoma and there is no risk of fecal contamination.
Conclusion: We report a case of an obese patient who underwent stomaplasty with panniculectomy for stomal retraction. We believe that stomaplasty with panniculectomy is a feasible option in obese patients with stomal retraction.
abstract_id: PUBMED:27023628
Outcomes of support rod usage in loop stoma formation. Aim: Traditionally, support rods have been used when creating loop stomas in the hope of preventing retraction. However, their effectiveness has not been clearly established. This study aimed to investigate the rate of stoma rod usage and its impact on stoma retraction and complication rates.
Method: A prospective cohort of 515 consecutive patients who underwent loop ileostomy/colostomy formation at a tertiary referral colorectal unit in Sydney, Australia were studied. Mortality and unplanned return to theatre rates were calculated. The primary outcome measure of interest was stoma retraction, occurring within 30 days of surgery. Secondary outcome measures included early stoma complications. The 10-year temporal trends for rod usage, stoma retraction, and complications were examined.
Results: Mortality occurred in 23 patients (4.1 %) and unplanned return to theatre in 4 patients (0.8 %). Stoma retraction occurred in four patients (0.78 %), all without rods. However, the rate of retraction was similar, irrespective of whether rods were used (P = 0.12). There was a significant decline in the use of rods during the study period (P < 0.001) but this was not associated with an increase in stoma retraction rates. Early complications occurred in 94/432 patients (21.8 %) and were more likely to occur in patients with rods (64/223 versus 30/209 without rods, P < 0.001).
Conclusions: Stoma retraction is a rare complication and its incidence is not significantly affected by the use of support rods. Further, complications are common post-operatively, and the rate appears higher when rods are used. The routine use of rods warrants judicious application. WHAT DOES THIS PAPER ADD TO THE LITERATURE?: It remains unclear whether support rods prevent stoma retraction. This study, the largest to date, confirms that stoma retraction is a rare complication and is not significantly affected by the use of rods. Consequently, routine rod usage cannot be recommended, particularly as it is associated with increased stoma complications.
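Worked example: the complication comparison reported above (64 of 223 stomas with rods vs 30 of 209 without) can be checked directly from the published counts with a standard chi-squared test; the choice of test is an assumption, since the abstract does not state the authors' exact method:

from scipy.stats import chi2_contingency

# 2x2 table of early complications by rod use; counts taken from the abstract
table = [[64, 223 - 64],   # rod: complications, no complications
         [30, 209 - 30]]   # no rod: complications, no complications
chi2, p, dof, expected = chi2_contingency(table)
print(p)  # < 0.001, consistent with the reported result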
abstract_id: PUBMED:28067986
A prospective randomized controlled trial comparing early postoperative complications in patients undergoing loop colostomy with and without a stoma rod. Aim: A stoma rod or bridge has been traditionally placed under the bowel loop while constructing a loop colostomy. This is believed to prevent stomal retraction and provide better faecal diversion. However, the rod can cause complications such as mucosal congestion, oedema and necrosis. This single-centre prospective randomized controlled trial compared outcomes after creation of loop colostomy with and without a supporting stoma rod. The primary outcome studied was stoma retraction rate; other stoma-related complications were studied as secondary outcomes.
Method: One hundred and fifty-one patients were randomly allotted to one of two arms, colostomy with or without a supporting rod. Postoperative complications such as retraction, mucocutaneous separation, congestion and re-exploration for stoma-related complications were recorded.
Results: There was no difference in the stoma retraction rate between the two arms (8.1% in the rod arm and 6.6% in the no-rod arm; P = 0.719). Stomal necrosis (10.7% vs 1.3%; P = 0.018), oedema (23% vs 3.9%; P = 0.001), congestion (20.3% vs 2.6%; P = 0.001) and re-admission rates (8.5% vs 0%; P = 0.027) were significantly increased in the arm randomized to the rod.
Conclusion: The stoma rod does not prevent stomal retraction. However, complication rates are significantly higher when a stoma rod is used. Routine use of a stoma rod for construction of loop colostomy can be avoided.
abstract_id: PUBMED:3510035
A new method of stomal reconstruction in patients with retraction of conventional ileostomy. A new method of stomal reconstruction has been evaluated in three patients with retraction of a conventional ileostomy associated with special problems. A polyglactin 910 mesh has been used for remodeling of the ileostomy. The observation times vary from 20 to 37 months. There has been no recurrence of retraction in any of the patients during this period.
abstract_id: PUBMED:38023201
Feasibility and safety of specimen extraction via an enlarged (U-Plus) skin bridge loop ileostomy: a single-center retrospective comparative study. Objective: To investigate the feasibility and safety of specimen extraction via an enlarged (U-Plus) skin bridge loop ileostomy.
Methods: A retrospective analysis of 95 patients with rectal cancer who underwent laparoscopic low anterior rectal resection and skin bridge loop ileostomy between August 2018 and August 2022, including 44 patients with specimen extraction via an enlarged (U-Plus) skin bridge loop ileostomy (experimental group) and 51 patients with specimen extraction via an abdominal incision (control group). Following the application of propensity score matching (PSM), 34 pairs of data were successfully matched. Subsequently, a comparative analysis was conducted on the clinical data of the two groups.
Results: The experimental group exhibited significantly better outcomes than the control group in various aspects. Specifically, the experimental group had lower values for average operative time (P < 0.001), estimated blood loss (P < 0.001), median length of visible incision after surgery (P < 0.001), median VAS pain score on the first day after surgery (P = 0.015), and average postoperative hospitalization (P = 0.001). There was no statistical significance observed in the incidence of stoma-related complications in both groups (P > 0.05). Within each group, the stoma-QOL scores before stoma closure surgery were significantly higher than those at one month and two months after the surgery, with statistical significance (P < 0.05).
Conclusion: Specimen extraction via a U-Plus skin bridge loop ileostomy is a safe and feasible method that shortens operation time and postoperative visual incision length, decreases estimated blood loss, and reduces patient postoperative pain compared with specimen extraction via an abdominal incision.
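Illustrative sketch: propensity score matching, as used above to pair the two specimen-extraction groups, generally means fitting a model of group assignment on baseline covariates and then matching each treated patient to the nearest-scoring control. The snippet below is a generic, hypothetical illustration (the covariates are random, the simple 1:1 nearest-neighbour scheme matches with replacement, and none of this reflects the authors' actual matching specification, which the abstract does not describe):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def match_pairs(X, treated):
    # 1:1 nearest-neighbour matching on the estimated propensity score (with replacement)
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    treated_idx = np.where(treated == 1)[0]
    control_idx = np.where(treated == 0)[0]
    nn = NearestNeighbors(n_neighbors=1).fit(ps[control_idx].reshape(-1, 1))
    _, nearest = nn.kneighbors(ps[treated_idx].reshape(-1, 1))
    return list(zip(treated_idx, control_idx[nearest.ravel()]))

rng = np.random.default_rng(0)
X = rng.normal(size=(95, 3))                   # 95 patients, 3 hypothetical baseline covariates
treated = (rng.random(95) < 0.46).astype(int)  # roughly the 44/95 split reported above
print(len(match_pairs(X, treated)), "matched pairs")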
abstract_id: PUBMED:15570849
Loop ileostomy: modification of technique. Introduction: A loop ileostomy is a suitable procedure for faecal diversion. A number of technical improvements and advancement in stoma management have made its creation a suitable alternative to a loop colostomy. We describe an alternative technique for securing a loop ileostomy and perform a retrospective review of this technique.
Patients & Method: 40 patients who had a loop ileostomy performed as part of an abdominal procedure were reviewed. The loop of ileum was secured to the stoma site with a novel 'suture bridge' technique.
Results: 32 patients had the stoma formed to protect a distal anastomosis, 6 to palliate bowel obstruction, 1 to control faecal incontinence and another for colonic Crohn's disease. There were no incidences of paralytic ileus, mechanical obstruction, prolapse, retraction or bleeding after the loop ileostomies were formed. Thirty patients had their ileostomies closed. In 27 patients this was performed by excising the muco-cutaneous edge and anterior closure. Three patients had their stomas resected and an end-to-end bowel anastomoses. Following closure there were two complications in separate patients--self-limiting paralytic ileus and small bowel obstruction at the site of the stomal closure that required a second operation. There were no incidences of anastomotic leaks or bleeding in patients who had their ileostomy closed. No mortalities were attributed to either stoma formation or closure.
Conclusion: We have described a safe alternative technique for securing a loop ileostomy with negligible complications in construction and closure as demonstrated in our results.
abstract_id: PUBMED:33907300
Comparison of the clinical outcomes of skin bridge loop ileostomy and traditional loop ileostomy in patients with low rectal cancer. To compare the clinical results of patients with low rectal cancer who underwent skin bridge loop ileostomy and traditional loop ileostomy, and to provide clinical evidence for choosing a better ostomy method, we retrospectively collected data on 118 patients with rectal cancer who underwent low anterior resection and loop ileostomy. We investigated patient characteristics, postoperative stoma-related complications and the frequency of exchanged ostomy bags, and compared the differences in these indicators between the two groups of patients who underwent skin bridge loop ileostomy and traditional loop ileostomy. The Visual Analog Scale (VAS) score of the skin bridge loop ileostomy group was lower than that of the traditional loop ileostomy group (P < 0.05). The skin bridge group had a lower Discoloration, Erosion, Tissue overgrowth (DET) score and incidence of mucocutaneous separation than the traditional group at the 1st and 2nd weeks after operation (P < 0.05). The average number of weekly exchanged ostomy bags was significantly less in the skin bridge group than in the traditional group within 4 weeks after surgery (P < 0.05). Our experience demonstrates that the skin bridge loop ileostomy may significantly reduce early postoperative stoma-related complications, the frequency of exchanged ostomy bags and patients' medical costs after discharge.
abstract_id: PUBMED:25465646
Spontaneous closure of an ileostomy: A rare occurrence. Introduction: Ileostomy is an iatrogenic entero-cutaneous (EC) fistula designed for controlled evacuation of bowel contents. Once ileostomy has served its purpose, it is reverted by surgical procedure. We are reporting an interesting case of spontaneous closure of an ileostomy, obviating the need of surgical intervention.
Presentation Of Case: A 26-year-old lady presented with perforation peritonitis. Upon exploration, a tubercular perforation of the terminal ileum was found. A loop ileostomy was formed and the patient was discharged on anti-tubercular treatment (ATT) after an uneventful recovery. During follow-up visits, the stoma was found to be retracting gradually. Retraction was not associated with any signs of peritonitis. The patient was able to pass stools per rectum. The stoma regressed completely within 8 months, followed by epithelialisation of the stoma site. The patient was leading an essentially normal life until her last follow-up visit.
Discussion: Considering the various factors affecting spontaneous healing of EC fistulas, all the intestinal stomas do have favourable characteristics essential for spontaneous closure. However, this is seldom seen in day to day surgical practice. The factors pertaining to this particular case that led to spontaneous closure of stoma remain poorly understood.
Conclusion: Further research is warranted to understand the mechanism behind spontaneous regression of a stoma. Relationship between this event and intestinal tuberculosis or ATT needs to be analysed.
Answer: The use of a support bridge, also known as a stoma rod or bridge, has traditionally been employed during the construction of a loop ileostomy with the intention of preventing stomal retraction. However, the evidence from several studies suggests that the routine use of a support bridge may not be beneficial for this purpose.
A prospective nonrandomized study found that the routine use of support bridges during loop ileostomy is unnecessary and may be inconvenient to patients, as there were no statistically significant differences in stoma height changes among groups with no bridge, short-term bridge, and long-term bridge placement. Additionally, a greater number of patients with bridges reported difficulty with pouch changes (PUBMED:26135822).
Similarly, a prospective randomized controlled trial showed no significant differences in the retraction rate between groups with and without a bridge, suggesting that if a loop ileostomy is constructed properly, stomal retraction is uncommon and the routine use of a bridge is unnecessary (PUBMED:16784467).
Another study, which was the largest to date, confirmed that stoma retraction is a rare complication and is not significantly affected by the use of rods. The study also found that complications post-operatively were more common when rods were used, leading to the conclusion that routine rod usage cannot be recommended (PUBMED:27023628).
A prospective randomized controlled trial comparing early postoperative complications in patients undergoing loop colostomy with and without a stoma rod found that the stoma rod does not prevent stomal retraction and that complication rates are significantly higher when a stoma rod is used (PUBMED:28067986).
In summary, the current evidence suggests that the use of a support bridge is not beneficial for preventing stomal retraction after loop ileostomy and may be associated with increased complications and inconvenience to patients. Therefore, its routine use can generally be avoided.
Instruction: Are exposure to cytomegalovirus and genetic variation on chromosome 6p joint risk factors for schizophrenia?
Abstracts:
abstract_id: PUBMED:17453677
Are exposure to cytomegalovirus and genetic variation on chromosome 6p joint risk factors for schizophrenia? Background: Published data support genetic variants, as well as certain infectious agents, as potential risk factors for schizophrenia. Less is known about interactions between the risk factors.
Aim: To evaluate exposure to infectious agents and host genetic variation as joint risk factors.
Methods: We investigated four infectious agents: cytomegalovirus (CMV), herpes simplex viruses 1 and 2 (HSV1, HSV2), and Toxoplasma gondii (TOX). We initially compared exposure using specific serum antibodies, among simplex and multiplex nuclear families (one or more than one affected offspring, respectively). If interactions between infectious agents and host genetic variation are important risk factors for schizophrenia, we reasoned that they would be more prominent among multiplex versus simplex families. We also evaluated the role of variation at chromosome 6p21-p23 in conjunction with exposure. We used 22 short tandem repeat polymorphisms (STRPs) dispersed across this region.
Results: Though exposure to all four agents was increased among multiplex families versus simplex families, the difference was consistently significant only for CMV (odds of exposure to CMV in multiplex families: 2.47, 95% CI: 1.48-5.33). Transmission disequilibrium tests and case-control comparisons using STRPs revealed significant linkage/association with D6S2672 among CMV+ schizophrenia patients.
Conclusions: Polymorphisms near D6S2672 could confer risk for schizophrenia in conjunction with CMV exposure.
abstract_id: PUBMED:22966150
Evaluation of HLA polymorphisms in relation to schizophrenia risk and infectious exposure. Background: Genome-wide association studies (GWAS) implicate single nucleotide polymorphisms (SNPs) on chromosome 6p21.3-22.1, the human leukocyte antigen (HLA) region, as common risk factors for schizophrenia (SZ). Other studies implicate viral and protozoan exposure. Our study tests chromosome 6p SNPs for effects on SZ risk with and without exposure.
Method: GWAS-significant SNPs and ancestry-informative marker SNPs were analyzed among African American patients with SZ (n = 604) and controls (n = 404). Exposure to herpes simplex virus, type 1 (HSV-1), cytomegalovirus (CMV), and Toxoplasma gondii (TOX) was assayed using specific antibody assays.
Results: Five SNPs were nominally associated with SZ, adjusted for population admixture (P < .05, uncorrected for multiple comparisons). These SNPs were next analyzed in relation to infectious exposure. Multivariate analysis indicated significant association between rs3130297 genotype and HSV-1 exposure; the associated allele was different from the SZ risk allele.
Conclusions: We propose a model for the genesis of SZ incorporating genomic variation in the HLA region and neurotropic viral exposure for testing in additional, independent African American samples.
abstract_id: PUBMED:25781172
Infection and inflammation in schizophrenia and bipolar disorder: a genome wide study for interactions with genetic variation. Inflammation and maternal or fetal infections have been suggested as risk factors for schizophrenia (SZ) and bipolar disorder (BP). It is likely that such environmental effects are contingent on genetic background. Here, in a genome-wide approach, we test the hypothesis that such exposures increase the risk for SZ and BP and that the increase is dependent on genetic variants. We use genome-wide genotype data, plasma IgG antibody measurements against Toxoplasma gondii, Herpes simplex virus type 1, Cytomegalovirus, Human Herpes Virus 6 and the food antigen gliadin as well as measurements of C-reactive protein (CRP), a peripheral marker of inflammation. The subjects are SZ cases, BP cases, parents of cases and screened controls. We look for higher levels of our immunity/infection variables and interactions between them and common genetic variation genome-wide. We find many of the antibody measurements higher in both disorders. While individual tests do not withstand correction for multiple comparisons, the number of nominally significant tests and the comparisons showing the expected direction are in significant excess (permutation p=0.019 and 0.004 respectively). We also find CRP levels highly elevated in SZ, BP and the mothers of BP cases, in agreement with existing literature, but possibly confounded by our inability to correct for smoking or body mass index. In our genome-wide interaction analysis no signal reached genome-wide significance, yet many plausible candidate genes emerged. In a hypothesis driven test, we found multiple interactions among SZ-associated SNPs in the HLA region on chromosome 6 and replicated an interaction between CMV infection and genotypes near the CTNNA3 gene reported by a recent GWAS. Our results support that inflammatory processes and infection may modify the risk for psychosis and suggest that the genotype at SZ-associated HLA loci modifies the effect of these variables on the risk to develop SZ.
abstract_id: PUBMED:22717193
Prenatal maternal infection, neurodevelopment and adult schizophrenia: a systematic review of population-based studies. Background: Disruption of foetal development by prenatal maternal infection is consistent with a neurodevelopmental model of schizophrenia. Whether specific prenatal infections are involved, their timing and the mechanisms of any effect are all unknown. We addressed these questions through a systematic review of population-based studies.
Method: Electronic and manual searches and rigorous quality assessment yielded 21 studies that included an objective assessment of individual-level prenatal maternal infection and standardized psychotic diagnoses in adult offspring. Methodological differences between studies necessitated a descriptive review.
Results: Results for prenatal maternal non-specific bacterial, respiratory or genital and reproductive infection differed between studies, which reported up to a two- to fivefold increased risk of schizophrenia. Evidence for herpes simplex virus type 2 (HSV-2) and Toxoplasma gondii was mixed; some studies reported up to a doubling of schizophrenia risk. Prenatal HSV-1 or cytomegalovirus (CMV) infections were not associated with increased risk. Exposure to influenza or other infections during early pregnancy may be more harmful than later exposure. Increased proinflammatory cytokines during pregnancy were also associated with risk. Prenatal infection was associated with structural and functional brain abnormalities relevant to schizophrenia.
Conclusions: Prenatal exposure to a range of infections and inflammatory responses may be associated with risk of adult schizophrenia. Larger samples, mediation and animal models should be used to investigate whether there is a 'sensitive period' during development, and the effects of prenatal infections on neurodevelopment. Inclusion of genetic and immunological information should help to elucidate to what extent genetic vulnerability to schizophrenia may be explained by vulnerability to infection.
abstract_id: PUBMED:21388746
A novel embryological theory of autism causation involving endogenous biochemicals capable of initiating cellular gene transcription: a possible link between twelve autism risk factors and the autism 'epidemic'. Human alpha-fetoprotein is a pregnancy-associated protein with an undetermined physiological role. As human alpha-fetoprotein binds retinoids and inhibits estrogen-dependent cancer cell proliferation, and because retinoic acid (a retinol metabolite) and estradiol (an estrogen) can both initiate cellular gene transcription, it is hypothesized here that alpha-fetoprotein functions during critical gestational periods to prevent retinoic acid and maternal estradiol from inappropriately stimulating gene expression in developing brain regions which are sensitive to these chemicals. Prenatal/maternal factors linked to increased autism risk include valproic acid, thalidomide, alcohol, rubella, cytomegalovirus, depression, schizophrenia, obsessive-compulsive disorder, autoimmune disease, stress, allergic reaction, and hypothyroidism. It will be shown how each of these risk factors may initiate expression of genes which are sensitive to retinoic acid and/or estradiol - whether by direct promotion or by reducing production of alpha-fetoprotein. It is thus hypothesized here that autism is not a genetic disorder, but is rather an epigenetic disruption in brain development caused by gestational exposure to chemicals and/or conditions which either inhibit alpha-fetoprotein production or directly promote retinoic acid-sensitive or estradiol-sensitive gene expression. This causation model leads to potential chemical explanations for autistic brain morphology, the distinct symptomatology of Asperger's syndrome, and the differences between high-functioning and low-functioning autism with regard to mental retardation, physical malformation, and sex ratio. It will be discussed how folic acid may cause autism under the retinoic acid/estradiol model, and the history of prenatal folic acid supplementation will be shown to coincide with the history of what is popularly known as the autism epidemic. It is thus hypothesized here that prenatal folic acid supplementation has contributed to the post-1980 increase in US autism diagnoses. In addition to explaining the epidemic within the wider retinoic acid/estradiol model of causation, this theory leads to potential explanations for certain genetic findings in autism, autistic regression, and changing trends in autism symptomatology with regard to mental retardation, wheat allergy, and gastrointestinal problems.
abstract_id: PUBMED:22819777
Maternal antibodies to infectious agents and risk for non-affective psychoses in the offspring--a matched case-control study. Background: An increasing number of studies suggest that certain maternal infections are associated with non-affective psychoses in the offspring. Here we investigated if maternal exposure to Toxoplasma gondii, cytomegalovirus (CMV), herpes simplex virus type 1 (HSV-1) or type 2 (HSV-2) prior to delivery was associated with future diagnosis of schizophrenia or other non-affective psychoses in the offspring.
Methods: This case-control study included 198 individuals born in Sweden 1975-85, diagnosed with schizophrenia (ICD-10, F20) and other non-affective psychoses (ICD-10, F21-29) as in- or outpatients, and 524 matched controls. Specific immunoglobulin G (IgG) levels in archived neonatal dried blood samples from these individuals were determined by immunoassays. Reference levels were determined by prevalences among pregnant women in Sweden 1975-85. Odds ratios (OR) for schizophrenia and other non-affective psychoses were calculated, considering maternal and gestational factors as covariates.
Results: Levels of IgG directed at T. gondii corresponding to maternal exposure was associated with subsequent schizophrenia (OR=2.1, 95% CI 1.0-4.5) as were levels of IgG directed at CMV (OR=2.2, 95% CI 1.0-5.1) but not at HSV-1 or -2. There were even stronger associations with higher levels of T. gondii or CMV antibodies. There were no associations between any of the infectious agents and other non-affective psychoses.
Conclusions: This study supports findings of maternal exposure to T. gondii and schizophrenia risk in offspring, and extends the risk to also include maternal exposure to CMV. Future studies should confirm the association with CMV exposure and identify mechanisms underlying these associations.
abstract_id: PUBMED:17561376
Polymorphisms in MICB are associated with human herpes virus seropositivity and schizophrenia risk. Viral infection may be a risk factor for schizophrenia and has been associated with decreased cognitive functioning in patients. We report associations of SNPs at MICB (MHC class I polypeptide-related sequence B, chromosome 6p21) with cytomegalovirus and herpes simplex virus 1 seropositivity. We previously found associations with schizophrenia on chromosome 6p21 among patients seropositive for cytomegalovirus (CMV) and herpes simplex virus 1 (HSV1). To localize the associations further, we genotyped 26 SNPs spanning 100 kb in a sample of 236 Caucasian schizophrenia patients and 240 controls. Based on suggestive associations, we selected five SNPs at MICB to assay among two additional Caucasian samples that had been serotyped for CMV and HSV1: a case-control sample recruited in Baltimore (n=272 cases, 108 controls), and a case-parent trio sample recruited in Pittsburgh (n=221). Among Baltimore control individuals there were significant associations with antibody status for infectious agents: rs1051788 with HSV1 seropositivity (p=0.006) and rs2523651 with cytomegalovirus seropositivity (p=0.001). The former association was also detectable among the parents of cases recruited in Pittsburgh (p=0.024). Neither viral association was noted among the schizophrenia cases. With respect to schizophrenia risk, significant transmission distortion was noted at rs1051788 and rs1055569 among the case-parent trios regardless of antibody status (p=0.014 and 0.036 respectively). A similar trend for association with schizophrenia liability at rs1051788 in the Baltimore sample did not attain statistical significance. There are a number of explanations for the associations, including chance variation, as well as gene-virus interactions. Further replicate studies are warranted, as are functional studies of these polymorphisms.
abstract_id: PUBMED:24572972
Genetic etiology of schizophrenia: possible role of immunoglobulin γ genes. There is increasing evidence for the involvement of herpes simplex virus type 1 and human cytomegalovirus in the cognitive impairment of patients with schizophrenia (SCZ). Both herpes simplex virus type 1 and human cytomegalovirus have evolved strategies for decreasing the efficacy of the host immune response and interfering with viral clearance. Immunoglobulin GM genes, genetic markers of IgG heavy chains located on chromosome 14, modify certain immunoevasion strategies of these viruses. Particular GM alleles are also associated with antibody responsiveness to gliadin or gluten sensitivity, an attribute reported to be prevalent in a significant proportion of SCZ patients. On the basis of these properties, I hypothesize that GM alleles are risk factors for SCZ and their evaluation could help genetically dissect the disease in different subsets and/or help unify some disparate areas of pathobiology (e.g. cognitive dysfunction and gluten sensitivity) affected in this disorder.
abstract_id: PUBMED:22085475
Neonatal antibodies to infectious agents and risk of bipolar disorder: a population-based case-control study. Objective: There is a substantial evidence base linking prenatal exposure to infectious agents and an increased risk of schizophrenia. However, there has been less research examining the potential for these exposures to also contribute to risk for bipolar disorder. The aim of this study was to examine the association between neonatal markers of selected prenatal infections and risk for bipolar disorder.
Methods: Using population-based Danish registers, we examined 127 individuals with a diagnosis of bipolar disorder, and 127 sex and day-of-birth individually matched controls. Based on neonatal dried blood spots, we measured antibodies to herpes simplex virus type 1 (HSV-1) and 2 (HSV-2), cytomegalovirus (CMV), and Toxoplasma gondii. Relative risks were calculated for the matched pairs when examined for optical density units for antibodies to each of the infectious agents.
Results: There was no association between any of the neonatal markers of prenatal infection and risk of bipolar disorder.
Conclusions: In contrast with studies of schizophrenia, our analysis does not support maternal infection with HSV-1, HSV-2, CMV, or Toxoplasma gondii as risk factors for bipolar disorder. However, larger study samples are needed, and data on, for example, specific serotypes of Toxoplasma and indicators of the timing of maternal infection are still warranted.
abstract_id: PUBMED:34016508
Prenatal infection and schizophrenia: A decade of further progress. Epidemiologic studies have provided evidence that prenatal exposure to maternal infection is associated with an increased risk of developing schizophrenia in the offspring. Research over the past decade has added further to our understanding of the role of prenatal infection in schizophrenia risk. These investigations include several well-powered designs, and like some earlier studies, measured maternal antibodies to specific infectious agents in stored serum samples and large registers to identify clinically diagnosed infections during pregnancy. Convergent findings from antibody studies suggest that prenatal maternal infection with Toxoplasma gondii is associated with increased schizophrenia risk in the offspring, while associations with HSV-2 infection are likely attributable to confounding. Maternal influenza infection remains a viable candidate for schizophrenia, based on an early serological study, though there has been only one attempt to replicate this finding, with a differing methodology. A prior association between maternal serologically confirmed cytomegalovirus infections require further study. Clinically diagnosed maternal infection, particularly bacterial infection, also appears to be associated with increased risk of offspring schizophrenia, and heterogeneity in these findings is likely due to methodological differences between studies. Further clarification may be provided by future studies that address the timing, type, and clinical features of infections. Important insight may be gained by examining the long-term offspring outcomes in emerging epidemics such as Zika virus and COVID-19, and by investigating the interaction between exposure to prenatal infection and other risk or protective factors.
Answer: Yes, exposure to cytomegalovirus (CMV) and genetic variation on chromosome 6p are suggested to be joint risk factors for schizophrenia. Although exposure to all four infectious agents studied was increased among multiplex families (families with more than one affected offspring) compared with simplex families (families with one affected offspring), the difference was consistently significant only for CMV, with the odds of exposure to CMV 2.47 times higher in multiplex families. Furthermore, significant linkage and association were found with a polymorphism near D6S2672 on chromosome 6p21-p23 among CMV-positive schizophrenia patients, suggesting that polymorphisms in this region could confer risk for schizophrenia in conjunction with CMV exposure (PUBMED:17453677).
Additional studies have supported the hypothesis that genetic variation in the human leukocyte antigen (HLA) region on chromosome 6p21.3-22.1, which is implicated in immune function, may interact with exposure to infectious agents like CMV to affect schizophrenia risk. For instance, a genome-wide study found multiple interactions among schizophrenia-associated single nucleotide polymorphisms (SNPs) in the HLA region and CMV infection, suggesting that the genotype at these loci may modify the effect of CMV on the risk of developing schizophrenia (PUBMED:25781172).
Moreover, polymorphisms in MICB, located on chromosome 6p21, have been associated with CMV seropositivity and schizophrenia risk. This indicates a potential gene-virus interaction that could contribute to the development of schizophrenia (PUBMED:17561376).
In summary, the evidence points to a potential interaction between CMV exposure and genetic variation on chromosome 6p as joint risk factors for schizophrenia. |
Instruction: Is Gippsland environmentally iodine deficient?
Abstracts:
abstract_id: PUBMED:21114698
Is Gippsland environmentally iodine deficient? Water iodine concentrations in the Gippsland region of Victoria, Australia. Objective: This paper provides evidence of environmental iodine deficiency in the Gippsland region.
Design: Quantitative study; water samples were collected from 18 water treatment plants and four rain water tanks across Gippsland and water iodine concentrations were measured.
Setting: Gippsland region of Victoria, Australia.
Main Outcome Measures: This paper reports on the iodine concentration of drinking water from sources across Gippsland and examines the contribution of iodine from water to the Gippsland diet. This study also briefly examines the relationship between the concentration of iodine in water and distance from the sea. The cut-off value for water iodine concentrations considered to be indicative of environmental iodine deficiency is <2 µg L(-1) .
Results: The mean iodine concentration of water from 18 Gippsland water treatment plants was 0.38 µg L(-1) and would therefore make a negligible difference to the dietary intake of iodine. This value also falls well below the dietary intake of iodine from water estimated by the 22nd Australian Total Diet Study. Our study found no linear relationship between the water iodine concentration and distance from the sea.
Conclusion: As Gippsland has environmental iodine deficiency there is a greater probability that people living in this region are at higher risk of dietary iodine deficiency than those living in environmentally iodine sufficient regions. Populations living in areas known to have environmental iodine deficiency should be monitored regularly to ensure that problems of iodine deficiency, especially amongst the most vulnerable, are addressed promptly.
abstract_id: PUBMED:30045430
Maternal iodine dietary supplements and neonatal thyroid stimulating hormone in Gippsland, Australia. Background And Objectives: Pregnant women are at particular risk of iodine deficiency due to their higher iodine requirements. Iodine is known to be essential for normal growth and brain development, therefore neonatal outcomes in mildly iodine deficient areas, such as Gippsland, are a critical consideration. This study aimed to investigate whether iodine supplementation prevented iodine insufficiency as determined by neonatal thyroid stimulating hormone (TSH) screening criteria.
Methods And Study Design: Gippsland-based women aged >=18 years, in their third trimester of pregnancy, provided self-reported information regarding their iodine supplement use and consent to access their offspring's neonatal TSH screening data. 126 women consented to participate, with 111 women completing all components of this study.
Results: Only 18.9% of participants followed the National Health and Medical Research Council (NHMRC) recommendation of 150 μg/day iodine supplement, with 42.3% of participants not taking any supplements, or taking supplements with no iodine or insufficient iodine. The remaining women (38.7%) were taking supplements with doses of iodine much higher (200-300 μg) than the NHMRC recommended dose or were taking multiple supplements containing iodine. When correlating iodine intake to their neonates' TSH, no correlation was found. When iodine supplementation usage was categorised as below, equal to, or above NHMRC recommendations there was no significant difference in neonatal TSH.
Conclusion: This study found that iodine supplementation appeared to prevent maternal iodine insufficiency when measured against neonatal TSH screening criteria.
abstract_id: PUBMED:21381996
Urinary iodine deficiency in Gippsland pregnant women: the failure of bread fortification? Objective: To assess iodine status and the factors that influence iodine status among a cohort of pregnant women living in Gippsland.
Design, Participants And Setting: Cross-sectional study of 86 pregnant women (at ≥ 28 weeks' gestation) conducted in hospital antenatal care services and private obstetrician clinics across the Gippsland region of Victoria, Australia, from 13 January 2009 to 17 February 2010.
Main Outcome Measures: Overall proportion of pregnant women with a urinary iodine concentration (UIC) > 150 μg/L; proportion of pregnant women with a UIC >150 μg/L after the mandatory iodine fortification of bread; use of supplements containing iodine; intake of foods known to be good sources of iodine; intake of bread.
Results: The percentage of pregnant women with UIC >150 μg/L (indicative of iodine sufficiency) was 28%. There was no statistically significant difference in UICs before and since iodine fortification of bread. The median UIC before fortification was 96 μg/L (interquartile range [IQR], 45-153 μg/L) and since fortification was 95.5 μg/L (IQR, 60-156 μg/L). The dietary intake of iodine-rich food (including bread) and the use of appropriate supplements were insufficient to meet the increased iodine requirements during pregnancy.
Conclusions: The UICs in this cohort of pregnant women are of concern, and seem unlikely to be improved by the national iodine fortification program. Pregnant women in Gippsland urgently need effective iodine education programs and encouragement to either consume iodine-rich foods or take appropriate supplements.
abstract_id: PUBMED:7841701
Dysgenesis of thyroid is the common type of childhood hypothyroidism in environmentally iodine deficient areas of north India. Forty-five children (28 girls and 17 boys; mean age 4.5 years) with hypothyroidism referred to us from January 1989 to November 1990 were evaluated prospectively for the pattern of hypothyroidism by hormone assays, scintiscan and urinary iodine estimation. Among the 6 children from non-endemic areas, athyreosis and/or hypoplasia were seen in 3, ectopia in 2 and dyshormonogenesis in 1. Of 39 children from moderate to severe environmentally iodine deficient regions, 18 (46%) had athyreosis and/or hypoplasia and 10 (26%) ectopic thyroid. Iodine deficiency was seen in 4, dyshormonogenesis in 4, secondary/tertiary hypothyroidism in 2 and thyroiditis in 1. The mean age of these children at the onset of symptoms was 1.4 years and at clinical presentation 4.5 years. There was significant growth retardation with 54% of children being below the 5th centile of Indian standards. There was no significant difference in the age at onset of symptoms and presentation, clinical features and bone age for the different types. The levels of serum total T4 were significantly low in dysgenesis (athyreosis, hypoplasia and ectopia, p < 0.001). Dysgenesis of the thyroid is the most common type of childhood hypothyroidism in iodine deficiency endemias. We postulate that severe iodine deficiency in the intrauterine and early neonatal period may lead to dysgenesis of the thyroid.
abstract_id: PUBMED:32514901
Supplemental iodine-containing prenatal multivitamins use and the potential effects on pregnancy outcomes in a mildly iodine-deficient region. Purpose: The use and contribution of prenatal multivitamins (PMV) as iodine source for pregnant women in China, especially in mildly iodine-deficient region, have not been well studied. This study aimed to explore the association between PMV intake during pregnancy and thyroid function in mothers and newborns.
Methods: We performed a study involving women with a history of taking PMV during pregnancy between January 2013 and October 2015, in Shanghai, a mildly iodine-deficient region. Maternal thyroid function in early and late pregnancy, and neonatal TSH on postnatal d 3 were obtained from medical records. We compared the outcomes in pregnant women who took exclusively iodine-containing PMV (I + PMV) with those who took exclusively non-contained PMV (I- PMV). Propensity score matching (PSM) was used to identify women with similar baseline characteristics.
Results: After PSM, 1280 women in I + PMV and 2560 in I- PMV had similar propensity scores and were included in the analyses. Introduction of I + PMV to women was associated with slightly higher maternal thyroid hormone production (higher maternal FT4, p = 0.01, non-significantly lower TSH, p = 0.79) and lower neonatal TSH levels (p < 0.0001). The frequency of adverse pregnancy outcomes or thyroid dysfunctions did not differ between groups in late pregnancy. Mothers who received I + PMV (0.2 SD) had a stronger association of maternal TSH with neonatal TSH than those who received I- PMV (0.1 SD). These effects were only shown in TPOAb-negative mothers, not in TPOAb-positive mothers.
Conclusion: TPOAb-positive women display an impaired iodine transport in thyroid and placenta, and this may explain the lack of changes in maternal and neonatal thyroid parameters with I + PMV supplementation in these women. This phenomenon might suggest that these women require different iodine doses or treatment approach in comparison with TPOAb-negative women.
abstract_id: PUBMED:4767263
Studies on iodine concentration in psammoma bodies in the thyroids of chronically iodine-deficient rats: effect of iodine repletion. Thyroids of rats fed an iodine-deficient diet for several months contain small psammoma bodies within the follicular lumens which concentrate radioactive iodine. If the iodine-deficient rats are fed a high-iodine diet to produce a "colloid goiter" with reaccumulation of PAS-positive colloid around the psammoma bodies before administering radioactive iodine, the radioactivity is present in follicular cells, around the psammoma bodies and in the colloid 24 hours after radioactive iodine administration. Propylthiouracil (PTU) causes radioactivity to disappear from the cells and colloid but does not produce any appreciable discharge of radioactivity from the psammoma bodies. If radioactive iodine is given to the iodine-deficient rats before feeding a high-iodine diet, radioactivity is initially present chiefly in the cells and psammoma bodies and gradually accumulates in the PAS-positive colloid as this becomes deposited under the influence of the increased dietary iodine. If such rats are fed PTU for 4 days before the high-iodine diet is instituted, radioactivity remains limited almost entirely to the psammoma bodies and does not appear in the accumulating colloid. It is concluded that the psammoma bodies are iodinated directly, rather than forming a nidus for condensation of intrafollicular thyroglobulin after it is iodinated. Although iodine is readily bound to the psammoma bodies, it apparently is not easily removed from these structures under in vivo conditions.
abstract_id: PUBMED:35406006
Iodine Supplementation in Pregnancy in an Iodine-Deficient Region: A Cross-Sectional Survey. Iodine deficiency is a common problem in pregnant women and may have implications for maternal and child health. Iodine supplementation during pregnancy has been recommended by several scientific societies. We undertook a cross-sectional survey to assess the efficacy of these recommendations in a European iodine-deficient region. Urinary iodine concentrations (UIC) were determined in pregnant women before (n = 203) and after (n = 136) the implementation of guidelines for iodine supplementation in pregnancy. Iodine supplementation (200 μg/day) reduced the proportion of pregnant women with severe iodine deficiency (37.4% to 18.0%, p = 0.0002). The median UIC increased from 67.6 µg/L to 106.8 µg/L but remained below the recommended target level (>150 µg/L) for pregnant women. In conclusion, iodine supplementation in pregnant women improved iodine status in this iodine-deficient region but was insufficient to achieve recommended iodine levels in pregnancy. Additional measures, such as the adjustment of the dose or timing of supplementation, or universal salt iodization, may be needed.
abstract_id: PUBMED:31928176
Current Iodine Nutrition Status and Prevalence of Thyroid Disorders in Tibetan Adults in an Oxygen-Deficient Plateau, Tibet, China: A Population-Based Study. Background: Iodine deficiency (ID) is a global problem in individuals living in an iodine-deficient environment, specifically in mountainous regions. However, data regarding the iodine nutritional status of Tibetan people in the plateau are limited. Methods: A population-based survey was conducted from July 2016 to July 2017 in Lhasa, Tibet, including 12 communities in Lhasa city and 10 surrounding rural areas. The iodine nutritional status of Tibetan people was evaluated using the traditional iodine nutrition indexes: urinary iodine concentration (UIC), thyroid size, serum thyroxine, thyrotropin, thyroglobulin antibody and thyroid peroxidase antibody (TPOAb). Results: A total of 2295 healthy participants were screened, and 2160 participants who had completed all the required examinations were enrolled in this study (response rate, 94.1%). Urinary iodine showed a skewed distribution, with a median (upper and lower quartiles) of 154 (99-229) μg/L. The percentages of low iodine (UIC <100 μg/L), adequate iodine (UIC, 100-199 μg/L), and high iodine (UIC ≥200 μg/L) were 25.6%, 42.0%, and 32.4%, respectively. The urinary iodine level in the urban region was higher than that in the rural region (p < 0.05). Urinary iodine levels were lower with increasing age (p < 0.05). The prevalence of hyperthyroidism, hypothyroidism, goiter, TPOAb positivity, and thyroglobulin antibody positivity was 1.0%, 21.8%, 4.7%, 6.6%, and 10.4%, respectively. Logistic regression analysis found that urinary iodine was an independent risk factor for TPOAb positivity (odds ratio = 0.997 [95% confidence interval, 0.995-0.999]; p < 0.001). Conclusions: Compared with individuals living in the plains of China, Tibetan adults have a higher rate of ID. UIC was an independent risk factor for TPOAb positivity. This public health issue should be further investigated.
abstract_id: PUBMED:21737997
Differences between subjects with sufficient and deficient urinary iodine in an area of iodine sufficiency. Background: Iran has long been recognized as a country of iodine sufficiency; however, recent studies show that the proportion of subjects with insufficient urinary iodine is gradually increasing in Tehran capital city.
Aim: The aim of this study was to evaluate differences between individuals with sufficient and deficient urinary iodine in Tehran.
Material And Methods: In this cross-sectional study, 639 Tehranian adult subjects, aged ≥ 19 yr (242 males, 397 females), were enrolled through randomized cluster sampling. A 24-h urine sample was collected for measurement of urinary iodine, sodium and creatinine concentrations using the digestion method, flame photometry and autoanalyzer assay, respectively. Salt intake was estimated and iodine content of household salt was measured by titration.
Results: Medians (interquartile range) of 24-h urinary iodine concentrations in subjects with sufficient and deficient urinary iodine were 163.0 (126.0-235.0) and 44.0 (26.0-67.0) μg/l, p<0.001, respectively. Salt with iodine content of >20 parts per million was consumed by 77.4 and 38.3% of subjects with sufficient and deficient urinary iodine, respectively (p<0.001). Median daily salt intake in subjects with sufficient urinary iodine was significantly higher than in those with deficient urinary iodine (8.1 vs 7.3 g, p<0.001). No significant differences in the mentioned variables were observed between males and females. Fifty and 30% of subjects with insufficient and sufficient urinary iodine had <7 yr education, respectively (p<0.001).
Conclusions: Iodine content of salt, the amount of salt intake and education levels differ greatly between subjects with sufficient and deficient urinary iodine in Tehran.
abstract_id: PUBMED:28946048
pH-dependent synthesis of iodine-deficient bismuth oxyiodide microstructures: Visible-light photocatalytic activity. Bismuth oxyiodides have exhibited high potential for applications in visible-light photocatalytic environmental remediation and solar energy conversion. In this work, a series of iodine-deficient bismuth oxyiodides (Bi4O5I2, Bi7O9I3, Bi5O7I) can be simply prepared through a pH-dependent aqueous procedure with feeding Bi/I ratio of 2:1. The compositions of the Bi-based oxyiodides are closely related to acid-base circumstances, with Bi4O5I2 formed in weakly acidic medium (pH = 5) and Bi7O9I3, Bi5O7I in basic medium (pH = 8 and 11). Morphology differences of nanosheet-assembled Bi4O5I2, Bi7O9I3 architectures and rod-like Bi5O7I microstructures demonstrate different crystalline characters and construction of Bi-based oxyiodide crystals. UV-vis DRS results revealed good visible-light absorptions of Bi4O5I2 and Bi7O9I3 architectures and appropriate band structures for photocatalytic reactions, on comparison to Bi5O7I microrods. Low electrochemical impedance of Bi7O9I3 microflowers with sheet-like units further facilitated the separation of e--h+ carriers in the degradation process. Accordingly, among the bismuth oxyiodide samples, Bi7O9I3 displayed prominent visible-light degradation performance for colorless bisphenol-A (BPA) due to the direct photoexcitation process.
Answer: Yes, Gippsland is environmentally iodine deficient. The mean iodine concentration of water from 18 Gippsland water treatment plants was found to be 0.38 µg L(-1), which is well below the cut-off value for water iodine concentrations considered to be indicative of environmental iodine deficiency (<2 µg L(-1)) (PUBMED:21114698). |
Instruction: Is lymph node dissection required in pulmonary metastasectomy for colorectal adenocarcinoma?
Abstracts:
abstract_id: PUBMED:36168561
The prognostic effect of lymph node dissection in secondary lung tumor metastasectomies. Background: In this study, we aimed to investigate the prognostic effect of lymph node dissection in secondary lung tumor metastasectomies.
Methods: Between January 2001 and December 2015, a total of 201 patients (122 males, 79 females; mean age, 41.4±19.2 years; range, 3 to 79 years) who had pulmonary metastasectomy due to colorectal carcinoma, renal cell carcinoma, and soft tissue and skeletal sarcomas were retrospectively analyzed. The clinicopathological features of the patients, overall survival, and complication rates according to the presence of lymph node dissection were evaluated.
Results: The most common histopathological subtype was sarcoma in 118 (58.7%) patients, followed by colorectal carcinoma in 63 (31.3%) patients, and renal cell carcinoma in 20 (10%) patients. A total of 88 (43.7%) patients underwent systematic lymph node dissection with pulmonary metastasectomy. The mean overall survival of patients with and without lymph node dissection was 49±5.9 (95% confidence interval 37.3-60.6) and 26±4.4 (95% confidence interval 17.2-34.7) months, respectively (p=0.003). The five-year survival rates in colorectal carcinoma, renal cell carcinoma, and sarcoma were 52%, 30%, and 23%, respectively (p=0.002). Locoregional recurrences occurred in 15 (35.7%) patients in the lymph node dissection group and in 23 (60.5%) patients in the non-lymph node dissection group (p=0.026). Lymph node dissection did not show a significant relationship with postoperative complications (p=0.09).
Conclusion: Lymph node dissection following pulmonary metastasectomy may improve the overall survival and reduce locoregional recurrence, without any increase in morbidity and mortality.
abstract_id: PUBMED:34716666
Effects of mediastinal lymph node dissection in colorectal cancer-related pulmonary metastasectomy. Background: The benefits of mediastinal lymph node dissection (MLND) in colorectal cancer-related pulmonary metastasectomy (PM) have been poorly reported. This study aimed to determine whether MLND affects survival in patients undergoing PM and to identify the prognostic factors for survival.
Methods: We retrospectively reviewed 275 patients who had undergone colorectal cancer-related PM from January 2010 to December 2016. MLND was defined as the resection of at least six mediastinal lymph node stations according to the International Association for the Study of Lung Cancer criteria (N1, ≥3 stations; N2, ≥3 stations). The propensity score matching method was used to reduce bias.
Results: Thirty-three (12%) patients underwent MLND, and 13 (4.7%) patients had mediastinal lymph node involvement. This study showed no difference in 5-year overall survival (no MLND, 52.7% vs. MLND, 53.5%; p = 0.81). On multivariable analysis, negative prognostic factors for overall survival were preoperative carcinoembryonic antigen (CEA) level (p < 0.001), a higher number of metastatic nodules (p < 0.001), metastatic nodule size ≥2 cm (p < 0.001), and lymph node involvement (p = 0.006).
Conclusions: Mediastinal lymph node involvement, preoperative CEA level, higher metastatic nodule number, and nodule size negatively affected survival whereas MLND in PM was not associated with survival.
abstract_id: PUBMED:34012609
Lymphadenectomy in pulmonary metastasectomy. Lymph node (LN) removal during pulmonary metastasectomy is a prerequisite to achieve complete resection or at least collect prognostic information, but is not yet generally accepted. On average, the rate of unexpected lymph node involvement (LNI) is less than 10% in sarcoma, 20% in colorectal cancer (CRC) and 30% in renal cell carcinoma (RCC) when radical LN dissection is performed. LNI is a negative prognostic factor and presence of preoperative mediastinal disease usually leads to exclusion of the patient from metastasis surgery. Nonetheless, some authors found excellent prognoses even with mediastinal LNI in colorectal and RCC metastases when radical LN dissection was performed (median survival of 37 and 36 months, respectively). Multiple metastases, central location of the lesion followed by anatomical resections are associated with a higher LNI rate. The real prognostic influence of systematic LN dissection remains unclear. Two positive effects were described after radical lymphadenectomy: a trend for improved survival in RCC patients and a reduction of mediastinal recurrences from 23% to 0% in CRC patients. Unfortunately, there is a great number of studies that do not demonstrate any positive effect of lymphadenectomy during pulmonary metastasectomy except a pseudo stage migration effect. Future studies should not only focus on survival, but also on local and LN recurrence.
abstract_id: PUBMED:34656390
Pulmonary metastasectomy with lymphadenectomy for colorectal pulmonary metastases: A systematic review. Background: Routine lymphadenectomy during metastasectomy for pulmonary metastases of colorectal cancer has been recommended by several recent expert consensus meetings. However, evidence supporting lymphadenectomy is limited. The aim of this study was to perform a systematic review of the literature on the impact of simultaneous lymph node metastases on patient survival during metastasectomy for colorectal pulmonary metastases (CRPM).
Methods: A systematic review was conducted according to the PRISMA guidelines of studies on lymphadenectomy during pulmonary metastasectomy for CRPM. Articles published between 2000 and 2020 were identified from Medline, Embase and the Cochrane Library without language restriction. Grading of Recommendations Assessment, Development and Evaluation (GRADE) framework was used to assess the risk of bias and applicability of included studies. Survival rates were assessed and compared for the presence and level of nodal involvement.
Results: Following review of 8054 studies by paper and abstract, 27 studies comprising 3619 patients were included in the analysis. All patients included in these studies underwent lymphadenectomy during pulmonary metastasectomy for CRPM. A total of 690 patients (19.1%) had simultaneous lymph node metastases. Five-year overall survival for patients with and without lymph node metastases was 18.2% and 51.3%, respectively (p < .001). Median survival for patients with lymph node metastases was 27.9 months compared to 58.9 months in patients without lymph node metastases (p < .001). Five-year overall survival for patients with N1 and N2 lymph node metastases was 40.7% and 10.9%, respectively (p = .064).
Conclusion: Simultaneous lymph node metastases of CRPM have a detrimental impact on survival and this is most apparent for mediastinal lymph node metastases. Therefore, lymphadenectomy during pulmonary metastasectomy for CRPM can be advised to obtain important prognostic value.
abstract_id: PUBMED:22721598
Is lymph node dissection required in pulmonary metastasectomy for colorectal adenocarcinoma? Background: The aim of this study was to clarify the clinical outcome and significance of mediastinal lymph node dissection (LND) during pulmonary resection of metastases from colorectal adenocarcinoma.
Methods: A retrospective chart review was performed. Between April 1985 and December 2009, 518 patients underwent 720 pulmonary metastasectomies for metastatic colorectal adenocarcinoma. Relevant factors were analyzed with the χ2 or Fisher exact test and the Mann-Whitney test. Survival and lymph node (LN) recurrence-free period after pulmonary metastasectomy were analyzed with Kaplan-Meier and Cox proportional hazards methods.
Results: The overall 5-year and 10-year survival rates after pulmonary metastasectomy were 47.1% and 27.7%, respectively. The only significant prognostic factor for survival after pulmonary metastasectomy was mediastinal LN metastasis (p=0.047 in univariate and 0.0028 in multivariate analysis); 199 patients did not undergo LND, 279 patients underwent LND with negative nodes, and 40 patients underwent LND with one or more mediastinal LNs positive for metastases. The sensitivity of positron emission tomographic scan for detecting mediastinal LN metastases was only 35%. Although long-term survivors were present, systematic LND was not a significant factor for prolonged survival (p=0.26) in the positive LND group.
Conclusions: Mediastinal LN metastases are a significant negative prognostic factor for survival after pulmonary metastasectomy for metastatic colorectal cancer. Computed tomography and positron emission tomography based imaging, as well as preoperative carcinoembryonic antigen levels have poor sensitivity for detecting malignant mediastinal LN in this setting. Systematic mediastinal LND should be performed for prognostic purposes during pulmonary metastasectomy for colorectal metastases.
abstract_id: PUBMED:33256771
Prognostic factors in pulmonary metastasectomy and efficacy of repeat pulmonary metastasectomy from colorectal cancer. Background: The rate of pulmonary metastasectomy from colorectal cancer (CRC) has increased with recent advances in chemotherapy, diagnostic techniques, and surgical procedures. The purpose of this study was to investigate the prognostic factors for response to pulmonary metastasectomy and the efficacy of repeat pulmonary metastasectomy.
Methods: This study was a retrospective, single-institution study of 126 CRC patients who underwent pulmonary metastasectomy between 2000 and 2019 at the Gifu University Hospital.
Results: The 3- and 5-year survival rates were 84.9% and 60.8%, respectively. Among the 126 patients, 26 (20.6%) underwent a second pulmonary metastasectomy for pulmonary recurrence after initial pulmonary metastasectomy. Univariate analysis of survival identified seven significant factors: (1) gender (p = 0.04), (2) past history of extra-thoracic metastasis (p = 0.04), (3) maximum tumor size (p = 0.002), (4) mediastinal lymph node metastasis (p = 0.02), (5) preoperative carcinoembryonic antigen (CEA) level (p = 0.01), (6) preoperative carbohydrate antigen 19-9 (CA19-9) level (p = 0.03), and (7) repeat pulmonary metastasectomy for pulmonary recurrence (p < 0.001). On multivariate analysis, only mediastinal lymph node metastasis (p = 0.02, risk ratio 8.206, 95% confidence interval (CI) 1.566-34.962) and repeat pulmonary metastasectomy for pulmonary recurrence (p < 0.001, risk ratio 0.054, 95% CI 0.010-0.202) were significant. Furthermore, in the evaluation of surgical outcomes, the safety of second pulmonary metastasectomy was almost the same as that of initial pulmonary metastasectomy.
Conclusions: Repeat pulmonary metastasectomy is likely to be safe and effective for recurrent cases that meet the surgical criteria. However, mediastinal lymph node metastasis was a significant independent prognostic factor for worse overall survival.
abstract_id: PUBMED:24681037
Risk factors for lymph node metastases and prognosticators of survival in patients undergoing pulmonary metastasectomy for colorectal cancer. Background: Systematic lymph node dissection is not routinely performed in patients undergoing pulmonary metastasectomy (PM) of colorectal cancer. The aim of the study was to identify risk factors for lymph node metastases (LNM) and to determine prognosticators for survival in colorectal cancer patients with pulmonary metastases.
Methods: We retrospectively reviewed our prospective database of 165 patients with colorectal cancer undergoing PM and systematic lymph node dissection with curative intent from 1999 to 2009. The χ(2) test, regression analyses, Kaplan-Meier analyses, log rank tests, and Cox regression analyses were used to determine prognosticators for LNM and survival.
Results: The prevalence of LNM was 22.4%. Lymph node metastases were more often detected in cases of rectal cancer and when anatomic resections, in terms of segmentectomy or lobectomy, had to be performed for PM. The number of pulmonary metastases showed a nonlinear association with the risk of positive postoperative LNM. For 1 to 10 pulmonary metastases, each additional pulmonary metastasis conferred a 16% increase in risk for LNM. Rectal cancer, M-status of the primary tumor, number of pulmonary metastases, and disease progression during pre-PM chemotherapy were independent prognosticators for survival. Lymph node metastases were not an independent prognosticator.
Conclusions: Rectal cancer, required anatomic resections, and multiple metastases were risk factors for LNM. Rectal cancer, M-status of the primary tumor, number of pulmonary metastasis, and disease progression during pre-PM chemotherapy were independent negative predictors of survival, stratifying patients with poor prognosis who may benefit from chemotherapy before or after PM.
abstract_id: PUBMED:25440271
Unexpected lymph node disease in resections for pulmonary metastases. Background: Pulmonary metastasectomy is widely accepted for different malignant diseases. The role of mediastinal lymph node (LN) dissection in these procedures is discussed controversially. We evaluated our results of LN removal at the time of pulmonary metastasectomy with respect to the frequency of unexpected LN disease.
Methods: This was a retrospective analysis of 313 resections performed in 209 patients. Operations were performed in curative intention. Patients with known thoracic LN involvement and those without lymphadenectomy (n = 43) were excluded. Patients were analyzed according the type of LN dissection. Subgroups of different primary cancers were evaluated separately.
Results: Sublobar resections were performed in 256 procedures with lymphadenectomy, and 14 patients underwent lobectomy. Patients underwent radical lymphadenectomy (n = 158) or LN sampling (n = 112). The overall incidence of unexpected tumor in LN was 17% (radical lymphadenectomy, 15.8%; sampling, 18.8%). Unexpected LN involvement was found in 17 patients (35.5%) with breast cancer, in 120 (9.2%) with colorectal cancer, and in 53 (20.8%) with renal cell carcinoma. The 5-year survival was 30.2% if LN were tumor negative and 25% if positive (p = 0.19). LN sampling vs radical removal had no significant effect on 5-year survival (23.6% vs 30.9%; p = 0.29).
Conclusions: Dissection of mediastinal LN in resection of lung metastases will reveal unexpected LN involvement in a relevant proportion of patients, in particular in breast and renal cancer. Routine LN dissection appears necessary and may become important for further therapeutic decisions. On the basis of our data, LN sampling seems to be sufficient.
abstract_id: PUBMED:29843186
Predictive Factors of Thoracic Lymph Node Metastasis Accompanying Pulmonary Metastasis from Colorectal Cancer. Background: The aim of this study was to identify the factors predicting thoracic lymph node (LN) metastases for pulmonary resection from colorectal cancer (CRC).
Methods: The records of 160 patients who underwent pulmonary metastasectomy for CRC were retrospectively reviewed. Clinicopathologic factors were analyzed with chi-square test or t-test and logistic regression to identify predictable factors for LN metastases.
Results: Sixty patients (37.5%) underwent LN dissection during pulmonary metastasectomy, and LN metastases were found in five patients. Twenty-three patients had LN recurrence among the 100 patients (62.5%) without LN dissection during the follow-up period. Twenty-eight patients out of 160 (17.5%) had LN metastases. By multivariate analysis, the number of pulmonary metastasis and metastasis from colon cancers were independent factors predicting LN metastases.
Conclusion: The number of pulmonary metastasis and metastasis from colon cancers were independent factors predicting LN metastases. LN sampling should be performed especially in cases with strong predictive factors to improve staging and help guide further treatment.
abstract_id: PUBMED:30811031
Surgical management of colorectal lung metastases. Pulmonary metastasectomy for colorectal cancer is an established means of treatment for select patients. This article will highlight the recent evidence published in the literature related to current practices for the surgical management of colorectal lung metastases and propose a diagnostic algorithm for use in clinical practice. It will also discuss controversies related to pulmonary metastasectomy, including the optimal timing of surgery, the extent of lymph node sampling/dissection, and the extent of surgical resection.
Answer: The necessity of lymph node dissection in pulmonary metastasectomy for colorectal adenocarcinoma is a subject of ongoing debate. Several studies have investigated the impact of lymph node dissection on patient outcomes, with varying conclusions.
One study found that mediastinal lymph node metastases are a significant negative prognostic factor for survival after pulmonary metastasectomy for metastatic colorectal cancer. Systematic mediastinal lymph node dissection (LND) was recommended for prognostic purposes during pulmonary metastasectomy for colorectal metastases, despite the fact that systematic LND was not a significant factor for prolonged survival in the positive LND group (PUBMED:22721598).
Another study showed that mediastinal lymph node involvement, preoperative carcinoembryonic antigen (CEA) level, higher metastatic nodule number, and nodule size negatively affected survival. However, mediastinal lymph node dissection (MLND) in pulmonary metastasectomy was not associated with survival, suggesting that MLND may not be required for all patients undergoing pulmonary metastasectomy for colorectal cancer (PUBMED:34716666).
A systematic review concluded that simultaneous lymph node metastases of colorectal pulmonary metastases (CRPM) have a detrimental impact on survival, particularly for mediastinal lymph node metastases. Therefore, lymphadenectomy during pulmonary metastasectomy for CRPM can be advised to obtain important prognostic value (PUBMED:34656390).
Other studies have identified lymph node involvement as a negative prognostic factor and have suggested that lymph node dissection may improve overall survival and reduce locoregional recurrence without increasing morbidity and mortality (PUBMED:36168561). However, the real prognostic influence of systematic lymph node dissection remains unclear, with some studies not demonstrating any positive effect of lymphadenectomy during pulmonary metastasectomy except a pseudo stage migration effect (PUBMED:34012609).
In summary, while lymph node metastases are associated with worse survival outcomes, the requirement for lymph node dissection in pulmonary metastasectomy for colorectal adenocarcinoma is not definitively established and may depend on individual patient factors and the presence of predictive factors for lymph node metastases (PUBMED:33256771, PUBMED:24681037, PUBMED:25440271, PUBMED:29843186). Further research and individualized patient assessment are necessary to determine the role of lymph node dissection in the management of colorectal lung metastases. Some studies suggest that lymph node sampling may be sufficient, as it can reveal unexpected lymph node involvement in a significant proportion of patients, particularly in breast and renal cancer, and may become important for further therapeutic decisions (PUBMED:25440271).
The predictive factors for thoracic lymph node metastases have also been studied, with the number of pulmonary metastases and metastasis from colon cancers identified as independent factors predicting lymph node metastases. This suggests that lymph node sampling should be performed, especially in cases with strong predictive factors, to improve staging and help guide further treatment (PUBMED:29843186).
Overall, while there is evidence to suggest that lymph node dissection can provide prognostic information and may influence survival outcomes, the extent of lymph node sampling or dissection remains a recognized controversy in the surgical management of colorectal lung metastases (PUBMED:30811031). The decision to perform lymph node dissection during pulmonary metastasectomy for colorectal adenocarcinoma should therefore be tailored to the individual patient, based on the presence of risk factors for lymph node metastasis and the potential impact on survival and treatment planning.
Instruction: Is it safe to ligate the left renal vein during open abdominal aortic aneurysm repair?
Abstracts:
abstract_id: PUBMED:20413260
Is it safe to ligate the left renal vein during open abdominal aortic aneurysm repair? Background: Open repair of juxta-renal abdominal aortic aneurysms (AAA) sometimes involves the ligation and division of the left renal vein (LRV). Some surgeons advocate repair, but this is not common practice. The aim was to study the effect of LRV ligation on renal function.
Methods: A retrospective audit of all open AAA repairs between February 2004 and September 2007 in our unit was completed. Pre- and postoperative renal function was assessed with the estimated glomerular filtration rate (eGFR), using an established formula.
Results: Two hundred sixty-one open AAA repairs were performed in the study period. The LRV was ligated in 18.8%; mean age was 75.5 years, 35 were men, mean AAA diameter was 7.8 cm, there were 7 elective, 22 urgent, and 19 emergency AAA repairs. Renal function with LRV ligated was compared with the 212 patients without LRV ligation by independent samples t-testing. The baseline mean serum creatinine and glomerular filtration rate in the LRV ligated group were 115.1 micromol/L and 60.6, respectively, which were similar to the LRV not ligated group (p > 0.05). The renal function at postoperative day 1, day 7, and weeks 2-6 was similar in the two groups (p > 0.05). The postoperative renal function on day 1 was significantly worse compared to baseline (p < 0.05), but not at day 7 and weeks 2-6 (p > 0.05).
Conclusion: In patients undergoing LRV ligation, there is an initial drop in renal function which improves over 2-6 weeks. At each stage, the renal function is similar to patients in whom the LRV is not ligated. LRV ligation is safe during open AAA repair.
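Editor's note: the study above assessed renal function with the estimated glomerular filtration rate "using an established formula" without naming it. As a point of reference only, the short sketch below assumes the widely used 4-variable MDRD study equation; the equation and creatinine calibration actually used in the study are unknown, so the output is illustrative and not expected to reproduce the reported values. The factor 88.4 converts serum creatinine from micromol/L to mg/dL.

def egfr_mdrd(creatinine_umol_per_l, age_years, is_female, is_black=False):
    # 4-variable MDRD study equation, eGFR in mL/min/1.73 m^2 (an assumed formula;
    # the abstract does not state which eGFR equation was used).
    scr_mg_dl = creatinine_umol_per_l / 88.4
    egfr = 175.0 * scr_mg_dl ** -1.154 * age_years ** -0.203
    if is_female:
        egfr *= 0.742
    if is_black:
        egfr *= 1.212
    return egfr

# Hypothetical call using the cohort's mean values (age 75.5 years, creatinine 115.1 micromol/L, male).
# The result will differ somewhat from the reported baseline of about 60 mL/min/1.73 m^2,
# since the study's exact equation and calibration are unknown.
print(round(egfr_mdrd(115.1, 75.5, is_female=False), 1))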
abstract_id: PUBMED:36069635
Retroaortic left renal vein in open thoracoabdominal aneurysm repair: A modified approach. The incidence of retroaortic left renal vein (RLRV) is less than 6%. This anatomical variation hinders the exposure and anastomosis of visceral arteries during open thoracoabdominal aneurysm repair. This situation may warrant division and ligation of the RLRV using the conventional retroperitoneal approach. This report describes a modified approach wherein the vein is not divided, thereby improving its surgical exposure.
abstract_id: PUBMED:33163748
Endovascular repair of a ruptured abdominal aortic aneurysm after endovascular aneurysm repair due to a type IB endoleak associated with a late fistula between the abdominal aorta and a retroaortic left renal vein. A ruptured abdominal aortic aneurysm after endovascular aneurysm repair with an arteriovenous fistula between the aneurysm sac and a retroaortic left renal vein is an extremely rare complication. This case describes an 81-year-old man who developed an aorto-left renal vein fistula owing to a type IB endoleak 2 years after endovascular aneurysm exclusion. The leak was repaired with a left endograft limb extension. Endovascular techniques are attractive and feasible alternatives and can play an essential role in reinterventions. This report is the first of an aorto-left renal vein fistula owing to a type IB endoleak after an endovascular aneurysm repair.
abstract_id: PUBMED:35491763
Complete Endovascular Repair for Abdominal Aortic Aneurysm With Concomitant Aorto-Left Retroaortic Renal Vein Fistula. Background: Abdominal aortic aneurysm (AAA) with concomitant aorto-retroaortic left renal vein fistula (ALRVF) is an extremely rare clinical condition. With the recent development of endovascular techniques, repair of such conditions with a completely minimally invasive approach is now possible. We report here a case of endovascular repair of AAA with concomitant ALRVF.
Case Presentation: A 62-year-old gentleman presenting with AAA and concomitant ALRVF underwent complete endovascular repair, including endovascular aortic aneurysm repair (EVAR) with a bifurcated aortic graft as well as embolization of the aneurysm sac and deployment of a covered stent in the left retroaortic renal vein to achieve sealing of the arteriovenous fistula. The patient required no blood transfusion and no ICU stay. He has been followed up closely for 4 years and has been well clinically. Aneurysm sac size has remained stable.
Conclusions: Endovascular repair can be a safe and reliable surgical alternative to treat AAA with concomitant ALRVF. But long-term follow up and more clinical data are required to verify the durability of endovascular repair for such conditions.
abstract_id: PUBMED:24386025
Aorto-left renal vein fistula caused by a ruptured abdominal aortic aneurysm. Retroaortic left renal vein is a malformation in which the left renal vein courses dorsal to the abdominal aorta. In patients with abdominal aortic aneurysm, an aorto-left renal vein fistula can form if the left renal vein is sandwiched between the aneurysm wall and lumbar vertebrae. The patient was an 84-year-old man with lower back pain. We performed contrast-enhanced computed tomography (CT), although renal dysfunction was noted. The CT showed a ruptured juxta-renal abdominal aortic aneurysm with an aorto-left renal vein fistula. This clinical condition can cause severe renal dysfunction, in spite of which a contrast-enhanced CT scan would be extremely informative preoperatively.
abstract_id: PUBMED:31316040
Effect of Intraoperative Division of the Left Renal Vein on the Fate of Renal Function and Left Renal Volume After Open Repair of Para- and Juxtarenal Aortic Aneurysm. Background: The effect of left renal vein division (LRVD) during open surgery (OS) for pararenal and juxtarenal abdominal aortic aneurysm (P/JRAA) on postoperative renal function remains controversial, so we focused on chronic renal decline (CRD) and separately examined renal volume as a surrogate index of split renal function. Methods and Results: The 115 patients with P/JRAA treated with OS from June 2007 to January 2017 were reviewed: 26 patients without LRVD were matched to 27 patients with LRVD according to preoperative chronic kidney disease (CKD) stage and proximal clamp sites. The effect of LRVD on CRD was investigated by a time-to-event analysis. During a median follow-up of 23.5 months, CRD occurred in 5 patients with LRVD and in 4 patients without LRVD. Comparison of freedom from CRD showed no significant difference between the matched groups (P=0.870). The separate renal volumes were evaluated before surgery and at 1 and 2 years of follow-up using CT images from 18 patients with LRVD. At 2 years, the mean renal volume had decreased by 15% in the left kidney and by 9% in the right kidney (P=0.052 and 0.148, respectively), but the left-to-right renal volume ratio showed no significant change (P=0.647).
Conclusions: LRVD had no significant effect on CRD or left renal volume relative to the right renal volume for up to 2 years.
abstract_id: PUBMED:14652842
Renal artery clamping and left renal vein division during abdominal aortic aneurysm repair. Objectives: To determine whether renal artery clamping and division of the left renal vein affects renal function in the patients who undergo repair of infrarenal abdominal aortic aneurysm (AAA).
Methods: Between 1992 and 2000, 267 patients had open surgery for infrarenal AAA. Of these, 22 (8%) required temporary bilateral (15) or unilateral (7) renal artery clamping. Eight of these patients also had the left renal vein divided, and in three of them the vein was re-anastomosed.
Results: Renal artery clamping and/or renal vein divisions did not affect the incidence of complications and long term renal failure.
Conclusions: Clamping of the renal arteries and/or renal vein division during AAA surgery does not in itself compromise short or long term renal function.
abstract_id: PUBMED:37075832
Left Renal Vein Division during Open Surgical Repair for Abdominal Aortic Aneurysm May Cause Long-Term Kidney Remodeling. Background: Left renal vein division (LRVD) is a maneuver performed during open surgical repair for abdominal aortic aneurysms. Even so, the long-term effects of LRVD on renal remodeling are unknown. Therefore, we hypothesized that interrupting the venous return of the left renal vein might cause renal congestion and fibrotic remodeling of the left kidney.
Methods: We used a murine left renal vein ligation model with 8-week-old to 12-week-old wild-type male mice. Bilateral kidneys and blood samples were harvested postoperatively on days 1, 3, 7, and 14. We assessed the renal function and the pathohistological changes in the left kidneys. In addition, we retrospectively analyzed 174 patients with open surgical repairs between 2006 and 2015 to assess the influence of LRVD on clinical data.
Results: Temporary renal decline with left kidney swelling occurred in a murine left renal vein ligation model. In the pathohistological assessment of the left kidney, macrophage accumulation, necrotic atrophy, and renal fibrosis were observed. In addition, myofibroblast-like macrophages, which are involved in renal fibrosis, were observed in the left kidney. We also noted that LRVD was associated with temporary renal decline and left kidney swelling. LRVD did not, however, impair renal function in long-term observation. Additionally, the relative cortical thickness of the left kidney in the LRVD group was significantly lower than that of the right kidney. These findings indicated that LRVD was associated with left kidney remodeling.
Conclusions: Venous return interruption of the left renal vein is associated with left kidney remodeling. Furthermore, interruption in the venous return of the left renal vein does not correlate with chronic renal failure. Therefore, we suggest careful follow-up of renal function after LRVD.
abstract_id: PUBMED:24417989
Effects of left renal vein division on postoperative renal function during open repair of abdominal aortic aneurysm. Objective: To explore the effects of left renal vein division (LRVD) on postoperative renal function and examine the overall prognosis in patients undergoing open repair of abdominal aortic aneurysm (AAA).
Methods: Retrospective analyses were conducted for the clinical data of AAA patients with open repair at our hospital from January 2000 to December 2011. They were divided into LRVD (n = 35) and non-LRVD (n = 141) groups. The 30-day mortality, cardio-cerebrovascular complications, pulmonary complications, preoperative and postoperative levels of creatinine and glomerular rate filtration (GFR), aortic cross-clamping time, blood loss volume, intensive care duration and dialysis or continuous renal replacement therapy (CRRT) rate were compared between two groups.
Results: A total of 189 AAA repairs were performed, and 13 patients with suprarenal clamping were excluded. The LRVD group had a significantly higher proportion of ruptured AAA (48.6% vs 27.7%, P < 0.01), higher 3-day postoperative creatinine (P < 0.01), longer intensive care duration (P < 0.05) and decreased 3-day postoperative GFR (P < 0.01). No significant difference existed in 30-day mortality, incidence of major complications, creatinine or GFR at discharge (P > 0.05).
Conclusions: LRVD during open surgery of AAA has no effect on the postoperative mortality and renal function.
abstract_id: PUBMED:35347391
Left Renal Vein Division for Juxtarenal Aortic Exposure: Influence on Renal Function and Role of the Communicating Lumbar Vein. Background: In this study, we evaluate the outcome of renal function in patients undergoing juxtarenal abdominal aortic aneurysm repair with or without division of the left renal vein with special focus on the role of the communicating lumbar vein.
Methods: A retrospective analysis of prospectively collected data of 110 patients undergoing elective juxtarenal abdominal aortic aneurysm repair between 2000 and 2018 was performed. The demographic characteristics and comorbidities were reviewed in detail and the renal function was analysed pre- and post-operatively. The cohort of patients was split into group A (left renal vein divided) and B (left renal vein mobilised). Group A was further sub-analysed regarding the presence of a communicating lumbar vein on preoperative imaging data (group A+ = vein present, group A- = no communicating lumbar vein present).
Results: The patients were matched well regarding their demographic characteristics and comorbidities. In the analysis of renal function, no statistically significant difference could be detected between group A and B. In the sub-analysis of group A, the group with a communicating lumbar vein (group A+) turned out to have a significantly better renal function in the long term (sCrea 0.87 vs. 1.51; p = 0.016).
Conclusion: Ligation of the left renal vein is a safe procedure in surgery of juxtarenal aortic aneurysms regarding the outcome of the renal function. A communicating lumbar vein between the left renal vein and the left ascending lumbar vein seems to play a key role in providing venous drainage after division of the left renal vein.
Answer: Yes, it is generally considered safe to ligate the left renal vein (LRV) during open abdominal aortic aneurysm (AAA) repair. Studies have shown that patients undergoing LRV ligation experience an initial drop in renal function, which improves over 2-6 weeks, and at each stage, the renal function is similar to patients in whom the LRV is not ligated (PUBMED:20413260). Additionally, the effect of LRV division on chronic renal decline (CRD) and left renal volume was not significant up to 2 years post-surgery (PUBMED:31316040). Furthermore, clamping of the renal arteries and/or renal vein division during AAA surgery does not in itself compromise short or long term renal function (PUBMED:14652842).
However, the long-term effects of LRV division on renal remodeling are not fully understood. One study using a murine model suggested that interrupting the venous return of the LRV might cause renal congestion and fibrotic remodeling of the left kidney, but this did not correlate with chronic renal failure, indicating that careful follow-up of renal function after LRV division is advisable (PUBMED:37075832). Another study found that the presence of a communicating lumbar vein seems to play a key role in providing venous drainage after division of the LRV, which could be beneficial for long-term renal function (PUBMED:35347391).
In cases where the left renal vein is not divided, such as with a retroaortic left renal vein (RLRV), modified surgical approaches have been described to improve exposure without the need for division (PUBMED:36069635). Endovascular repair techniques have also been successfully used in cases of AAA with concomitant aorto-left retroaortic renal vein fistula, providing a minimally invasive alternative with good short-term outcomes, although long-term data is still required (PUBMED:35491763).
Overall, while the division of the LRV during open AAA repair is considered safe, individual patient factors, the presence of anatomical variations, and the potential for long-term renal remodeling should be considered in surgical planning. |
Instruction: Do totally occlusive nasal packs after nasal surgery increase the risk of immediate respiratory distress during recovery from anesthesia?
Abstracts:
abstract_id: PUBMED:27107602
Do totally occlusive nasal packs after nasal surgery increase the risk of immediate respiratory distress during recovery from anesthesia? Objectives: This study aims to compare the risk of immediate respiratory distress (IRD) during recovery from anesthesia between nasal surgery with totally occlusive nasal packing and non-respiratory tract-related surgeries.
Patients And Methods: A total of 300 patients (180 males, 120 females; mean age 30.1±8.2 years; range 18 to 52 years) were included in the study. The patients were assigned to one of two age- and sex-matched groups according to surgery type: 1) patients undergoing nasal surgery with totally occlusive nasal packs for nasal septum deviation or 2) patients undergoing non-respiratory tract surgeries for various diseases. Immediate respiratory distress was defined as any unanticipated hypoxemia, hypoventilation or upper-airway obstruction (stridor or laryngospasm) requiring an active and specific intervention.
Results: The patients who underwent nasal surgery with totally occlusive nasal packs had a 6.25 times higher risk of IRD than the patients who underwent non-respiratory tract surgery during recovery from general anesthesia. Smokers had a 4.8 times higher risk of having IRD than non-smokers during the post-extubation phase. There were no significant differences in the incidence of IRD between males and females.
Conclusion: Based on our study results, totally occlusive nasal packs and smoking were associated with poor extubation status at the end of the surgical procedure.
abstract_id: PUBMED:18329208
Risk of respiratory distress in the patients who were applied nasal packing at the end of nasal surgery. Objective: This prospective study investigated the risk of respiratory distress in patients who received nasal packing at the end of nasal surgery, and the effects of nasal packing on consciousness level while the patients were awake or asleep, as measured by the Bispectral Index (BIS).
Methods: The study group consisted of 15 adult patients (10 male, 5 female) who received nasal packing at the end of nasal surgery. The control group consisted of 15 adult patients (10 male, 5 female) who received general anesthesia for various reasons. In the study and control groups, the BIS index, respiratory rate, peripheral oxygen saturation, pulse rate and blood pressure were measured at seven different times.
Results: There was no statistically significant difference between the BIS indexes of the study and control groups. In the fourth hour after sleep (AS-4h), the respiratory rate of the study group was significantly lower than that of the control group. In the fourth hour after the anesthesia (AA-4h), the oxygen saturation value of the study group was lower than that of the control group.
Conclusion: We conclude that in patients who receive nasal packing at the end of nasal surgery, there may be a risk of decreased oxygen saturation at AA-4h and of a decreased respiratory rate at AS-4h. Therefore, it is necessary to monitor non-invasive respiratory parameters and to give enriched oxygen by an oral catheter.
abstract_id: PUBMED:24635981
Congenital nasal obstruction due to pyriform aperture stenosis. A case series. Nasal obstruction in neonates is a potentially fatal condition due to their exclusive nasal breathing. The main cause is inflammatory or infectious rhinitis. Congenital, neoplastic, traumatic or iatrogenic causes are less frequent. Choanal atresia is the most common congenital nasal anomaly. A less common etiology of congenital nasal obstruction is pyriform aperture stenosis. Suspicion might arise in any newborn with varying degrees of stridor and respiratory distress, associated with the difficulty of passing a probe through anterior nares. Diagnosis should be confirmed by a computed tomography of the craniofacial massif. The therapeutic approach will depend on the severity of symptoms. We describe our experience with 5 patients with this condition, treated surgically using a sub-labial approach, and followed by nasal stenting.
abstract_id: PUBMED:25885732
Postoperative airway management after nasal endoscopic sinus surgery: A comparison of traditional nasal packing with nasal airway. Background: Nasal packing after nasal surgery can be extremely hazardous and can lead to airway complications such as dyspnea and respiratory obstruction.
Objective: The present study aimed to compare traditional nasal packing with a nasal airway for airway management during the immediate postoperative period in patients undergoing fibreoptic endoscopic sinus surgery (FESS) under general anaesthesia (GA).
Materials And Methods: The study population consisted of 90 ASA grade I and II patients aged 16 to 58 years who underwent FESS under GA. Patients were randomly assigned to three groups of 30 patients each: Group NP, Group UA and Group BA. At the end of surgery, Group NP patients were managed with traditional bilateral nasal packing, while a presterilized 5 mm ID uncuffed ETT was cut to an appropriate size and inserted into one nostril in the UA group and into both nostrils in the BA group. During the postoperative period, the following parameters and variables were observed over 24 hours: any respiratory distress or obstruction, pain and discomfort, oxygen saturation, heart rate, blood pressure, bleeding episodes, ease of suctioning through the nasal airway, anaesthesiologists' and surgeons' satisfaction during the postoperative period, discomfort during removal of the nasal airway, and any fresh bleeding episode during removal of the nasal airway. The data were compiled and analyzed using the chi-square test and ANOVA with post-hoc testing. A value of P < 0.05 was considered significant and P < 0.0001 highly significant.
Results: The postoperative mean cardio-respiratory parameters showed significant differences between the NP group and the patients of the UA and BA groups (P < 0.05), while the intergroup comparison between UA and BA was non-significant (P > 0.05). Pain and discomfort, bleeding episodes, ease of suctioning through the nasal airway, pain and bleeding during removal of the nasal airway (P < 0.0001), as well as surgeons' and anaesthesiologists' satisfaction criteria, showed significant differences in the NP group compared with the UA and BA groups (P < 0.05).
Conclusion: The present intervention to maintain airway patency can be termed excellent, with additional benefits such as ease of suctioning, oxygen supplementation and a possible haemostatic effect due to pressure on the operated site. The low cost of the modified nasal airway and its easily replicable design were the standout observations of the present study.
abstract_id: PUBMED:24604163
Congenital nasal pyriform aperture stenosis: is there a role for nasal dilation? Importance: Congenital nasal pyriform aperture stenosis (CNPAS) may require sublabial drill-out of the pyriform aperture when symptoms are severe or refractory to medical therapy. Less invasive nasal dilation decreases potential morbidity to neonates with severe CNPAS.
Objective: To determine the outcome of patients with CNPAS who underwent nasal dilation alone without other surgical therapy.
Design, Setting, And Participants: A retrospective case series at a tertiary pediatric hospital involving neonates with CNPAS.
Interventions: Nasal dilation using Hegar cervical dilators in neonates with severe CNPAS.
Main Outcomes And Measures: Avoidance of sublabial pyriform aperture drill-out and length of stay in the hospital after treatment.
Results: Four patients (median age, 15 days) had respiratory distress and feeding difficulties. Nasal stenosis was suspected, and maxillofacial computed tomography scans revealed a mean pyriform aperture width of 4.5 mm. Medical therapy was initiated, but symptoms persisted. Direct laryngoscopy, rigid bronchoscopy, and nasal endoscopy with nasal dilation to at least 4 mm were performed in 4 patients without postoperative stenting. Mean length of stay after treatment was 4 days. Two patients underwent repeat nasal dilation on postoperative days 18 and 23. All 4 patients remained free of nasal disease in a median follow-up of 4.5 months.
Conclusions And Relevance: Four patients with severe CNPAS were successfully treated with nasal dilation without pyriform aperture bone removal or nasal stenting. This series, while small, suggests that nasal dilation may be a therapeutic option for severe CNPAS that decreases the risks of open surgery and subsequent stent use.
abstract_id: PUBMED:16567165
Congenital nasal pyriform aperture stenosis: a rare cause of neonatal nasal obstruction. Congenital nasal pyriform aperture stenosis has been described as an unusual cause of neonatal nasal obstruction. Clinical suspicion is based on respiratory distress, cyclic cyanosis, apneas, and feeding difficulties. A bony overgrowth of the maxillary nasal processes is thought to be responsible for this deformity. This anomaly has been reported as an isolated feature or can be associated with craniofacial or central nervous system anomalies. Surgery is indicated in cases of severe respiratory distress, feeding difficulties, and when conservative methods fail.
abstract_id: PUBMED:7675492
Congenital nasal pyriform aperture (bony inlet) stenosis. Congenital nasal pyriform aperture stenosis should be considered in the differential diagnosis of infants with nasal airway obstruction. It can occur as an isolated anomaly or it can be associated with other congenital anomalies. A history of cyclical respiratory distress and cyanosis often associated with feeding and relieved by crying is characteristic. On examination, the nasal airway is narrowed anteriorly. CT is the study of choice to make the diagnosis and rule out other causes of nasal obstruction. Conservative management is recommended, sometimes with the use of a McGovern nipple for feeding. Severe cases or those in which conservative management fails should be considered for surgery.
abstract_id: PUBMED:15141758
Polysomnographic effects of nasal surgery for snoring and obstructive sleep apnea. Objective: It has been hypothesized that nasal obstruction causes an increase in negative pressure in the upper airway and induces an inspiratory collapse at the pharyngeal level. We used portable polysomnography (PSG) to assess the efficacy of nasal surgery for snoring and obstructive sleep apnea (OSA).
Material And Methods: We reviewed 21 patients who presented with nasal obstruction and snoring. Septal surgery with or without inferior turbinectomy was performed. Each patient was assessed pre- and postoperatively using PSG. We measured the respiratory distress index (RDI), apnea index (AI), oxygen saturation index (OSI) and the duration of snoring. Selection criteria were an RDI of > 15 as determined by PSG and clinical nasal obstruction and a deviated nasal septum as determined by physical examination.
Results: Nasal surgery had the following effects: RDI decreased from 39 to 29 (p = 0.0001), AI decreased from 19 to 16 (p = 0.0209), OSI decreased from 48 to 32 (p = 0.0001) and the duration of snoring decreased from 44% to 39% (p = 0.1595). Snoring and OSA were completely relieved in 4 patients (19%) who did not require any additional surgical therapy.
Conclusion: Snoring and OSA may be corrected merely by septal surgery in some patients, and secondary surgery (uvulopalatoplasty) may be considered after a thorough evaluation by means of postoperative PSG.
abstract_id: PUBMED:38465174
Navigating Congenital Nasal Obstruction: A Contemporary Surgical Paradigm. Congenital nasal pyriform aperture stenosis (CNPAS) is an uncommon form of nasal airway obstruction in newborns, manifesting as respiratory distress, that needs prompt surgical correction when medical therapy cannot address the problem adequately. In this case study, two newborns were diagnosed with CNPAS following CT of the paranasal sinuses after the infants demonstrated persistent symptoms of upper airway obstruction. The narrowing of the nasal pyriform aperture, with a mean width of 0.65 cm in these newborns, was insufficient to allow breathing through the nostrils. Bedside flexible endoscopy examinations revealed laryngomalacia in both of these infants. Supraglottoplasty, surgical nasal dilation, and stenting were performed without requiring a sublabial drill-out of the pyriform aperture, allowing total resolution of the initial respiratory symptoms. Thus, successful nasal enlargement was accomplished. During the post-operative follow-up period, no recurrences were observed. Both patients with CNPAS were successfully treated with nasal dilatation and nasal stenting instead of the traditional pyriform aperture bone removal by a sublabial approach. Despite being a small series, this report demonstrates that nasal dilatation and stenting may be considered an alternative procedure in selected CNPAS cases because it lowers the risk of open surgery and presumably offers an effective management option.
abstract_id: PUBMED:38440456
Polysomnographic Assessment on Osahs Outcomes in Patients with Nasal Obstruction Undergoing Septoplasty with Partial Inferior Turbinectomy. The relationship between nasal obstruction and obstructive sleep apnea has attracted interest among otolaryngologists for years. There are studies that suggest that surgical correction of nasal obstruction improves sleep quality and reduces symptoms of sleep apnea. This led to our study to understand the effect of nasal surgery on obstructive sleep apnea hypopnea syndrome (OSAHS).
Aim: To assess the effect of nasal surgery in improvement in Obstructive Sleep Apnoea Hypopnoea Syndrome in patients with nasal obstruction by Polysomnography (PSG).
Material And Methods: This study included 30 patients with nasal obstruction who underwent septoplasty and/or a turbinate reduction procedure, with pre- and postoperative assessment of the respiratory distress index (RDI), including the apnoea hypopnoea index (AHI), obstructive apnoea index (OAI) and snoring index (SI), using polysomnography (PSG).
Result: Nasal correction surgery showed statistically significant improvement (p-value < 0.001) in RDI from 13.66 to 6.66, OAI from 6.34 to 3.18 and Snoring Index from 161.77 to 62.23 as assessed by polysomnography. There was statistically significant improvement in minimal saturation levels (during sleep) and positional sleep apnoea.
Conclusion: Isolated nasal surgery like septoplasty and/or turbinate reduction improved sleep parameters and alleviated OSA symptoms in patients with static nasal obstruction and obstructive sleep apnoea/hypopnoea syndrome. However, patients with multilevel or dynamic airway obstruction may need further intervention.
Answer: Yes, totally occlusive nasal packs after nasal surgery do increase the risk of immediate respiratory distress (IRD) during recovery from anesthesia. A study comparing the risk of IRD between patients undergoing nasal surgery with totally occlusive nasal packs and those undergoing non-respiratory tract surgeries found that the former group had a 6.25 times higher risk of IRD during recovery from general anesthesia. Additionally, smokers in the nasal surgery group had a 4.8 times higher risk of IRD than non-smokers during the post-extubation phase (PUBMED:27107602). |
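Editor's note: the "6.25 times higher risk" cited above is a ratio measure; the abstract does not report the underlying event counts or state whether the figure is a relative risk or an odds ratio. The sketch below, using hypothetical counts chosen only to yield a ratio of the same magnitude, shows how both measures are derived from a 2x2 exposure/outcome table.

def two_by_two_measures(events_exposed, n_exposed, events_unexposed, n_unexposed):
    # Relative risk and odds ratio for a simple 2x2 table:
    # rows = exposure (occlusive packing vs none), columns = outcome (IRD vs no IRD).
    a, b = events_exposed, n_exposed - events_exposed
    c, d = events_unexposed, n_unexposed - events_unexposed
    relative_risk = (a / n_exposed) / (c / n_unexposed)
    odds_ratio = (a * d) / (b * c)
    return relative_risk, odds_ratio

# Hypothetical counts (NOT from the study): 25/150 IRD events with occlusive packs vs 4/150 without.
rr, odds = two_by_two_measures(25, 150, 4, 150)
print(f"relative risk = {rr:.2f}, odds ratio = {odds:.2f}")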
Instruction: Do MRI and mammography reliably identify candidates for breast conservation after neoadjuvant chemotherapy?
Abstracts:
abstract_id: PUBMED:25777093
Do MRI and mammography reliably identify candidates for breast conservation after neoadjuvant chemotherapy? Background: Neoadjuvant chemotherapy (NAC) may allow breast-conserving therapy (BCT) in patients who require mastectomy at presentation. Breast MRI is more accurate than mammography in assessing treatment response, but combined test reliability in identifying BCT candidates after NAC is not well described. We evaluated whether post-NAC breast MRI alone and with mammography accurately identifies BCT candidates.
Methods: In this retrospective study of 111 consecutive breast cancer patients receiving NAC, all had pre- and postchemotherapy MRI, followed by surgery. Posttreatment MRI and mammography results were correlated with surgical outcomes and pathologic response.
Results: Fifty-one of 111 (46 %) patients presented with multicentric or inflammatory breast cancer and were not BCT candidates. The remaining 60 (54 %) were considered BCT candidates after downstaging (mean age: 47 years). All 60 had at least a partial response to NAC and were suitable for BCT on MRI after NAC. Forty-five of 60 (75 %) underwent lumpectomy; 15 of 60 (25 %) chose mastectomy. Forty-one of 45 (91 %) of lumpectomies were successful; 4 of 45 (9 %) required mastectomy. Twelve of 15 (80 %) patients choosing mastectomy could have undergone BCT based on pathology; 3 of 15 (20 %) did require mastectomy. Two of these three patients had extensive microcalcifications on mammogram, indicating the need for mastectomy despite MRI suitability for BCS. MRI alone correctly predicted BCS in 53 of 60 (88 %) patients. MRI plus mammography was correct in 55 of 60 (92 %), although only 9 of 45 (20 %) BCT patients and 4 of 15 (27 %) potentially conservable mastectomy patients had complete pathologic responses.
Conclusions: Posttreatment MRI plus mammography is an accurate method to determine whether BCT is possible after NAC is given to downstage disease.
abstract_id: PUBMED:38422421
MRI in the Setting of Neoadjuvant Treatment of Breast Cancer. Neoadjuvant therapy may reduce tumor burden preoperatively, allowing breast conservation treatment for tumors previously unresectable or requiring mastectomy without reducing disease-free survival. Oncologists can also use the response of the tumor to neoadjuvant chemotherapy (NAC) to identify treatment likely to be successful against any unknown potential distant metastasis. Accurate preoperative estimations of tumor size are necessary to guide appropriate treatment with minimal delays and can provide prognostic information. Clinical breast examination and mammography are inaccurate methods for measuring tumor size after NAC and can over- and underestimate residual disease. While US is commonly used to measure changes in tumor size during NAC due to its availability and low cost, MRI remains more accurate and simultaneously images the entire breast and axilla. No method is sufficiently accurate at predicting complete pathological response that would obviate the need for surgery. Diffusion-weighted MRI, MR spectroscopy, and MRI-based radiomics are emerging fields that potentially increase the predictive accuracy of tumor response to NAC.
abstract_id: PUBMED:37685349
Accuracy of Breast Ultrasonography and Mammography in Comparison with Postoperative Histopathology in Breast Cancer Patients after Neoadjuvant Chemotherapy. Introduction: Nowadays, chemotherapy in breast cancer patients is optionally applied in the neoadjuvant setting, which allows the tumor's response to the chemotherapeutic treatment to be tested in vivo and allows a greater number of patients to benefit from subsequent breast-conserving surgery.
Material And Methods: We compared breast ultrasonography, mammography, and clinical examination (palpation) results with postoperative histopathological findings after neoadjuvant chemotherapy, aiming to determine the most accurate prediction of complete remission and tumor-free resection margins. To this end, clinical and imaging data of 184 patients (193 tumors) with confirmed diagnosis of breast cancer and neoadjuvant therapy were analyzed.
Results: After chemotherapy, tumors could be assessed by palpation in 91.7%, by sonography in 99.5%, and by mammography in 84.5% (chi-square p < 0.0001) of cases. Although mammography proved more accurate in estimating the exact neoadjuvant tumor size than breast sonography in total numbers (136/163 (83.44%) vs. 142/192 (73.96%), n.s.), 29 tumors could be assessed solely by means of breast sonography. A sonographic measurement was feasible in 192 cases (99.48%) post-chemotherapy and in all cases prior to chemotherapy.
Conclusions: We determined a superiority of mammography and breast sonography over clinical palpation in predicting neoadjuvant tumor size. However, neither examination method can predict either pCR or tumor margins with high confidence.
abstract_id: PUBMED:20922360
Importance of mammography, sonography and MRI for surveillance of neoadjuvant chemotherapy for locally advanced breast cancer. Purpose: The aim of this study is to give an overview of the surveillance of response to neoadjuvant chemotherapy in locally advanced breast cancer with mammography, ultrasound and breast MRI.
Material And Methods: The results of a recently presented study on surveillance in the course of chemotherapy with contrast-enhanced MRI are compared with ratings based on mammography and ultrasound.
Results: Contrast-enhanced MRI correlates best with the histological tumor size when compared with mammography and ultrasound. Tumors with a high HER2 score (2+ with positive FISH test or 3+) show a significantly higher response compared to tumors with a lower HER2 score: size p <0.01, maximum enhancement p <0.01 and area under the curve (AUC) p <0.05. Reduction of tumor size and enhancement are complementary parameters and are not correlated to each other (r=0.22).
Discussion: Contrast-enhanced MRI of the breast is a reliable method for quantification of the response to neoadjuvant chemotherapy. The reductions of tumor size and of tumor enhancement are not correlated. Therefore, it may be reasonable to take both aspects for quantification of therapy response into account. Further studies are needed for evaluation of the value of breast MRI as a prognostic factor.
abstract_id: PUBMED:15868065
Breast conservation after neoadjuvant chemotherapy. Background: Tumor downstaging by preoperative neoadjuvant chemotherapy in patients with locally advanced breast tumors allows breast conservation in women who were previously candidates for mastectomy. Nevertheless, lumpectomy success in such cases cannot be fully achieved. The aim of this study was to create a quantitative tool for preoperative evaluation of the success of breast conservation in such patients.
Methods: The study population included 100 consecutive patients with stage II and III breast cancer who were designated for lumpectomy and 19 patients who were designated for mastectomy. All patients received neoadjuvant therapy. Breast-conserving surgery was offered in accordance with clinical and esthetic criteria. Demographic details and clinical, imaging, and pathologic information were collected from medical files. A decision protocol for classifying patients to lumpectomy or mastectomy was built by using the Classification and Regression Trees procedure based on preoperative characteristics.
Results: Three factors were found to be the main predictors for successful breast conservation: absence of diffuse microcalcifications as seen in the pretreatment mammogram, a postchemotherapy tumor size of < 25 mm, and the existence of a circumscribed lesion on mammography.
Conclusions: The use of these criteria as a basis for decision on the type of surgery may decrease the performance of unnecessary procedures.
abstract_id: PUBMED:24623334
Breast conservation therapy after neoadjuvant chemotherapy: optimization of a multimodality approach. Neoadjuvant chemotherapy (NAC) is routinely used in locally advanced breast cancer, but is increasingly used in early stage patients. Even patients with advanced disease can achieve excellent outcomes with breast conservation therapy (BCT) after NAC. The use of NAC followed by BCT is an example of how multimodality therapy can optimize outcomes while limiting morbidity and preserving cosmetic outcomes. Open communication between the multidisciplinary team is crucial to selecting appropriate candidates for this approach.
abstract_id: PUBMED:12196716
Efficacy of 3D-MR mammography for breast conserving surgery after neoadjuvant chemotherapy. Background: One of the main roles of neoadjuvant chemotherapy for breast cancer is to shrink large tumors to increase patient eligibility for breast conserving surgery. Three-dimensional MR mammography (3D-MRM) can detect tumor extension more accurately than mammography and ultrasonography (US). Therefore, the shrinkage pattern observed on 3D-MRM was analyzed with regard to several pathological factors.
Methods: A total of 27 breast cancer cases were examined by 3D-MRM before and after neoadjuvant chemotherapy. The volume reduction and shrinkage patterns were assessed and compared with the pathological diagnosis.
Results: There were two shrinkage patterns. Twelve of 25 evaluable breast cancers (48%) showed a concentric shrinkage pattern while 13 cases (52%) showed a dendritic shrinkage pattern. The cases with concentric shrinkage were good candidates for breast conserving surgery, but tumors showing dendritic shrinkage often had positive margins necessitating mastectomy. Pathologically, tumors with a papillotubular pattern, estrogen receptor (ER) positivity, low nuclear grade and c-erbB 2 negativity tended to show dendritic shrinkage.
Conclusions: 3D-MRM is a useful modality for evaluating whether breast conserving surgery can be safely done in the neoadjuvant setting.
abstract_id: PUBMED:32227407
Role of MRI to Assess Response to Neoadjuvant Therapy for Breast Cancer. The goals of imaging after neoadjuvant therapy for breast cancer are to monitor the response to therapy and facilitate surgical planning. MRI has been found to be more accurate than mammography, ultrasound, or clinical exam in evaluating treatment response. However, MRI may both overestimate and underestimate residual disease. The accuracy of MRI is dependent on tumor morphology, histology, shrinkage pattern, and molecular subtype. Emerging MRI techniques that combine functional information such as diffusion, metabolism, and hypoxia may improve MR accuracy. In addition, machine-learning techniques including radiomics and radiogenomics are being studied with the goal of predicting response on pretreatment imaging. This article comprehensively reviews response assessment on breast MRI and highlights areas of ongoing research. Level of Evidence: 3. Technical Efficacy Stage: 3.
abstract_id: PUBMED:28331654
Evaluation of Neoadjuvant Chemotherapy Response with Dynamic Contrast Enhanced Breast Magnetic Resonance Imaging in Locally Advanced Invasive Breast Cancer. Objective: The reliability of traditional methods such as physical examination, ultrasonography (US) and mammography is limited in determining the type of treatment response in patients receiving neoadjuvant chemotherapy (NAC) for locally advanced breast cancer (LABC). Dynamic contrast-enhanced magnetic resonance imaging (MRI) is gaining popularity in the evaluation of NAC response. This study aimed to compare the NAC response determined by dynamic contrast-enhanced breast MRI in patients with LABC to histopathology, the gold standard, and to evaluate the compatibility of MRI, mammography and US with response types.
Materials And Methods: The US, mammography and MRI findings of 38 patients who received NAC with a diagnosis of locally advanced breast cancer and underwent surgical treatment were retrospectively analyzed and compared to histopathology results. The type of response to treatment was determined according to the Response Evaluation Criteria in Solid Tumors (RECIST) 1.1, as applied to mammography, US and MRI. The relationship between response types as defined by all three imaging modalities and histopathology was evaluated, and the correlation of the response type detected by MRI with the pathological response and the histopathological type of breast cancer was further determined. For statistical analysis, the chi-square, paired t test, correlation and kappa tests were used.
Results: There was a moderate positive correlation between the response type according to pathology and that according to MRI (kappa: 0.63). There was a weak correlation between the response type according to mammography or US and that according to pathology (kappa: 0.2). When the distribution of treatment response by MRI was stratified according to histopathological type, partial response predominated in all histopathological types, similar to the pathologic response. When compared with pathology, MRI detected treatment response accurately in 84.2% of the patients.
Conclusion: Dynamic contrast-enhanced breast MRI appears to be a more effective method than mammography or US in the evaluation of response to neoadjuvant chemotherapy. MRI evaluation of LABC is accepted as the appropriate radiological approach.
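Editor's note: the abstract above summarizes agreement between imaging and pathology with the kappa statistic (kappa 0.63 for MRI versus about 0.2 for mammography or US). The sketch below shows how Cohen's kappa is computed from a confusion matrix of response categories; the matrix entries are hypothetical, since the paper reports only the kappa values themselves.

def cohens_kappa(confusion):
    # Cohen's kappa for a square confusion matrix
    # (rows: one rater/modality, columns: the other).
    n = sum(sum(row) for row in confusion)
    observed = sum(confusion[i][i] for i in range(len(confusion))) / n
    row_totals = [sum(row) for row in confusion]
    col_totals = [sum(col) for col in zip(*confusion)]
    expected = sum(r * c for r, c in zip(row_totals, col_totals)) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical 3x3 matrix of response categories (complete / partial / stable) scored by
# MRI (rows) and pathology (columns); not data from any cited study.
matrix = [
    [8, 2, 0],
    [3, 15, 2],
    [0, 3, 5],
]
print(round(cohens_kappa(matrix), 2))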
abstract_id: PUBMED:15728611
Prospective comparison of mammography, sonography, and MRI in patients undergoing neoadjuvant chemotherapy for palpable breast cancer. Objective: The objective of our study was to determine the relative accuracy of mammography, sonography, and MRI in predicting residual tumor after neoadjuvant chemotherapy for breast cancer as compared with the gold standards of physical examination and pathology.
Subjects And Methods: Forty-one women with stage IIB-III palpable breast cancer were prospectively enrolled in a study investigating the effects of sequential single-agent chemotherapy (doxorubicin followed by paclitaxel or vice versa) on tumor imaging. The study cohort consisted of the first 31 patients (age range, 31-65 years; mean, 45 years) who completed the protocol. All underwent physical examination, mammography, sonography, and MRI before and after receiving each neoadjuvant chemotherapeutic drug. Imaging studies were reviewed by two radiologists using conventional lexicons for lesion analysis, and the findings were compared with clinical response and pathology results.
Results: Complete, partial, and stable clinical response as defined by clinical examination was seen in 15, 14, and two of the 31 patients, respectively. Agreement rates about the degree of response were 32%, 48%, and 55%, respectively, for mammography, sonography, and MRI compared with clinical evaluation and did not differ statistically. Agreement about the rate of response as measured by clinical examination, mammography, sonography, and MRI compared with the gold standard (pathology) was 19%, 26%, 35%, and 71%, respectively. Of the four, MRI agreed with the gold standard significantly more often (p < 0.002 for all three paired comparisons with MRI). When there was disagreement with the gold standard, none of the four exhibited a significant tendency to either under- or overestimate.
Conclusion: MRI appears to provide the best correlation with pathology-better than physical examination, mammography, and sonography-in patients undergoing neoadjuvant chemotherapy. However, MRI may overestimate (6%) or underestimate (23%) residual disease in approximately 29% of the patients (95% confidence interval, 14-48%).
Answer: Yes, MRI and mammography can reliably identify candidates for breast conservation after neoadjuvant chemotherapy (NAC). A retrospective study of 111 consecutive breast cancer patients receiving NAC found that posttreatment MRI plus mammography is an accurate method to determine whether breast-conserving therapy (BCT) is possible after NAC is given to downstage disease. MRI alone correctly predicted BCT in 88% of patients, and MRI plus mammography was correct in 92% of cases (PUBMED:25777093).
Another study supports the notion that MRI is more accurate than mammography, ultrasound, or clinical exam in evaluating treatment response, although it may both overestimate and underestimate residual disease. The accuracy of MRI is dependent on tumor morphology, histology, shrinkage pattern, and molecular subtype (PUBMED:32227407).
Furthermore, MRI has been found to be more accurate than mammography, ultrasound, or clinical exam in predicting residual tumor after neoadjuvant chemotherapy for breast cancer, with MRI agreeing with the gold standard of pathology significantly more often than the other methods (PUBMED:15728611).
However, it is important to note that no method is sufficiently accurate at predicting complete pathological response that would obviate the need for surgery (PUBMED:38422421), and neither mammography nor breast sonography can predict either complete pathological response (pCR) or tumor margins with high confidence (PUBMED:37685349).
In conclusion, while MRI and mammography are reliable tools for identifying candidates for breast conservation after NAC, they are not infallible and should be used in conjunction with other diagnostic methods and clinical judgment to make the most informed decisions regarding patient care. |
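Editor's note: the headline accuracies in this answer are simple proportions, 53/60 for MRI alone and 55/60 for MRI plus mammography, taken from PUBMED:25777093. The sketch below recomputes those proportions and attaches Wilson score 95% confidence intervals; the intervals are an illustrative addition and are not reported in the abstract.

import math

def wilson_interval(successes, n, z=1.96):
    # Wilson score confidence interval for a binomial proportion.
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

for label, correct, total in [("MRI alone", 53, 60), ("MRI plus mammography", 55, 60)]:
    low, high = wilson_interval(correct, total)
    print(f"{label}: {correct}/{total} = {correct / total:.0%}, 95% CI roughly {low:.0%} to {high:.0%}")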
Instruction: Does cyclosporin A have any effect on accelerated atherosclerosis in absence of graft rejection?
Abstracts:
abstract_id: PUBMED:8719466
Does cyclosporin A have any effect on accelerated atherosclerosis in absence of graft rejection? Pathologic and morphometric evaluation in an experimental model. Background: The effects of cyclosporin A on accelerated atherosclerosis were studied in an experimental model of aortic isotransplantation.
Methods: Seventy-six Lewis rats were studied. Forty-one abdominal aortic isografts were performed and divided into five groups: 2-day isografts and 15- and 100-day isografts with and without cyclosporin treatment. The remaining rats were divided into seven groups: 15- and 100-day sham-operated, with and without cyclosporin administration; 15- and 100-day animals with cyclosporin treatment only; and normal controls. Cyclosporin was injected subcutaneously in doses of 10 mg/kg daily for the first 15 days and afterward every other day. Longitudinal sections of the proximal anastomosis and cross sections of the midgraft region were measured with a semiautomatic image-analyzer.
Results: Histologic analysis showed that accelerated atherosclerosis was not observed either in NT2 rats or in nontransplanted animals (T, cyclosporin-treated; NT, untreated; the numbers denote days after isografting). In the 15-day isografts, accelerated atherosclerosis was present in the perianastomotic tract of the recipient aorta in nine of nine NT15 rats, whereas it was found only in three of nine T15 animals (p < 0.02). Histomorphometric analysis showed that accelerated atherosclerosis was less pronounced in the T100 isografts than in the NT100 ones, this difference being significant at the recipient anastomotic side only (p < 0.0005).
Conclusions: The present results support the hypothesis that cyclosporin, at immunosuppressant and nontoxic doses, can delay the onset and progression of accelerated atherosclerosis and that its effects are more significant at the recipient side of the anastomosis where accelerated atherosclerosis begins to develop.
abstract_id: PUBMED:8310510
The beneficial effect of human recombinant superoxide dismutase on acute and chronic rejection events in recipients of cadaveric renal transplants. In a prospective randomized double-blind placebo-controlled trial, the effect of rh-SOD, given in a dose of 200 mg intravenously during surgery to cyclosporine-treated recipients of cadaveric renal allografts, on both acute and chronic rejection events as well as patient and graft survival was investigated by analyzing the patients' charts retrospectively. The results obtained show that rh-SOD exerts a beneficial effect on acute rejection events as indicated by a significant reduction of (1) first acute rejection episodes from 33.3% in controls to 18.5%, as well as (2) early irreversible acute rejection from 12.5% in controls to 3.7%. With regard to long-term results, there was a significant improvement of the actual 4-year graft survival rate in rh-SOD-treated patients to 74% (with a projected half-life of 15 years) compared with 52% in controls (with an extrapolated half-life of 5 years). The beneficial effect of rh-SOD observed in this trial is not fully understood, although one can assume that the effect is related to its antioxidant action on ischemia/reperfusion injury of the renal allograft, thereby potentially reducing the immunogenicity of the graft. In addition and in accordance with the "response-to-injury hypothesis" in the pathogenesis of general atherosclerosis, rh-SOD has the potential to mitigate free radical-mediated reperfusion injury-induced acute endothelial cell damage that potentially may contribute to the process of chronic obliterative rejection arteriosclerosis.
abstract_id: PUBMED:27295076
Calcineurin inhibitors cyclosporine A and tacrolimus induce vascular inflammation and endothelial activation through TLR4 signaling. The introduction of the calcineurin inhibitors (CNIs) cyclosporine and tacrolimus greatly reduced the rate of allograft rejection, although their chronic use is marred by a range of side effects, among them vascular toxicity. In transplant patients, it is proved that innate immunity promotes vascular injury triggered by ischemia-reperfusion damage, atherosclerosis and hypertension. We hypothesized that activation of the innate immunity and inflammation may contribute to CNI toxicity, therefore we investigated whether TLR4 mediates toxic responses of CNIs in the vasculature. Cyclosporine and tacrolimus increased the production of proinflammatory cytokines and endothelial activation markers in cultured murine endothelial and vascular smooth muscle cells as well as in ex vivo cultures of murine aortas. CNI-induced proinflammatory events were prevented by pharmacological inhibition of TLR4. Moreover, CNIs were unable to induce inflammation and endothelial activation in aortas from TLR4(-/-) mice. CNI-induced cytokine and adhesion molecules synthesis in endothelial cells occurred even in the absence of calcineurin, although its expression was required for maximal effect through upregulation of TLR4 signaling. CNI-induced TLR4 activity increased O2(-)/ROS production and NF-κB-regulated synthesis of proinflammatory factors in cultured as well as aortic endothelial and VSMCs. These data provide new insight into the mechanisms associated with CNI vascular inflammation.
abstract_id: PUBMED:8080589
Mitochondrial calcium deposition in association with cyclosporine therapy and myocardial magnesium depletion: a serial histologic study in heart transplant recipients. Intramitochondrial calcification has been reported in heart transplant recipients treated with high-dose cyclosporine. Myocardial magnesium depletion is common in this group and, on the basis of extensive data from animal studies, would be expected to produce similar mitochondrial deposition of calcium. This prospective study investigated the occurrence of such calcification in biopsy specimens obtained serially in nine heart transplant recipients with simultaneous analysis of myocardial magnesium. During a mean follow-up of 32 weeks, 24 biopsy specimens were analyzed from nine patients. Mitochondrial calcium deposition was more marked in biopsy specimens from recipients with magnesium depletion (p < 0.025). Early toxic cyclosporine levels occurred in three recipients associated with a significant but reversible increase in mitochondrial calcification (p < 0.0001). Histologic rejection and use of calcium antagonists did not modify these findings. It is concluded that although cyclosporine toxicity does induce mitochondrial calcium deposition, such deposition can occur in the absence of toxicity should myocardial magnesium depletion be concurrent. Long-term follow-up will establish the clinical sequelae of such observations. However, when taken together with the results of this study, recent reports of attenuation of accelerated graft atherosclerosis by calcium antagonists may suggest that cyclosporine-induced myocardial magnesium depletion may have an etiologic role in this multifactorial process.
abstract_id: PUBMED:25430438
Central modulation of cyclosporine-induced hypertension. Arterial hypertension is a considerable side effect that accompanies the clinical use of immunosuppressant drugs such as cyclosporine (CSA). In addition to promoting graft rejection, uncontrolled hypertension is a major risk factor for atherosclerosis, left ventricular hypertrophy, heart failure, and premature death. Most, if not all, reports that reviewed the hypertensive effect of CSA and underlying mechanisms focused on the roles of peripheral vasoactive machinery, perhaps because of the limited capacity of CSA to diffuse to brain tissues and the lack of any appreciable effect for centrally administered CSA on blood pressure (BP) or central sympathetic outflow. This review focuses primarily on evidence that supports a modulatory role for central neural pathways, acting as a go-between for afferent and efferent sympathetic circuits, in the elicitation of the hypertensive action of CSA. Other areas covered briefly in the review include (1) an outline of peripheral mechanisms that contribute to the hypertensive action of CSA, and (2) comparisons of the BP effects of CSA and other calcineurin-dependent (tacrolimus) and independent (sirolimus) immunosuppressants. The knowledge of these mechanisms, central and peripheral, may permit the identification of new therapeutic strategies against CSA hypertension.
abstract_id: PUBMED:25227328
mTOR inhibitors and dyslipidemia in transplant recipients: a cause for concern? Post-transplant dyslipidemia is exacerbated by mammalian target of rapamycin (mTOR) inhibitors. Early clinical trials of mTOR inhibitors used fixed dosing with no concomitant reduction in calcineurin inhibitor (CNI) exposure, leading to concerns when consistent and marked dyslipidemia was observed. With use of modern concentration-controlled mTOR inhibitor regimens within CNI-free or reduced-exposure CNI regimens, however, the dyslipidemic effect persists but is less pronounced. Typically, total cholesterol levels are at the upper end of normal, or indicate borderline risk, in kidney and liver transplant recipients, and are lower in heart transplant patients under near-universal statin therapy. Of note, it is possible that mTOR inhibitors may offer a cardioprotective effect. Experimental evidence for delayed progression of atherosclerosis is consistent with evidence from heart transplantation that coronary artery intimal thickening and the incidence of cardiac allograft vasculopathy are reduced with everolimus versus cyclosporine therapy. Preliminary data also indicate that mTOR inhibitors may improve arterial stiffness, a predictor of cardiovascular events, and may reduce ventricular remodeling and decrease left ventricular mass through an anti-fibrotic effect. Post-transplant dyslipidemia under mTOR inhibitor therapy should be monitored and managed closely, but unless unresponsive to therapy should not be regarded as a barrier to its use.
abstract_id: PUBMED:8970620
Antihypertensive drug treatment in chronic renal allograft rejection in the rat. Effect on structure and function. To gain insight into the contribution of immunologic and hemodynamic factors in the progressive demise of structure and function in chronic renal allograft dysfunction, we studied the histological changes, the immunostainable glomerular anionic sites, and glomerular capillary hydrostatic pressures of rat renal allografts with chronic rejection. Recipient animals were left untreated, received 8 weeks of treatment with the immunosuppressive drug cyclosporine, or received antihypertensive drugs consisting of the combination of reserpine, hydralazine and hydrochlorothiazide, the angiotensin-converting enzyme inhibitor cilazapril, or the angiotensin II receptor blocker L-158,809. Grafts in untreated recipients developed chronic interstitial inflammation, as well as vascular and glomerular lesions consistent with chronic rejection. These lesions were associated with immunohistochemical loss of the negatively charged heparan sulfate proteoglycan side chain. All treatment regimens decreased the systemic and glomerular capillary pressures and were associated with no loss of function, decreased proteinuria, and a tendency to improved graft function. Cyclosporine prevented all histological manifestations of rejection, and antihypertensive drugs decreased the extent of glomerular mesangiolysis and glomerulosclerosis; L-158,809 and cilazapril also inhibited graft atherosclerosis and tubular atrophy. We conclude that chronic rejection is primarily an immune-mediated process, but hemodynamic and angiotensin II-mediated effects may play a pivotal role in the expression of immune-mediated lesions.
abstract_id: PUBMED:1750082
Inhibition of myointimal hyperplasia and macrophage infiltration by estradiol in aorta allografts. A major cause of organ graft loss after heart transplantation is accelerated atherosclerosis. In this study we used aorta allografts and investigated the effect of estradiol-17 beta treatment on both the degree of myointimal hyperplasia and morphological changes evaluated by light and electron microscopy. Outbred New Zealand white male rabbits (2.7-3.5 kg) were fed cholesterol (0.5%) from one week prior to transplantation, and until sacrifice three weeks later. The donor abdominal aorta was transplanted end-to-end to the right carotid artery of the recipient animals. Immediately following surgery, cyclosporine (10 mg/kg/d s.c.) was administered to prevent graft rejection. The allograft recipients were randomly assigned to one of five groups and treated with cottonseed oil (placebo) or estradiol cypionate at 1, 10, 100, or 1000 micrograms/kg/d i.m. for 3 weeks. The aorta grafts were harvested and fixed for transmission electron microscopy and morphometry. The area of myointimal thickening was calculated as a percent of total vessel area (mean +/- SEM); the control group was 6.6 +/- 0.5% (n = 5). Estradiol treatment significantly inhibited (P less than 0.05) myointimal hyperplasia at all doses. The values were 3.9 +/- 0.6% (n = 6) for 1 microgram/kg/day; 4.4 +/- 0.7% (n = 5) for 10 micrograms/kg/day; 3.5 +/- 0.4% (n = 6) for 100 micrograms/kg/day; and 2.9 +/- 0.1% (n = 3) for 1000 micrograms/kg/day. Electron microscopic evaluation revealed that the four doses of estradiol protected the endothelium from the degenerative changes seen in all aorta allografts from the animals in the control group. Furthermore 10, 100, and 1000 micrograms/kg/day of estradiol prevented the appearance of vacuolized macrophages (foam cells) and also the vacuolization of smooth muscle cells that was observed in the aorta allografts from the control group and the group treated with 1 microgram/kg/day of estradiol. We conclude that the inhibitory effect of estradiol on the development of graft atherosclerosis may be due to inhibition of smooth muscle cell proliferation and preservation of ultrastructurally normal endothelial cells. The inhibitory effect on foam cell production and a concomitant vacuolization of smooth muscle cells may play a lesser role. We suggest that estrogen replacement therapy may be beneficial in postmenopausal women with organ allografts.
abstract_id: PUBMED:2407354
Spectrum and diagnosis of myocardial rejection. Right ventricular endomyocardial biopsy has become the mainstay for the diagnosis of acute cardiac rejection. The intelligent interpretation of endomyocardial biopsy specimens requires knowledge of the artifacts inherent to the procedure as well as specific rejection and nonrejection pathology. Myocardial contraction bands, artifactual tissue spreading, and prior biopsy site changes should not be misinterpreted as evidence of myocyte damage, interstitial edema, or rejection, respectively. The Billingham criteria for acute cardiac rejection (mild, moderate, and severe) are still the most widely utilized, although other schemes for rejection have also shown clinical usefulness. Additionally, there is increasing evidence that some patients may develop a vascular or humoral rejection that may be more difficult to diagnose by endomyocardial biopsy without utilization of special techniques--for example, immunofluorescence. Nonrejection pathology frequently seen post-transplantation includes ischemia or catecholamine effect, interstitial fibrosis, myocardial calcification, cyclosporine-associated endocardial infiltrates (Quilty effect), myocyte hypertrophy, and infections (CMV, toxoplasmosis). Coronary artery disease continues to be the most significant threat to long-term survival. The spectrum of pathologic changes in the vessels ranges from mild intimal thickening to severe concentric intimal fibrosis involving extramural, as well as intramural, coronaries to lesions virtually identical to native atherosclerosis. Patients with diffuse narrowing involving large and small intramyocardial vessels appear to be at greater risk for myocardial infarction, death, or retransplantation than patients with other types of coronary pathology. Although important, these large vessel changes are rarely identified by endomyocardial biopsy.
abstract_id: PUBMED:19522662
Considerations in sirolimus use in the early and late post-transplant periods. Sirolimus is an antiproliferative immunosuppressive agent that inhibits the mammalian target of rapamycin. It is highly effective in preventing acute renal allograft rejection and can be used with either calcineurin inhibitors, antimetabolites or corticosteroids. Early studies in renal transplantation have provided insight into optimal dosing strategies of sirolimus and of concomitant immunosuppressive agents. Familiarity with the adverse effect profile of sirolimus and pharmacokinetic and dynamic interactions with other immunosuppressive agents allows for earlier recognition and better management of sirolimus-related complications. The role of sirolimus in preserving long-term renal function, post-transplant malignancies and in prevention of atherosclerosis is currently being considered.
Answer: Yes, cyclosporin A does have an effect on accelerated atherosclerosis in the absence of graft rejection. According to a study (PUBMED:8719466), cyclosporin A, at immunosuppressant and nontoxic doses, can delay the onset and progression of accelerated atherosclerosis. The study's results support the hypothesis that cyclosporin A's effects are more significant at the recipient side of the anastomosis, where accelerated atherosclerosis begins to develop. This suggests that cyclosporin A may have a protective role against the development of accelerated atherosclerosis even when graft rejection is not a factor. |
Instruction: Thyroid dose during neurointerventional procedures: does lead shielding reduce the dose?
Abstracts:
abstract_id: PUBMED:17533529
Thyroid dose during neurointerventional procedures: does lead shielding reduce the dose? Purpose: To assess radiation dose to the thyroid in patients undergoing neurointerventional procedures and to evaluate dose reduction to the thyroid by lead shielding.
Methods And Materials: A randomized patient study was undertaken to evaluate the dose reduction by thyroid lead shields and assess their practicality in a clinical setting. Sixty-five patients attending for endovascular treatment of arteriovenous malformations (AVMs) and aneurysms were randomized into one of 2 groups: a) No Thyroid Shield and b) Thyroid Lead Shield. Two thermoluminescent dosimeters (TLDs) were placed over the thyroid gland (1 on each side) at constant positions on each patient in both groups. A thyroid lead shield (Pb eq. 0.5 mm) was placed around the neck of patients in the thyroid lead shield group after the neurointerventional radiologist had obtained satisfactory working access above the neck. The total dose-area-product (DAP) value, number and type of digital subtraction angiography (DSA) runs and fluoroscopy time were recorded for all patients.
Results: Of the 72 patients who initially attended for neurointerventional procedures, 7 were excluded due to failure to consent or because of procedures involving access to the external carotid circulation. Of the remaining 65 who were randomized, a further 9 were excluded due to procedure abandonment, unfeasible shield placement, or shield interference with the procedure. Patient demographics included mean age of 47.9 yrs (15-74), F:M=1.4:1. Mean fluoroscopy time was 25.9 min. Mean DAP value was 13,134.8 cGy x cm(2) and mean number of DSA runs was 13.4. The mean relative thyroid doses were significantly different (p < 0.001) between the unshielded (7.23 mSv/cGy2 x 105) and shielded groups (3.77 mSv/cGy2 x 105). A mean thyroid dose reduction of 48% was seen in the shielded group versus the unshielded group.
Conclusion: Considerable doses to the thyroid are incurred during neurointerventional procedures, highlighting the need for increased awareness of patient radiation protection. Thyroid lead shielding yields significant radiation protection, is inexpensive and when not obscuring the field of view, should be used routinely.
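As an aside on the arithmetic behind the reported figure, the 48% reduction follows directly from the two mean relative doses quoted in the Results; a minimal Python sketch of that calculation, using the abstract's reported means with units left as reported:

# Percent dose reduction from the mean relative thyroid doses reported above.
unshielded = 7.23   # mean relative thyroid dose, unshielded group (units as reported)
shielded = 3.77     # mean relative thyroid dose, shielded group

reduction = (unshielded - shielded) / unshielded
print(f"Mean thyroid dose reduction with lead shielding: {reduction:.0%}")  # approx. 48%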
abstract_id: PUBMED:35461784
Evaluation of thyroid radiation dose during abdominal Computed Tomography procedures and dose reduction effectiveness of thyroid shielding. Introduction: During abdominal Computed Tomography (CT) studies, organs in the vicinity receive a dose from scatter radiation. The thyroid is considered an organ at greater risk due to its high radiosensitivity.
Methods: The primary objective of this study was to determine the entrance surface dose (ESD) to the thyroid during abdominal CT studies and to evaluate the efficiency of dose reduction by lead shielding. Calibrated thermoluminescence dosimeter (TLD) chips were used to measure the ESD during 180 contrast-enhanced (CE) and non-contrast-enhanced (NC) abdominal CT studies in the presence and absence of lead shielding.
Results: Thyroid shielding reduces the ESD by 72.3% (0.55 mGy), 86.5% (2.95 mGy) and 64.0% (2.24 mGy) during NC, 3-phase and 4-phase abdominal CT scans. Patient height was identified as a parameter that inversely influenced the thyroid dose, indicating that taller patients receive a lower dose to the thyroid. In addition, scan parameters such as scan time and display field of view (DFOV) positively influence the thyroid dose.
Conclusion: Lead shielding can reduce the external scatter reaching the thyroid region by 64%-87% during scans of regions away from the thyroid, such as abdominal CT. However, the actual dose saving lies between 0.2% and 0.4%, compared to the total effective dose of the whole CT procedure.
Implications For Practice: The thyroid shield can effectively reduce external scatter radiation during abdominal CT procedures. However, the dose saving is insignificant compared to the total effective dose from the whole examination. Therefore, the use of thyroid shielding should be carefully evaluated during CT abdomen procedures.
abstract_id: PUBMED:31047181
How Much Does Lead Shielding during Fluoroscopy Reduce Radiation Dose to Out-of-Field Body Parts? Background: Fluoroscopy technologists routinely place a lead shield between the x-ray table and the patient's gonads, even if the gonads are not directly in the x-ray field. Internal scatter radiation is the greatest source of radiation to out-of-field body parts, but a shield placed between the patient and the x-ray source will not block internal scatter. Prior nonfluoroscopy research has shown that there is a small reduction in radiation dose when shielding the leakage radiation that penetrates through the collimator shutters. The goal of this in vitro study was to determine if there was any radiation dose reduction when shielding leakage radiation during fluoroscopy.
Methods: This was an in vitro comparison study of radiation doses using different collimation and shielding strategies during fluoroscopy. Ionization chamber measurements were obtained during fluoroscopy of an acrylic block with and without collimation and shielding. Ionization chamber readings were taken in-field at 0 cm and out-of-field at 7.5, 10, and 12.5 cm from beam center.
Results: Collimation reduced 87% of the out-of-field radiation dose, and the remaining measurable dose was because of internal scatter. The radiation dose contribution from leakage radiation was negligible, as there was not any measurable radiation dose difference when shielding leakage radiation, with P value of .48.
Conclusion: These results call into question the clinical utility of routinely shielding out-of-field body parts during fluoroscopy.
abstract_id: PUBMED:25741089
Characterization of a lead breast shielding for dose reduction in computed tomography. Objective: Several studies have been published regarding the use of bismuth shielding to protect the breast in computed tomography (CT) scans and, up to the writing of this article, only one publication about barium shielding was found. The present study was aimed at characterizing, for the first time, a lead breast shielding.
Materials And Methods: The percentage dose reduction and the influence of the shielding on quantitative imaging parameters were evaluated. Dose measurements were made on a CT equipment with the aid of specific phantoms and radiation detectors. A processing software assisted in the qualitative analysis evaluating variations in average CT number and noise on images.
Results: The authors observed a reduction in entrance dose by 30% and in CTDIvol by 17%. In all measurements, in agreement with studies in the literature, the use of cotton fiber as a spacer significantly reduced the presence of artifacts on the images. All the measurements demonstrated an increase in the average CT number and noise on the images with the presence of the shielding.
Conclusion: As expected, the data observed with the use of lead shielding were of the same order as those found in the literature about bismuth shielding.
abstract_id: PUBMED:27780674
Effects of shielding on pelvic and abdominal IORT dose distributions. Purpose: To study the impact of shielding elements in the proximity of Intra-Operative Radiation Therapy (IORT) irradiation fields, and to generate graphical and quantitative information to assist radiation oncologists in the design of optimal shielding during pelvic and abdominal IORT.
Method: An IORT system was modeled with BEAMnrc and EGS++ Monte Carlo codes. The model was validated in reference conditions by gamma index analysis against an experimental data set of different beam energies, applicator diameters, and bevel angles. The reliability of the IORT model was further tested considering shielding layers inserted in the radiation beam. Further simulations were performed introducing a bone-like layer embedded in the water phantom. The dose distributions were calculated as 3D dose maps.
Results: The analysis of the resulting 2D dose maps parallel to the clinical axis shows that the bevel angle of the applicator and its position relative to the shielding have a major influence on the dose distribution. When insufficient shielding is used, a hotspot appears near the surface adjacent to the shield. At greater depths, lateral scatter limits the dose reduction attainable with shielding, although the presence of bone-like structures in the phantom reduces the impact of this effect.
Conclusions: Dose distributions in shielded IORT procedures are affected by distinct contributions when considering the regions near the shielding and deeper in tissue: insufficient shielding may lead to residual dose and hotspots, and the scattering effects may enlarge the beam in depth. These effects must be carefully considered when planning an IORT treatment with shielding.
abstract_id: PUBMED:19940673
Radiation dose to the thyroid gland and breast from multidetector computed tomography of the cervical spine: does bismuth shielding with and without a cervical collar reduce dose? Purpose: This study aimed to assess the radiation dose reduction that could be achieved using an in-line bismuth shielding over the thyroid gland and breast and to determine the effect of a cervical spine collar on thyroid dose reduction and image noise when performing computed tomography of the cervical spine using automatic tube current modulation.
Materials And Methods: An anthropomorphic phantom was scanned using a commercially available 64-channel computed tomographic scanner. A standardized trauma cervical spine protocol was used. Scans were obtained with and without a standard cervical spine immobilization collar and with and without bismuth-impregnated thyroid and breast shields. Thermoluminescent dosimeters were placed over the thyroid gland and breasts for each scan. A paired t test was used to determine whether the skin entry dose differed significantly between the shielded and unshielded thyroid and breast and to determine whether placing the thyroid shield over a cervical immobilization collar resulted in a significant dose reduction. Region of interest of pixel values was used to determine image noise.
Results: The average measured skin entry dose for the unshielded thyroid gland was 21.9 mGy (95% confidence interval, 18.9-4.7). With a bismuth shield applied directly over the skin, the dose to the thyroid gland was reduced by 22.5% (P < 0.05). With the bismuth shield applied over the cervical spine collar, the dose reduction to the thyroid was 10.4%, which was not statistically significant (P = 0.16) compared with the dose reduction without the cervical collar. Skin entry dose over the breasts was significant, although they were outside the primary scan range. Without bismuth shielding, the skin entry dose was 1.5 mGy, and with bismuth shielding, the dose was significantly reduced by 36.6% (P < 0.01). Image noise increased most when shielding was used with an immobilization collar.
Conclusions: There is a significant dose reduction to the thyroid gland and breasts when a bismuth shield is placed on the skin. The dose saving achieved by placing the shield on the cervical collar is approximately halved compared with placement on the skin; this reduction did not reach statistical significance and was accompanied by an increase in image noise. Bismuth shields should not be used in combination with cervical immobilization collars.
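The comparison above rests on a paired t test of shielded versus unshielded skin entry doses. A rough sketch of how such a paired comparison could be run is given below; the dose arrays are hypothetical placeholders, not the study's TLD measurements:

import numpy as np
from scipy import stats

# Hypothetical paired skin entry doses (mGy) at matched TLD positions,
# measured without and with a bismuth shield; values are placeholders.
unshielded = np.array([21.5, 22.8, 20.9, 23.1, 21.2])
shielded = np.array([16.7, 17.9, 16.2, 18.0, 16.5])

t_stat, p_value = stats.ttest_rel(unshielded, shielded)
mean_reduction = (unshielded - shielded).mean() / unshielded.mean()
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}, mean reduction = {mean_reduction:.1%}")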
abstract_id: PUBMED:12070727
Patient and occupational dose in neurointerventional procedures. Neurointerventional procedures can involve very high doses of radiation to the patient. Our purpose was to quantify the exposure of patients and workers during such procedures, and to use the data for optimisation. We monitored the coiling of 27 aneurysms, and embolisation of four arteriovenous malformations. We measured entrance doses at the skull of the patient using thermoluminescent dosemeters. An observer logged the dose-area product (DAP), fluoroscopy time and characteristics of the digital angiographic and fluoroscopic projections. We also measured entrance doses to the workers at the glabella, neck, arms, hands and legs. The highest patient entrance dose was 2.3 Gy, the average maximum entrance dose 0.9+/-0.5 Gy. The effective dose to the patient was estimated as 14.0+/-8.1 mSv. Other average values were: DAP 228+/-131 Gy cm(2), fluoroscopy time 34.8+/-12.6 min, number of angiographic series 19.3+/-9.4 and number of frames 267+/-143. The highest operator entrance dose was observed on the left leg (235+/-174 microGy). The effective dose to the operator, wearing a 0.35 mm lead equivalent apron, was 6.7+/-4.6 microSv. Thus, even the highest patient entrance dose was in the lower part of the range in which nonstochastic effects might arise. Nevertheless, we are trying to reduce patient exposure by optimising machine settings and clinical protocols, and by informing the operator when the total DAP reaches a defined threshold. The contribution of neurointerventional procedures to occupational dose was very small.
abstract_id: PUBMED:32651066
Impact of gonad shielding for AP pelvis on dose and image quality on different female sizes: A phantom study. Introduction: In clinical practice, standard AP pelvis protocols are suitable for average-sized patients. However, as the average body size has increased over the past decades, radiographers have had to improve their practice in order to ensure that adequate image quality with minimal radiation dose to the patient is achieved. Gonad shielding has been found to be an effective way to reduce the radiation dose to the ovaries. However, the effect of increased body size, or fat thickness, in combination with gonad shielding is unclear. The goal of the study was to investigate, for AP pelvis examinations, the impact of gonad shielding on SNR (as a measure of image quality) and dose in a phantom of adult female stature with increasing fat thickness.
Methods: An adult Alderson female pelvis phantom was imaged with a variety of fat thickness categories as a representation of increasing BMI. 72 images were acquired using both AEC and manual exposure with and without gonad shielding. The radiation dose to the ovaries was measured using a MOSFET system. The relationship between fat thickness, SNR and dose when the AP pelvis was performed with and without shielding was investigated using the Wilcoxon signed rank test. P-values < 0.05 were considered to be statistically significant.
Results: Ovary dose and SNR remained constant as fat layers were introduced, both with and without gonad shielding.
Conclusion: The ovary dose did not increase with an increase of fat thickness and the image quality was not altered.
Implications For Practice: Based on this phantom study it can be suggested that obese patients can expect the same image quality as average patients while respecting ALARA principle when using adequate protocols.
abstract_id: PUBMED:37940176
Exploring Past to Present Shielding Guidelines. Purpose: To explore the data and supporting evidence for the 2019 statement by the American Association of Physicists in Medicine (AAPM) that recommends limits to the routine use of fetal and gonadal shielding in medical imaging.
Methods: Three researchers searched 5 online databases, selecting articles from scholarly journals and radiology trade publications. Search results were filtered to include literature published from January 1, 2016, to August 9, 2022, to ensure relevance and provide historical background for the 2019 AAPM statement.
Results: The use of patient shielding during medical imaging did not reduce dose, and in certain instances, increased dose received by patients during computed tomography, fluoroscopy, or dental imaging. The use of shielding interfered with technology designed to reduce patient dose, including automatic exposure control and dose modulation. Research showed that errors in shield placement were common and that shields can act as sources of infection or carriers of harmful lead dust.
Discussion: In each article reviewed, a compelling case was made for discontinuing routine patient shielding during radiographic procedures. Serious opposition to the discontinuation of the shielding practice was not found. Opportunities exist for further study into technologists' and the public's understanding of the effects of radiation and technologists' compliance with new shielding policies.
Conclusion: The challenges with properly using shielding, paired with recent technological advancements and a new understanding of radiation protection, have negated the need for contact shielding. This legacy practice can be discontinued in clinical settings, and educational materials for technologists and students should be updated to reflect these changes.
abstract_id: PUBMED:26891787
PAEDIATRIC NECK MULTIDETECTOR COMPUTED TOMOGRAPHY: THE EFFECT OF BISMUTH SHIELDING ON THYROID DOSE AND IMAGE QUALITY. This study investigated the effect of bismuth shielding on thyroid dose and image quality in paediatric neck multidetector computed tomography (MDCT) performed with fixed tube current (FTC) and automatic exposure control (AEC). Four paediatric anthropomorphic phantoms representing the equivalent newborn, 1-, 5- and 10-y-old child were subjected to neck CT using a 16-slice MDCT system. Each scan was performed without and with a single- or double-layered bismuth shield placed on the skin surface above the thyroid. Scans were repeated with cotton spacers 1, 2 and 3 cm thick placed between the skin and shield, to study the effect of skin-to-shielding distance on image noise. Thyroid dose was measured with thermoluminescent dosemeters. The location of the thyroid within the phantom slices was determined by anthropometric data from patients' CT examinations whose body stature closely matched the phantoms. Effective dose (E) was estimated using the dose-length product (DLP) method. Image quality of the resulting CT images was assessed via image noise. Activation of AEC was found to decrease the thyroid dose by 46 % to the 10-y-old phantom subjected to neck CT. When the FTC technique is used, single- and double-layered bismuth shielding was found to reduce the thyroid dose to the same phantom by 35 and 47 %, respectively. The corresponding reductions in AEC-activated scans were 60 and 66 %, respectively. Elevation of shields by 1-, 2- and 3-cm cotton spacers decreased the image noise by 69, 87 and 92 %, respectively, for single-layered FTC, without considerably affecting the thyroid dose. AEC was more effective in thyroid dose reduction than in-plane bismuth shields. Application of cotton spacers had no significant impact on thyroid dose, but significantly decreased the image noise.
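The effective dose estimate above uses the dose-length product (DLP) method, i.e. E is approximated as a conversion coefficient k multiplied by the DLP. A minimal sketch of that formula follows; the coefficient and DLP values are illustrative assumptions, not the figures used in the study:

def effective_dose_from_dlp(dlp_mgy_cm: float, k_msv_per_mgy_cm: float) -> float:
    """Effective dose (mSv) estimated from dose-length product via E = k * DLP."""
    return k_msv_per_mgy_cm * dlp_mgy_cm

# Illustrative inputs: a neck CT DLP of 150 mGy*cm and an assumed
# region- and age-specific conversion coefficient of 0.011 mSv/(mGy*cm).
print(f"E = {effective_dose_from_dlp(150.0, 0.011):.2f} mSv")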
Answer: Yes, lead shielding does reduce the thyroid dose during neurointerventional procedures. A study assessed the radiation dose to the thyroid in patients undergoing neurointerventional procedures and evaluated the effectiveness of dose reduction by using thyroid lead shields. The study involved 65 patients who were randomized into two groups: one with a thyroid lead shield and one without. The results showed that the mean relative thyroid doses were significantly lower in the shielded group compared to the unshielded group, with a mean thyroid dose reduction of 48% observed in the shielded group (PUBMED:17533529). This indicates that thyroid lead shielding is effective in reducing radiation exposure to the thyroid during neurointerventional procedures and should be used routinely when it does not interfere with the procedure. |
Instruction: Do recognizable lifetime eating disorder phenotypes naturally occur in a culturally Asian population?
Abstracts:
abstract_id: PUBMED:25787700
Do recognizable lifetime eating disorder phenotypes naturally occur in a culturally Asian population? A combined latent profile and taxometric approach. Background: We examined whether empirically derived eating disorder (ED) categories in Hong Kong Chinese patients (N = 454) would be consistent with recognizable lifetime ED phenotypes derived from latent structure models of European and American samples.
Method: We performed latent profile analysis (LPA) using indicator variables from data collected during routine assessment, and then applied taxometric analysis to determine whether latent classes were qualitatively versus quantitatively distinct.
Results: Latent profile analysis identified four classes: (i) binge/purge (47%); (ii) non-fat-phobic low-weight (34%); (iii) fat-phobic low-weight (12%); and (iv) overweight disordered eating (6%). Taxometric analysis identified qualitative (categorical) distinctions between the binge/purge and non-fat-phobic low-weight classes, and also between the fat-phobic and non-fat-phobic low-weight classes. Distinctions between the fat-phobic low-weight and binge/purge classes were indeterminate.
Conclusion: Empirically derived categories in Hong Kong showed clear correspondence with recognizable lifetime ED phenotypes. Although taxometric findings support two distinct classes of low-weight EDs, LPA findings also support heterogeneity among non-fat-phobic individuals.
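For readers unfamiliar with the class-enumeration step of latent profile analysis, it is closely related to fitting finite (e.g. Gaussian) mixture models to continuous indicators and comparing solutions by an information criterion. The sketch below illustrates that idea on synthetic data only; it is not the Hong Kong dataset, and the indicator names are hypothetical:

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic stand-in for three continuous indicator variables per patient
# (e.g. binge/purge frequency, weight status, fat-phobia rating).
X = np.vstack([rng.normal(loc, 1.0, size=(150, 3)) for loc in (0.0, 2.5, 5.0)])

# Fit 1-6 class solutions and keep the one with the lowest BIC, mirroring
# the class-enumeration step of a latent profile analysis.
models = {k: GaussianMixture(n_components=k, random_state=0).fit(X) for k in range(1, 7)}
best_k = min(models, key=lambda k: models[k].bic(X))
classes = models[best_k].predict(X)
print("number of classes:", best_k, "class sizes:", np.bincount(classes))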
abstract_id: PUBMED:17879986
Prevalence and correlates of eating disorders among Asian Americans: results from the National Latino and Asian American Study. Objective: Our study examines lifetime and 12-month prevalence estimates of eating disorders in Asian American men and women. We also report on the association between social factors and eating disorders, BMI categories, treatment, and impairment.
Method: We use data from the National Latino and Asian American Study, a nationally representative survey of the U.S. household population of Latino and Asian Americans. Our present study is based on data from the sample of Asian Americans (N = 2,095).
Results: Overall, Asian Americans present with a low prevalence of eating disorders. Only lifetime prevalence for binge eating disorder (BED) is significantly higher for Asian women compared to Asian men. Our results show that age is strongly associated with BED and any binge eating. High current BMI of 30-39.9 and ≥40 is strongly associated with BED and any binge eating. Treatment utilization is low, and respondents reported some role impairment.
Conclusion: Our findings show that despite low prevalence estimates, eating disorders are present among Asian American men and women. Our data suggest that researchers consider more flexibility in defining and classifying eating disorders, to better detect and measure the prevalence of eating disorders among Asian Americans.
abstract_id: PUBMED:25598951
Culturally Sensitive Intervention for Latina Women with Eating Disorders: A Case Study. Objective: We describe cognitive-behavioral therapy for bulimia nervosa (CBT-BN) with a Latina woman that incorporates culturally relevant topics.
Method: A single case report of a 31-year-old monolingual Latina woman with BN describes the application of a couple-based intervention adjunctive to CBT-BN.
Results: The patient reported no binge and purge episodes by session 20 and remained symptom free until the end of treatment (session 26). Improvement was observed in the Eating Disorders Examination (EDE) comparing baseline (EDE=5.74) with post treatment (EDE=1.25).
Conclusions: The case illustrates how cultural adaptations such as including a family member, being flexible on topics and scheduling, and providing culturally relevant interventions can lead to successful completion of a course of therapy and facilitate ongoing interventions to ensure continued recovery.
abstract_id: PUBMED:30351277
"A Full Stomach": Culturally Sensitive Diagnosis of Eating Disorders among Ethiopian Adolescents in Israel. In recent decades there has been a significant increase in the prevalence of eating disorders among non-Western populations. This article aims to address unique sociocultural issues regarding the procedure and dilemmas of the diagnosis process of eating disorders among Ethiopian adolescents in Israel. We will discuss cultural aspects relating to the perception of the disease and the circumstantial contexts relating to this population, such as the process of immigration, integration into Israeli society and issues related to identity and trauma. Diagnostic dilemmas relating to the differences between traditional vs Western perceptions of the illness will be discussed. For illustration, two case studies will be presented. In the discussion, a culturally-sensitive diagnostic model is proposed. Based on Cultural Formulation Interview, this model assumes that the observation of clinical cases from different cultural backgrounds cannot be achieved solely through a western diagnostic prism. Rather, we suggest that the diagnostic process should continue throughout the entire therapeutic process.
abstract_id: PUBMED:32061193
Are bulimia nervosa and binge eating disorder increasing? Results of a population-based study of lifetime prevalence and lifetime prevalence by age in South Australia. Objective: This study aimed to provide updated lifetime prevalence estimates of eating disorders, specifically bulimia nervosa (BN) and binge eating disorder (BED) and investigate changes over time in lifetime prevalence by age.
Method: Two thousand nine hundred seventy-seven participants from South Australia were interviewed in the Health Omnibus Survey. DSM-5 criteria were used for current prevalence, and broad criteria (in accord with the International Statistical Classification of Diseases and Related Health Problems-11 [ICD-11]) were used for lifetime prevalence of BED.
Results: This study found that the lifetime prevalence of BN was 1.21% (95% CI [0.87, 1.67]) and 2.59% (95% CI [2.07, 3.22]) for males and females, respectively, and that lifetime prevalence for BED-broad was 0.74% (95% CI [0.49, 1.11]) and 1.85% (95% CI [1.42, 2.40]) for males and females, respectively, which is higher than reported in previous research. Current prevalence (past 3 months) of BN was 0.40% (95% CI [0.23, 0.70]) and 0.81% (95% CI [0.54, 1.20]) for males and females, respectively, and current prevalence for BED was found to be 0.03 (95% CI [0.01, 0.04]) and 0.20% (95% CI [0.09, 0.44]) for males and females, respectively.
Conclusions: The current study confirmed the moderate community prevalence of BN and BED. BED was found to be less prevalent than BN in the present study, and a significant lifetime prevalence by age effect was found for both. Lifetime prevalence by age indicated that past increases in prevalence may be waning in this century and that overall BN and BED are not increasing in Australia.
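The prevalence figures above are proportions with 95% confidence intervals. A bare-bones sketch of that computation for a single estimate is given below; the counts are invented placeholders, and a real population survey would additionally apply sampling weights:

from statsmodels.stats.proportion import proportion_confint

# Hypothetical example: 36 of 1,400 female respondents report lifetime BN.
cases, n = 36, 1400
prevalence = cases / n
low, high = proportion_confint(cases, n, alpha=0.05, method="wilson")
print(f"lifetime prevalence = {prevalence:.2%} (95% CI {low:.2%} to {high:.2%})")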
abstract_id: PUBMED:20390608
Treating Asian American women with eating disorders: multicultural competency and empirically supported treatment. Disordered eating and body dissatisfaction are occurring among Asian American women, but the vast majority of treatment literature is based on White Western women. Empirically supported treatments are increasingly encouraged for eating disorders, but therapists find little guidance for implementing them in a culturally sensitive manner. This paper reviews eating problems in Asian American women and explores concepts important to cultural competency in therapy. Examples of how cultural adaptations could be made to an empirically supported treatment are illustrated in a case scenario using aspects of C. G. Fairburn's Enhanced Cognitive Behavioral Therapy for Eating Disorders (2008).
abstract_id: PUBMED:29620392
Randomized controlled trial of a culturally-adapted program for Latinas with binge eating. Binge eating disorder (BED) is the most prevalent eating disorder among Latinas. Furthermore, Latinas report more frequent binge eating and higher levels of associated mental health symptoms as compared with non-Latino White women. Research demonstrates that Latinas' eating problems largely go undetected and untreated and that they face numerous barriers to seeking professional help. Cognitive-behavioral therapy (CBT)-based guided self-help (CBTgsh) for binge eating is a more affordable and disseminable intervention than traditional CBT treatment. In this paper, we present the findings from a randomized controlled trial (RCT) of a culturally adapted CBTgsh program in a sample of overweight and obese Latinas with BED, the first RCT of this type with an ethnic minority population. Study participants (N = 40) diagnosed with BED were randomly assigned to the CBTgsh (n = 21) or waitlist (n = 19) condition. Treatment with the CBTgsh program resulted in significant reductions in frequency of binge eating, depression, and psychological distress and 47.6% of the intention-to-treat CBTgsh group were abstinent from binge eating at follow-up. In contrast, no significant changes were found from pre- to 12-week follow-up assessments for the waitlisted group. Results indicate that CBTgsh can be effective in addressing the needs of Latinas who binge eat and can lead to improvements in symptoms. (PsycINFO Database Record (c) 2019 APA, all rights reserved).
abstract_id: PUBMED:34684334
Self-Reported Lifetime History of Eating Disorders and Mortality in the General Population: A Canadian Population Survey with Record Linkage. Eating disorders (EDs) are often reported to have the highest mortality of any mental health disorder. However, this assertion is based on clinical samples, which may provide an inaccurate view of the actual risks in the population. Hence, in the current retrospective cohort study, mortality of self-reported lifetime history of EDs in the general population was explored. The data source was the Canadian Community Health Survey: Mental Health and Well-Being (CCHS 1.2), linked to a national mortality database. The survey sample was representative of the Canadian household population (mean age = 43.95 years, 50.9% female). The survey inquired about the history of professionally diagnosed chronic conditions, including EDs. Subsequently, the survey dataset was linked to the national mortality dataset (for the date of death) up to 2017. Cox proportional hazards models were used to explore the effect of EDs on mortality. The unadjusted-hazard ratio (HR) for the lifetime history of an ED was 1.35 (95% CI 0.70-2.58). However, the age/sex-adjusted HR increased to 4.5 (95% CI 2.33-8.84), which was over two times higher than age/sex-adjusted HRs for other mental disorders (schizophrenia/psychosis, mood-disorders, and post-traumatic stress disorder). In conclusion, all-cause mortality of self-reported lifetime history of EDs in the household population was markedly elevated and considerably higher than that of other self-reported disorders. This finding replicates prior findings in a population-representative sample and provides a definitive quantification of increased risk of mortality in EDs, which was previously lacking. Furthermore, it highlights the seriousness of EDs and an urgent need for strategies that may help to improve long-term outcomes.
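The mortality analysis above rests on Cox proportional hazards regression, where the exponentiated coefficient for ED history gives the adjusted hazard ratio. A toy sketch of such a model on simulated data is shown below; the lifelines package and all numbers are assumptions for illustration, not the CCHS 1.2 linkage data:

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 300
ed_history = rng.binomial(1, 0.1, n)
age = rng.uniform(20, 70, n)
female = rng.binomial(1, 0.5, n)

# Simulated survival times with a higher hazard for ED history and older age.
hazard = 0.01 * np.exp(1.2 * ed_history + 0.02 * (age - 40))
time = rng.exponential(1.0 / hazard)
died = (time <= 15).astype(int)          # administrative censoring at 15 years
time = np.minimum(time, 15)

df = pd.DataFrame({"time": time, "died": died, "ed_history": ed_history,
                   "age": age, "female": female})
cph = CoxPHFitter().fit(df, duration_col="time", event_col="died")
cph.print_summary()   # exp(coef) for ed_history is the adjusted hazard ratio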
abstract_id: PUBMED:31025301
Clinical and socio-demographic features in childhood vs adolescent-onset anorexia nervosa in an Asian population. Purpose: Childhood-onset anorexia nervosa (AN) may be under-recognised and under-treated due to atypical presentations. The aims of this study are: (1) describe features of AN in patients ≤ 18 years in an Asian population; and (2) compare childhood-onset and adolescent-onset AN.
Methods: This study involved a retrospective chart review of patients ≤ 18 years in an Asian population who were treated for anorexia nervosa at the Eating Disorders Service at Singapore General Hospital between Jan 2003 and Dec 2014 (n = 435). Childhood-onset AN was defined as onset < 13 years, while adolescent-onset AN was defined as onset between 13 and 18 years.
Results: Patients were predominantly female (95.4%) and Chinese (83%). The childhood-onset group (8.3%) had mean age of onset 11.5 ± 1.0 years, compared to 15.2 ± 1.6 years for the adolescent-onset group. The childhood and adolescent-onset groups were similar in socio-demographic variables, as well as gender distribution, AN subtype, number of psychiatric comorbidities, family history of psychiatric illness, body image issues and excessive exercise. The childhood-onset group had significantly longer duration of illness prior to presentation (4.75 vs 2.62 years), greater frequency of comorbid obsessive-compulsive disorder (19.4% vs 5.3%) and were more likely to report teasing as a trigger for AN (58.3% vs 31.6%). The childhood-onset group had significantly longer duration of inpatient stay (5.97 vs 3.22 weeks), as well as a greater number of total admissions (2.78 vs 1.37).
Conclusion: Our results suggest that cultural factors may impact the development or identification of AN in an Asian context. Possible delay in diagnosis of childhood-onset AN may lead to a more unfavorable clinical course.
Level Of Evidence: Level V, descriptive study.
abstract_id: PUBMED:20593482
A comparative study of South Asian and non-Asian referrals to an eating disorders service in Leicester, UK. Literature on eating disorders (EDs) among South Asian people in Britain is limited. In an extension of an earlier study, referrals to the Leicester Adult Eating Disorders Service were examined between 1991 and 2005. All South-Asians presenting to the service were compared on age, gender and diagnosis with non-Asians. Female Asian patients diagnosed as bulimia nervosa and ED not otherwise specified were compared on a larger number of variables with the same number of non-Asian patients matched for diagnosis. As there were only six Asian patients with anorexia nervosa they were excluded from this comparison. Only 4.5% of female patients were Asian, as opposed to 13.8% of the local young female population in the most recent UK census. Overall, Asian women were significantly younger than the non-Asians but did not differ significantly with regard to clinical features or treatment variables. Asians are under-represented amongst women presenting to the service. The explanation for this remains uncertain. Overall, Asian sufferers who reach secondary specialist services closely resemble non-Asian patients.
Answer: Yes, recognizable lifetime eating disorder phenotypes do naturally occur in a culturally Asian population. A study conducted in Hong Kong Chinese patients identified four distinct classes of eating disorders that showed recognizable correspondence with lifetime eating disorder phenotypes commonly found in European and American samples. These classes included binge/purge, non-fat-phobic low-weight, fat-phobic low-weight, and overweight disordered eating. The study used latent profile analysis (LPA) and taxometric analysis to determine the presence of qualitatively distinct classes, supporting the existence of two distinct classes of low weight eating disorders (PUBMED:25787700). |
Instruction: Does clinician treatment choice improve the outcomes of manual therapy for nonspecific low back pain?
Abstracts:
abstract_id: PUBMED:15965405
Does clinician treatment choice improve the outcomes of manual therapy for nonspecific low back pain? A metaanalysis. Objective: The purpose of this study is to quantitatively compare outcomes for trials when treating clinicians did, or did not, have the discretion to decide on treatment technique.
Methods: CINAHL, EMBASE, MEDLINE, the Physiotherapy Evidence Database, and the Cochrane Controlled Trials register were searched, supplemented by reference list searching and citation tracking. Ten randomized controlled trials (RCTs) of mobilization and manipulation for nonspecific low back pain (NSLBP) met the inclusion criteria. The effectiveness of manual therapy with and without clinician technique choice was assessed using descriptive statistics and metaanalysis for the outcomes of pain and activity limitation.
Results: In approximately two thirds of the included RCTs, clinicians had choice of treatment technique. There were no systematic differences favoring results for RCTs that did allow clinician choice of treatment technique.
Conclusions: Few quality studies are available, and conclusions on the basis of these data need to be interpreted with caution. However, allowing clinicians to choose from a number of treatment techniques does not appear to have improved the outcomes of these RCTs that have investigated the effect of manual therapy for NSLBP. If tailoring manual therapy treatment to NSLBP patients does positively impact on patient outcomes, this is not yet systematically apparent.
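Methodologically, the pooling step in such a metaanalysis is usually a random-effects model over per-trial effect sizes. The sketch below shows a DerSimonian-Laird pooling of standardized mean differences on made-up numbers; the effect sizes and variances are placeholders, not those of the ten included RCTs:

import numpy as np

# Hypothetical per-trial standardized mean differences for pain (negative
# favours manual therapy) and their sampling variances; placeholders only.
effects = np.array([-0.30, -0.10, 0.05, -0.22, -0.15])
variances = np.array([0.04, 0.06, 0.05, 0.03, 0.07])

# DerSimonian-Laird estimate of between-trial variance (tau^2).
w = 1.0 / variances
q = np.sum(w * (effects - np.average(effects, weights=w)) ** 2)
c = w.sum() - (w ** 2).sum() / w.sum()
tau2 = max(0.0, (q - (len(effects) - 1)) / c)

# Random-effects pooled estimate and 95% confidence interval.
w_re = 1.0 / (variances + tau2)
pooled = np.average(effects, weights=w_re)
se = np.sqrt(1.0 / w_re.sum())
print(f"pooled SMD = {pooled:.2f} (95% CI {pooled - 1.96 * se:.2f} to {pooled + 1.96 * se:.2f})")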
abstract_id: PUBMED:35578230
Manual therapy versus advice to stay active for nonspecific back and/or neck pain: a cost-effectiveness analysis. Background: Low back and neck pain are the most common musculoskeletal disorders worldwide, and imply suffering and substantial societal costs, hence effective interventions are crucial. The aim of this study was to evaluate the cost-effectiveness of manual therapy compared with advice to stay active for working age persons with nonspecific back and/or neck pain.
Methods: The two interventions were: a maximum of 6 manual therapy sessions within 6 weeks, including spinal manipulation/mobilization, massage and stretching, performed by a naprapath (index group); and information from a physician, at 2 occasions within 3 weeks, on the importance of staying active and on how to cope with pain, according to evidence-based advice (control group). A cost-effectiveness analysis with a societal perspective was performed alongside a randomized controlled trial including 409 persons followed for one year, in 2005. The outcomes were health-related Quality of Life (QoL) encoded from the SF-36 and pain intensity. Direct and indirect costs were calculated based on intervention and medication costs and sickness absence data. An incremental cost per health related QoL was calculated, and sensitivity analyses were performed.
Results: The difference in QoL gains was 0.007 (95% CI - 0.010 to 0.023) and the mean improvement in pain intensity was 0.6 (95% CI 0.068-1.065) in favor of manual therapy after one year. Concerning the QoL outcome, the difference in mean cost per person was estimated at - 437 EUR (95% CI - 1302 to 371) and for the pain outcome the difference was - 635 EUR (95% CI - 1587 to 246) in favor of manual therapy. The results indicate that manual therapy achieves better outcomes at lower costs compared with advice to stay active. The sensitivity analyses were consistent with the main results.
Conclusions: Our results indicate that manual therapy for nonspecific back and/or neck pain is slightly less costly and more beneficial than advice to stay active for this sample of working age persons. Since manual therapy treatment is at least as cost-effective as evidence-based advice from a physician, it may be recommended for neck and low back pain. Further health economic studies that may confirm those findings are warranted. Trial registration: Current Controlled Trials ISRCTN56954776. Retrospectively registered 12 September 2006, http://www.isrctn.com/ISRCTN56954776.
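The health-economic comparison above reduces to an incremental cost-effectiveness calculation: the difference in mean cost divided by the difference in effect, with an intervention that is both cheaper and at least as effective said to dominate. A small sketch with made-up cost and QALY figures (not the trial's estimates) follows:

# Incremental cost-effectiveness of manual therapy vs advice to stay active,
# using invented cost and QALY figures purely for illustration.
cost_mt, cost_advice = 900.0, 1337.0     # mean societal cost per person, EUR
qaly_mt, qaly_advice = 0.760, 0.753      # mean QALYs over one year

delta_cost = cost_mt - cost_advice       # negative: manual therapy is cheaper
delta_effect = qaly_mt - qaly_advice     # positive: manual therapy gains QALYs

if delta_cost <= 0 and delta_effect >= 0:
    print("Manual therapy dominates: lower cost and at least as effective.")
else:
    print(f"ICER = {delta_cost / delta_effect:,.0f} EUR per QALY gained")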
abstract_id: PUBMED:31074484
Physiotherapy Based on a Biobehavioral Approach with or Without Orthopedic Manual Physical Therapy in the Treatment of Nonspecific Chronic Low Back Pain: A Randomized Controlled Trial. Objective: To compare the effectiveness of a biobehavioral approach with and without orthopedic manual physical therapy on the intensity and frequency of pain in patients diagnosed with nonspecific chronic low back pain.
Methods: A single-blind randomized controlled trial. Fifty patients were randomly allocated into two groups: one group received biobehavioral therapy with orthopedic manual physical therapy, and the other group received only biobehavioral therapy. Both groups completed a total of eight sessions, with a frequency of two sessions per week. The somatosensory, physical, and psychological variables were recorded at baseline and during the first and third month after initiation of treatment.
Results: In both groups, the treatment was effective, with significant within-group differences over time for all variables. There were no significant differences between groups in intensity or frequency of pain, with a large effect size (>0.80), but there were intragroup differences for both intervention groups at one- and three-month follow-up. There were also no significant differences between groups in the secondary variables during the same follow-up period.
Conclusions: The results of this study suggest that orthopedic manual physical therapy does not increase the effects of a treatment based on biobehavioral therapy in the short or medium term, but these results should be interpreted with caution.
abstract_id: PUBMED:20377854
Does targeting manual therapy and/or exercise improve patient outcomes in nonspecific low back pain? A systematic review. Background: A central element in the current debate about best practice management of non-specific low back pain (NSLBP) is the efficacy of targeted versus generic (non-targeted) treatment. Many clinicians and researchers believe that tailoring treatment to NSLBP subgroups positively impacts on patient outcomes. Despite this, there are no systematic reviews comparing the efficacy of targeted versus non-targeted manual therapy and/or exercise. This systematic review was undertaken in order to determine the efficacy of such targeted treatment in adults with NSLBP.
Method: MEDLINE, EMBASE, Current Contents, AMED and the Cochrane Central Register of Controlled Trials were electronically searched, reference lists were examined and citation tracking performed. Inclusion criteria were randomized controlled trials of targeted manual therapy and/or exercise for NSLBP that used trial designs capable of providing robust information on targeted treatment (treatment effect modification) for the outcomes of activity limitation and pain. Included trials needed to be hypothesis-testing studies published in English, Danish or Norwegian. Methodological quality was assessed using the criteria recommended by the Cochrane Back Review Group.
Results: Four high-quality randomized controlled trials of targeted manual therapy and/or exercise for NSLBP met the inclusion criteria. One study showed statistically significant effects for short-term outcomes using McKenzie directional preference-based exercise. Research into subgroups requires much larger sample sizes than traditional two-group trials, and other included studies showed effects that might be clinically important in size but were not statistically significant with their sample sizes.
Conclusions: The clinical implications of these results are that they provide very cautious evidence supporting the notion that treatment targeted to subgroups of patients with NSLBP may improve patient outcomes. The results of the studies included in this review are too patchy and inconsistent, and the samples investigated are too small, for any recommendation of any treatment in routine clinical practice to be based on these findings. The research shows that adequately powered controlled trials using designs capable of providing robust information on treatment effect modification are uncommon. Considering how central the notion of targeted treatment is to manual therapy principles, further studies using this research method should be a priority for the clinical and research communities.
abstract_id: PUBMED:34824001
Does adding hip strengthening exercises to manual therapy and segmental stabilization improve outcomes in patients with nonspecific low back pain? A randomized controlled trial. Background: The literature is unclear on the need for hip strengthening in persons with low back pain (LBP).
Objectives: To investigate the effectiveness of hip strengthening exercises when added to manual therapy and lumbar segmental stabilization in patients with chronic nonspecific LBP.
Methods: Seventy patients with chronic nonspecific LBP were randomly assigned to either the manual therapy and lumbar segmental stabilization group or the manual therapy and lumbar segmental stabilization plus specific hip strengthening group. A 10 cm visual analogue scale and the Roland-Morris Questionnaire were the primary clinical outcome measures at baseline, at the end of treatment (posttreatment), and 6 and 12 months posttreatment. Hip strength and kinematics were measured as secondary outcomes.
Results: While within-group improvements in pain, disability, and hip extensor strength occurred in both groups, there were no significant between-group differences at posttreatment or follow-ups. Mean differences in changes in pain level between groups at posttreatment and at 6- and 12-month follow-up were 0.5 points (95% confidence interval [CI]: -0.5, 1.5), 0.3 points (95% CI: -0.9, 1.5), and 0.0 points (95% CI: -1.1, 1.1), respectively. The mean differences in changes in disability were 0.8 points (95% CI: -1.3, 2.7), 0.0 points (95% CI: -2.4, 2.4), and 0.4 points (95% CI: -2.0, 2.8), respectively. Finally, we did not observe any between-group differences for any of the other outcomes at any timepoint.
Conclusion: The addition of specific hip strengthening does not appear to result in improved clinical outcomes for patients with nonspecific LBP.
abstract_id: PUBMED:26693216
Manual Therapy by General Medical Practitioners for Nonspecific Low Back Pain in Primary Care: The ManRück Study Protocol of a Clinical Trial. Background: Nonspecific low back pain (LBP) is a common reason for accessing primary care. Manual therapy (MT) may be an effective treatment, but data from clinical studies including relevant subgroups and clinical settings are sparse. The objective of this article is to describe the protocol of a study that will measure whether an MT protocol provided by general medical practitioners will lead to a faster pain reduction in patients with nonspecific LBP than does standard medical care.
Methods/design: The study is an experimental pre-/postintervention design. The intervention consists of add-on MT treatment by general medical practitioners who have received MT training but are otherwise inexperienced in mobilization techniques. Participating general medical practitioners (n = 10) will consecutively recruit and treat patients before and after their training, serving as their own internal controls. The primary end point is a combined outcome assessing change in pain score over days 0 to 3 and time until pain is reduced by 2 points on an 11-point numeric pain scale and painkiller use is stopped. Secondary outcomes are patients' functional capacities assessed using a questionnaire, amount of sick leave taken, patient satisfaction, and referrals for further treatment.
Trial Registration: German clinical trials register: DRKS-ID DRKS00003240.
abstract_id: PUBMED:27858681
The effect of visceral osteopathic manual therapy applications on pain, quality of life and function in patients with chronic nonspecific low back pain. Background: The efficacy of osteopathic manual therapy (OMT) applications on chronic nonspecific low back pain (LBP) has been demonstrated. However, visceral applications, which are an important part of OMT techniques, have not been included in those studies.
Objective: The study's objective was to determine the effect of OMT including visceral applications on the function and quality of life (QoL) in patients with chronic nonspecific LBP.
Design: The study was designed with a simple method of block randomization.
Methods: Thirty-nine patients with chronic nonspecific LBP were included in the study. The OMT group consisted of 19 patients to whom OMT and exercise methods were applied. The visceral osteopathic manual therapy (vOMT) group consisted of 20 patients to whom visceral applications were applied in addition to the applications carried out in the other group. Ten sessions were performed over a two-week period. Pain (VAS), function (Oswestry Index) and QoL (SF-36) assessments were carried out before the treatment and at the sixth week of treatment.
Results: Both treatments were found to be effective for pain and function, and for the physical function, pain, general health and social function sub-parameters of QoL. vOMT was effective on all QoL sub-parameters (p < 0.05). Comparing the groups, the energy and physical-limitation QoL scores were higher in the vOMT group (p < 0.05).
Conclusion: Visceral applications on patients with non-specific LBP gave positive results together with OMT and exercise methods. We believe that visceral fascial limitations, which we think cause limitations and pain in the lumbar segment, should be taken into consideration.
abstract_id: PUBMED:36292479
Manual Physiotherapy Combined with Pelvic Floor Training in Women Suffering from Stress Urinary Incontinence and Chronic Nonspecific Low Back Pain: A Preliminary Study. Stress urinary incontinence (SUI) represents one of the most common subtypes of urinary incontinence (UI) reported by women. Studies have shown an association of SUI with nonspecific low back pain (NSLBP). The primary aim of the present study was to explore the long-term effects of a combined treatment of manual techniques and pelvic floor muscle (PFM) training in women suffering from SUI associated with NSLBP. The secondary aim was to evaluate which manual approach combined with PFM rehabilitation is more effective in improving symptoms related to SUI and in reducing pain perception related to NSLBP. Twenty-six patients suffering from SUI associated with chronic NSLBP were randomly assigned to one of two groups: the postural rehabilitation group (PRg) or the spinal mobilization group (SMg). Both groups performed a manual approach combined with PFM rehabilitation. All patients were evaluated before the treatment (T0), after 10 sessions (T1) and after 30 days from the end of the treatment (T2). The results showed an improvement in both groups in all of the investigated outcomes. Combining manual therapy and PFM training within the same therapy session may be useful for improving both SUI and NSLBP and increasing the quality of life of women suffering from SUI associated with NSLBP.
abstract_id: PUBMED:26406198
Evidence based orthopaedic manual therapy for patients with nonspecific low back pain: An integrative approach. Background And Objectives: Orthopaedic manual therapy (OMT) should be based not only on the best available evidence but also on patient values and clinician expertise. Low back pain (LBP) is a complex issue, as the majority of people who suffer from LBP cannot be given a specific diagnosis based on imaging studies, but kinematic analyses appear to be useful to determine dysfunctional patterns. In physical therapy, various forms of OMT are currently used to manage LBP and there is growing evidence for its use. The underlying principles of OMT are to treat neuro-musculo-skeletal disorders, the aim of which is to reduce pain, as well as improve movement and function. Manual physical therapists use a range of treatment approaches including passive techniques ("hands on") as well as different active techniques ("hands off") and communication skills. Systems of stratification are available for classification of people with LBP into specific sub-groups (with sub-group specific OMT intervention). This approach has been shown to be more efficient than generic treatment, although subgroups are not mutually exclusive. Various mechanisms of action are reported in the literature concerning OMT effects. These effects may be biomechanical, neurophysiological and psychological. Moreover, it is essential that the treatment, regardless of the concept of OMT, is carried out on the basis of a systematic and valid clinical examination protocol aimed at correctly classifying LBP. The use of pain provocation tests during combined movement examination provides confidence that examination findings are valid and can therefore be confidently used in clinical practice to manage patients. The integrative approach presented in this article is a mix of previously developed classification systems (i.e. based on pain mechanisms, prognosis, and treatment responsiveness) and new tools, such as kinematic analyses for LBP and a novel validated combined movements examination. Conclusion: As LBP is a complex and multidimensional problem, the integrative approach may help clinicians and researchers to better understand and then to treat patients with non-specific LBP. The efficacy of OMT treatments using an integrative approach in specific patient subgroups should be objectively analyzed using validated kinematic analyses in future studies.
abstract_id: PUBMED:32023423
Comparison of Three Manual Therapy Techniques as Trigger Point Therapy for Chronic Nonspecific Low Back Pain: A Randomized Controlled Pilot Trial. Objectives: This pilot study aimed to compare the efficacy of manual pressure release (MPR), strain counterstrain (SCS), and integrated neuromuscular inhibition technique (INIT) in the management of chronic nonspecific low back pain (LBP). Design: Single-blind, randomized, controlled pilot trial. Setting: Neurosurgery clinic. Subjects: Forty-eight patients (46 women; mean age, 35.47 ± 10.58 years) diagnosed with chronic nonspecific LBP who had at least one active myofascial trigger point (MTrP) in the quadratus lumborum, iliocostalis lumborum, gluteus maximus, gluteus medius, and gluteus minimus muscles were included. Interventions: Patients received a standard home exercise program in addition to the MPR technique (MPR group), the SCS technique (SCS group), or INIT (INIT group) for 12 sessions (2 days/week for 6 weeks). Outcome measures: The primary outcome was the visual analog scale (VAS). The secondary outcomes were MTrP examination, pressure pain threshold, lumbar active range of motion, Oswestry Disability Index (ODI), Beck depression inventory, and state-trait anxiety inventory. Results: There was no significant difference between groups in the percentage of deactivated MTrPs after the 1st and 12th sessions (p > 0.05), but the percentage of deactivated MTrPs was lower in the MPR group than in the other groups. The overall group-by-time interaction for the repeated-measures analysis of variance was not significant for primary or secondary outcomes (p > 0.05), but the improvement in VAS-activity and the ODI was slightly better in the SCS group compared with the other groups. Conclusions: These preliminary findings, which offer a glimpse into the clinical effectiveness of the three manual therapy techniques (MPR, SCS, and INIT) rather than statistical significance, indicated that SCS or INIT might provide slightly better improvement in pain during activity, deactivation of MTrPs, and disability related to pain in chronic nonspecific LBP.
Answer: Based on the available abstracts, allowing clinicians to choose from a number of treatment techniques does not appear to have improved the outcomes of randomized controlled trials (RCTs) that have investigated the effect of manual therapy for nonspecific low back pain (NSLBP) (PUBMED:15965405). The study mentioned in this abstract did not find systematic differences favoring results for RCTs that allowed clinician choice of treatment technique. Therefore, based on these data, clinician treatment choice does not seem to significantly improve the outcomes of manual therapy for NSLBP. However, it is important to note that the conclusions should be interpreted with caution due to the limited number of quality studies available.
Instruction: Does hepatic graft weight affect the reduction of spleen size after living donor liver transplantation?
Abstracts:
abstract_id: PUBMED:20430196
Does hepatic graft weight affect the reduction of spleen size after living donor liver transplantation? Objective: Our aim was to evaluate whether the reduction in spleen volume at 6 months after living donor liver transplantation (LDLT) was affected by the size of the right lobe liver graft.
Patients And Methods: We analyzed 87 adult recipients of right lobe liver grafts who displayed preoperative splenomegaly: spleen volume >500 cm3 by computed tomographic (CT) volumetry. The recipients were grouped according to the graft weight-to-recipient weight ratio: GRWR>1 versus GRWR<1. The 2 groups were compared at 6 months after LDLT for mean postoperative spleen volume (SV) and mean SV change ratio, which was defined as [(SVpreop - SV6m)/SVpreop] x 100%, where SVpreop and SV6m represent SV calculated from CT examinations preoperatively and at 6-month follow-up after LDLT, respectively.
Results: The GRWR ranged from 0.77 to 1.66. There were 53 patients with GRWR>1 and 34 with GRWR<1. Our analysis showed significant hepatic graft volume regeneration and SV reduction at 6 months after LDLT. The SV change ratio correlated weakly but significantly with the transplanted liver graft weight (Pearson correlation coefficient, r=0.274; P<.009). In the GRWR>1 group, the mean postoperative SV was 632±220 cm3, a decrease of 32±11%. In the GRWR<1 group, the mean postoperative SV was 598±188 cm3, a decrease of 34±13%. There were no differences in mean postoperative SV or mean SV change ratio between the 2 groups.
Conclusion: LDLT using a right lobe graft resulted in a significant reduction of SV at 6 months after surgery, but there were no significant differences between recipients who received different sized right lobe liver grafts.
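For illustration, here is a minimal Python sketch of how the two ratios used in the abstract above (the SV change ratio and the GRWR) would be computed; the volumes and weights are hypothetical examples, not data from PUBMED:20430196.

# Hypothetical values, chosen only to illustrate the formulas
sv_preop_cm3 = 650.0        # preoperative spleen volume on CT volumetry
sv_6m_cm3 = 440.0           # spleen volume at 6-month follow-up
graft_weight_g = 700.0      # weight of the transplanted right lobe graft
recipient_weight_kg = 65.0

# SV change ratio = [(SVpreop - SV6m) / SVpreop] x 100%
sv_change_ratio = (sv_preop_cm3 - sv_6m_cm3) / sv_preop_cm3 * 100   # about 32.3% here

# GRWR = graft weight / recipient body weight, expressed as a percentage
grwr = graft_weight_g / (recipient_weight_kg * 1000) * 100          # about 1.08% here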
abstract_id: PUBMED:31640624
Dual graft living donor liver transplantation - a case report. Background: Living donor liver transplantation (LDLT) has emerged as an equally viable option to deceased donor liver transplant for treating end stage liver disease patients. Optimising the recipient outcome without compromising donor safety is the primary goal of LDLT. Achieving the adequate graft to recipient weight ratio (GRWR) is important to prevent small for size syndrome which is an uncommon but potentially lethal complication of LDLT.
Case Presentation: Here we describe a successful dual-lobe liver transplant in a 32-year-old patient with ethanol-related end-stage liver disease. A right lobe graft without the middle hepatic vein and a left lateral sector graft were transplanted successfully. The recipient and both donors recovered uneventfully.
Conclusion: Dual lobe liver transplant is a feasible strategy to achieve adequate GRWR without compromising donor safety.
abstract_id: PUBMED:32709407
Hepatic vein in living donor liver transplantation. Right lobe living donor liver transplantation (LDLT) is a major development in adult LDLT that has significantly increased the donor pool by providing larger graft size and by decreasing risk of small-for-size graft syndrome. However, right lobe anatomy is complex, not only from the inflow but also from the outflow perspective. Outflow reconstruction is one of the key requirements of a successful LDLT and venous drainage of the liver graft is just as important as hepatic inflow for the integrity of graft function. Outflow complications may cause acute graft failure which is not always easy to diagnose. The right lobe graft consists of two sections and three hepatic venous routes for drainage that require reconstruction. In order to obtain a congestion free graft, several types of vascular conduits and postoperative interventions are needed to assure an adequate venous allograft drainage. This review described the anatomy, functional basis and the evolution of outflow reconstruction in right lobe LDLT.
abstract_id: PUBMED:17061990
Liver graft-to-recipient spleen size ratio as a novel predictor of portal hyperperfusion syndrome in living donor liver transplantation. Portal hyperperfusion in a small-size liver graft is one cause of posttransplant graft dysfunction. We retrospectively analyzed the potential risk factors predicting the development of portal hyperperfusion in 43 adult living donor liver transplantation recipients. The following were evaluated: age, body weight, native liver disease, spleen size, graft size, graft-to-recipient weight ratio (GRWR), total portal flow, recipient portal venous flow per 100 g graft weight (RPVF), graft-to-recipient spleen size ratio (GRSSR) and portosystemic shunting. Spleen size was directly proportional to the total portal flow (p = 0.001) and RPVF (p = 0.014). Graft hyperperfusion (RPVF flow > 250 mL/min/100 g graft) was seen in eight recipients. If the GRSSR was < 0.6, 5 of 11 cases were found to have graft hyperperfusion (p = 0.017). The presence of portosystemic shunting was significant in decreasing excessive RPVF (p = 0.059). A decrease in portal flow in the hyperperfused grafts was achieved by intraoperative splenic artery ligation or splenectomy. Spleen size is a major factor contributing to portal flow after transplant. The GRSSR is associated with posttransplant graft hyperperfusion at a ratio of < 0.6.
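As a rough sketch of the two indices discussed in the abstract above, the Python fragment below computes recipient portal venous flow per 100 g of graft (RPVF) and a graft-to-recipient spleen size ratio (GRSSR); the input values are hypothetical, and the exact definition of GRSSR used here (graft size divided by spleen size in comparable units) is an assumption rather than a detail stated in the abstract.

# Hypothetical values, not data from PUBMED:17061990
portal_flow_ml_min = 1800.0   # total recipient portal venous flow
graft_weight_g = 650.0
spleen_size = 1200.0          # spleen size in the same units as graft size (assumption)

rpvf = portal_flow_ml_min / graft_weight_g * 100   # mL/min per 100 g graft, ~277 here
grssr = graft_weight_g / spleen_size               # ~0.54 here

# Thresholds reported in the abstract: RPVF > 250 mL/min/100 g defines hyperperfusion,
# and GRSSR < 0.6 was associated with it.
hyperperfusion_suspected = rpvf > 250 or grssr < 0.6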
abstract_id: PUBMED:31762248
Impact of Non-middle Hepatic Vein Reconstruction on the Result of Low Graft-to-recipient Weight Ratio Living Donor Liver Transplantation. Objective: To analyze the minimum graft-to-recipient weight ratio (GRWR) required for living donor liver transplantation (LDLT) without middle hepatic vein branch (MHVT) reconstruction.
Methods: We retrospectively collected the clinical data and outcomes of 303 LDLT patients over 16 years, from 2001 to 2017. The minimum GRWR for LDLT without MHVT reconstruction was analyzed using propensity score matching (PSM).
Results: With PSM analysis, no significant differences were observed in postoperative complications, small-for-size syndrome (SFSS), inpatient time, liver function, or coagulation function, but there were significant differences in 1-year, 3-year, and 5-year survival between the MHVT reconstruction and non-reconstruction groups. Patients with MHVT reconstruction had better short-term and long-term survival than those without reconstruction.
Conclusion: For LDLT patients without MHVT reconstruction, the GRWR should be greater than 0.86%; for patients with MHVT reconstruction, a GRWR between 0.5% and 0.6% is acceptable.
abstract_id: PUBMED:34095727
Which is better to use "body weight" or "standard liver weight", for predicting small-for-size graft syndrome after living donor liver transplantation? Aim: Little evidence about whether to apply graft-to-recipient body weight ratio (GRWR) or graft weight to standard liver weight (GW/SLW) for graft selection has been published. The aim of the present study was to clarify the importance of the correct use of GRWR and GW/SLW for selecting graft according to the recipients' physique in living donor liver transplantation (LDLT).
Methods: Data were collected for 694 recipients who underwent LDLT between 1997 and 2020.
Results: One of the marginal graft types, meeting GW/SLW ≥ 35% but GRWR < 0.7%, was used more often in male recipients with higher body mass index (BMI), and the other, meeting GRWR ≥ 0.7% but GW/SLW < 35%, was used more often in female recipients with lower BMI. In the cohort with BMI > 30 kg/m2, recipients with GRWR < 0.7% had a significantly higher incidence of small-for-size graft syndrome (SFSS) compared to those with GRWR ≥ 0.7% (P = 0.008, 46.2% vs 5.9%), whereas the cutoff of GW/SLW < 35% could not differentiate. In contrast, in the cohort with BMI ≤ 30 kg/m2, recipients with GW/SLW < 35% also had a significantly higher incidence of SFSS (P = 0.013, 16.9% vs 9.4%). Multivariate analysis showed that GRWR < 0.7% [odds ratio (OR) 14.145, P = 0.048] was the independent risk factor for SFSS in obese recipients, and GW/SLW < 35% [OR 2.685, P = 0.002] was the independent risk factor in non-obese recipients.
Conclusion: Proper use of the formulas for calculating GRWR and GW/SLW in choosing graft according to recipient BMI is important, not only to meet metabolic demand for avoiding SFSS but also to ameliorate donor shortages.
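The BMI-dependent cutoffs reported in the abstract above can be read as a simple selection rule; the sketch below (Python, with a hypothetical function name and illustrative inputs) is one way to express it and is not the authors' implementation.

def flags_sfss_risk(graft_weight_g, recipient_weight_kg, standard_liver_weight_g, bmi_kg_m2):
    # GRWR: graft weight as a percentage of recipient body weight
    grwr = graft_weight_g / (recipient_weight_kg * 1000) * 100
    # GW/SLW: graft weight as a percentage of the standard liver weight
    gw_slw = graft_weight_g / standard_liver_weight_g * 100
    # Cutoffs from PUBMED:34095727: GRWR < 0.7% flags risk in obese recipients
    # (BMI > 30 kg/m2); GW/SLW < 35% flags risk in non-obese recipients.
    if bmi_kg_m2 > 30:
        return grwr < 0.7
    return gw_slw < 35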
abstract_id: PUBMED:37243395
Optimal graft size in pediatric living donor liver transplantation: How are children different from adults? Background: Pediatric liver transplantation is an established treatment for end-stage liver disease in children. However, it still poses relevant challenges, such as optimizing graft selection according to recipient size. Unlike adults, small children tolerate large-for-size grafts, whereas insufficient graft volume might become an issue in adolescents when graft size is disproportionate.
Methods: Graft-size matching strategies over time were examined in pediatric liver transplantation. This review traces the measures and principles put in place to prevent large-for-size or small-for-size grafts, from small children to adolescents, through a literature review and an analysis of data from the National Center for Child Health and Development, Tokyo, Japan.
Results: Reduced left lateral segment (LLS; Couinaud's segments II and III) was widely applicable for small children less than 5 kg with metabolic liver disease or acute liver failure. Graft survival was significantly worse in adolescents with an LLS graft if the actual graft-to-recipient weight ratio (GRWR) was less than 1.5%, owing to the small-for-size graft; children, particularly adolescents, may therefore require a larger GRWR than adults to prevent small-for-size syndrome. The suggested ideal graft selections in pediatric LDLT are: reduced LLS, recipient body weight (BW) < 5.0 kg; LLS, 5.0 kg ≤ BW < 25 kg; left lobe (Couinaud's segments II, III, IV with middle hepatic vein), 25 kg ≤ BW < 50 kg; right lobe (Couinaud's segments V, VI, VII, VIII without middle hepatic vein), 50 kg ≤ BW.
Conclusion: Age-appropriate and BW-appropriate strategies of graft selection are crucial to secure an excellent outcome in pediatric living donor liver transplantation.
abstract_id: PUBMED:32897590
Small-for-size graft, small-for-size syndrome and inflow modulation in living donor liver transplantation. The extended application of living donor liver transplantation (LDLT) has revealed the problem of graft size mismatching called "small-for-size syndrome (SFSS)." The initial trials to resolve this problem involved increasing the procured graft size, from left to right, and even extending to include a right lobe graft. Clinical cases of living right lobe donations have been reported since then, drawing attention to the risks of increasing the liver volume procured from a living donor. However, not only other modes of increasing graft volume (GV) such as auxiliary or dual liver transplantation, but also control of the increased portal pressure caused by a small-for-size graft (SFSG), such as a porto-systemic shunt or splenectomy and optimal outflow reconstruction, have been trialed with some positive results. To establish an effective strategy for transplanting SFSG and preventing SFSS, it is essential to have precise knowledge and tactics to evaluate graft quality and GV, when performing these LDLTs with portal pressure control and good venous outflow. Thus, we reviewed the updated literature on the pathogenesis of and strategies for using SFSG.
abstract_id: PUBMED:35419409
Liver Graft-to-Spleen Volume Ratio as a Useful Predictive Factor of the Outcomes in Living Donor Liver Transplantation: A Retrospective Study. Background: In living donor liver transplantation (LDLT), a graft-to-recipient weight ratio (GRWR) <0.8% is an important index for predicting portal hypertension, which may induce graft small-for-size syndrome (SFSS). Recently, the value of the graft-to-spleen volume ratio (GSVR) for predicting portal hypertension has been reported, but whether portal hypertension can be prevented without splenectomy remains disputed. We aimed to identify the contribution of GSVR to portal venous pressure (PVP) and outcomes in LDLT without simultaneous splenectomy.
Methods: A retrospective study was designed. After excluding patients who underwent splenectomy, 246 LDLT recipients between 2016 and 2020 were categorized into a low GSVR group and a normal GSVR group. Preoperative, intraoperative, and postoperative data were collected, and we then explored how different GSVR values contributed to portal hypertension after reperfusion.
Results: Based on the first quartile of the distribution, two groups were defined: low GSVR (<1.03 g/mL) and normal GSVR (>1.03 g/mL). Among the donors, there were significant differences in donor age, graft type, liver size, GRWR, and GSVR (P < 0.05). Among surgical factors, there were significant differences in blood loss and CRBC transfusion (P < 0.05). Low GSVR had a significant relationship with ascites drainage and portal venous flow after LDLT (P < 0.05). Low GSVR was also associated with worse platelet count, international normalized ratio (INR), and portal venous velocity. Kaplan-Meier analysis showed a significant difference between the two groups, with worse recipient survival in the low GSVR group compared with the normal GSVR group (P < 0.05).
Conclusions: Without splenectomy, low GSVR was an important predictor of portal hypertension and impaired graft function after LDLT.
abstract_id: PUBMED:29892174
Living Donor Liver Transplantation Using Small-for-Size Grafts: Does Size Really Matter? Background: In living donor liver transplantation (LDLT), graft-to-recipient weight ratio (GRWR) > 0.8% is perceived as the critical graft size. This lower limit of GRWR (0.8%) has been challenged over the last decade owing to the surgical refinements, especially related to inflow and outflow modulation techniques. Our aim was to compare the recipient outcome in small-for-size (GRWR < 0.8) versus normal-sized grafts (GRWR > 0.8) and to determine the risk factors for mortality when small-for-size grafts (SFSG) were used.
Methods: Data of 200 transplant recipients and their donors were analyzed over a period of two years. Routine practice of harvesting middle hepatic vein (MHV) or reconstructing anterior sectoral veins into neo-MHV was followed during LDLT. Outcomes were compared in terms of mortality, hospital stay, ICU stay, and occurrence of various complications such as functional small-for-size syndrome (F-SFSS), hepatic artery thrombosis (HAT), early allograft dysfunction (EAD), portal vein thrombosis (PVT), and postoperative sepsis. A multivariate analysis was also done to determine the risk factors for mortality in both the groups.
Results: Recipient and donor characteristics, intraoperative variables, and demographical data were comparable in both the groups (GRWR < 0.8 and GRWR ≥ 0.8). Postoperative 90-day mortality (15.5% vs. 22.85%), mean ICU stay (10 vs. 10.32 days), and mean hospital stay (21.4 vs. 20.76 days) were statistically similar in the groups. There was no difference in postoperative outcomes such as occurrence of SFSS, HAT, PVT, EAD, or sepsis between the groups. Thrombosis of MHV/reconstructed MHV was a risk factor for mortality in grafts with GRWR < 0.8 but not in those with GRWR > 0.8.
Conclusion: Graft survival after LDLT using a small-for-size right lobe graft (GRWR < 0.8%) is as good as with normal grafts. However, patency of anterior sectoral outflow by MHV or reconstructed MHV is crucial to maintain graft function when SFSG are used.
Answer: The reduction in spleen volume after living donor liver transplantation (LDLT) does not appear to be significantly affected by the size of the right lobe liver graft. A study analyzed 87 adult recipients of right lobe liver grafts with preoperative splenomegaly and grouped them according to the graft weight-to-recipient weight ratio (GRWR) being greater than 1 or less than 1. The study found that while there was a significant reduction in spleen volume (SV) at 6 months after LDLT, there were no significant differences in mean postoperative SV and mean SV change ratios between the two groups with different sized right lobe liver grafts. The SV change ratio did weakly but significantly correlate with the transplanted liver graft weight, but the correlation was not strong enough to suggest a significant impact of graft weight on spleen size reduction (PUBMED:20430196). |
Instruction: Is pelvic clinical evaluation still relevant?
Abstracts:
abstract_id: PUBMED:19620032
Is pelvic clinical evaluation still relevant? Objectives: To establish the sensitivity and specificity of clinical pelvimetry using X-ray pelvimetry as the reference exam, and to assess the reproducibility of clinical pelvimetry by comparing two physicians' findings.
Patients And Methods: During a 29-month longitudinal study, we compared the clinical pelvimetry findings of 114 patients with the results of X-ray pelvimetry. Reproducibility was assessed by comparing the pelvic measurements performed by two physicians for 40 patients. Based on a caesarean rate of 7% due to cephalopelvic disproportion in our department, the required sample size was estimated at between 114 and 200 patients. The statistical tests used were the chi-square test of independence, the kappa coefficient, the t-test, and discriminant analysis. A p-value of 0.05 was considered significant.
Results: The sensitivity of clinical pelvimetry was 83.7% and the specificity 88.9%. The positive predictive value was 97.6% and the negative predictive value 50%. The best concordances were obtained for the vertical diameter of Michaelis, the Trillat diameter, and the assessment of the ischial spines and innominate lines. Clinical pelvimetry was reproducible, with a kappa value of 0.62.
Conclusion: Owing to its good specificity and satisfactory reproducibility, clinical pelvimetry remains relevant when X-ray pelvimetry is unavailable, as is often the case in resource-limited African countries.
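The diagnostic metrics reported above follow directly from a 2x2 table of clinical pelvimetry against the X-ray reference. The counts below are hypothetical, chosen only so the results land close to the reported values; they are not the study's actual table.

# Hypothetical 2x2 counts: clinical pelvimetry vs. X-ray pelvimetry (reference)
tp, fp = 82, 2    # abnormal clinical exam: truly abnormal / truly normal pelvis
fn, tn = 16, 16   # normal clinical exam: truly abnormal / truly normal pelvis

sensitivity = tp / (tp + fn)   # ~0.837
specificity = tn / (tn + fp)   # ~0.889
ppv = tp / (tp + fp)           # ~0.976
npv = tn / (tn + fn)           # 0.50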
abstract_id: PUBMED:37737436
International Urogynecology consultation chapter 2 committee 3: the clinical evaluation of pelvic organ prolapse including investigations into associated morbidity/pelvic floor dysfunction. Introduction And Hypothesis: This manuscript from Chapter 2 of the International Urogynecology Consultation (IUC) on Pelvic Organ Prolapse (POP) reviews the literature involving the clinical evaluation of a patient with POP and associated bladder and bowel dysfunction.
Methods: An international group of 11 clinicians performed a search of the literature using pre-specified search MESH terms in PubMed and Embase databases (January 2000 to August 2020). Publications were eliminated if not relevant to the clinical evaluation of patients or did not include clear definitions of POP. The titles and abstracts were reviewed using the Covidence database to determine whether they met the inclusion criteria. The manuscripts were reviewed for suitability using the Specialist Unit for Review Evidence checklists. The data from full-text manuscripts were extracted and then reviewed.
Results: The search strategy found 11,242 abstracts, of which 220 articles were used to inform this narrative review. The main themes of this manuscript were the clinical examination, and the evaluation of comorbid conditions including the urinary tract (LUTS), gastrointestinal tract (GIT), pain, and sexual function. The physical examination of patients with pelvic organ prolapse (POP) should include a reproducible method of describing and quantifying the degree of POP and only the Pelvic Organ Quantification (POP-Q) system or the Simplified Pelvic Organ Prolapse Quantification (S-POP) system have enough reproducibility to be recommended. POP examination should be done with an empty bladder and patients can be supine but should be upright if the prolapse cannot be reproduced. No other parameters of the examination aid in describing and quantifying POP. Post-void residual urine volume >100 ml is commonly used to assess for voiding difficulty. Prolapse reduction can be used to predict the possibility of postoperative persistence of voiding difficulty. There is no benefit of urodynamic testing for assessment of detrusor overactivity as it does not change the management. In women with POP and stress urinary incontinence (SUI), the cough stress test should be performed with a bladder volume of at least 200 ml and with the prolapse reduced either with a speculum or by a pessary. The urodynamic assessment only changes management when SUI and voiding dysfunction co-exist. Demonstration of preoperative occult SUI has a positive predictive value for de novo SUI of 40% but most useful is its absence, which has a negative predictive value of 91%. The routine addition of radiographic or physiological testing of the GIT currently has no additional value for a physical examination. In subjects with GIT symptoms further radiological but not physiological testing appears to aid in diagnosing enteroceles, sigmoidoceles, and intussusception, but there are no data on how this affects outcomes. There were no articles in the search on the evaluation of the co-morbid conditions of pain or sexual dysfunction in women with POP.
Conclusions: The clinical pelvic examination remains the central tool for evaluation of POP and a system such as the POP-Q or S-POP should be used to describe and quantify. The value of investigation for urinary tract dysfunction was discussed and findings presented. The routine addition of GI radiographic or physiological testing is currently not recommended. There are no data on the role of the routine assessment of pain or sexual function, and this area needs more study. Imaging studies alone cannot replace clinical examination for the assessment of POP.
abstract_id: PUBMED:27467865
Defecography: a still needful exam for evaluation of pelvic floor diseases. The aim of this discussion is to describe what defecography is, how it should be performed, and what can be seen, and to present the main pathophysiological conditions of the pelvic floor and anorectal region that can be studied with this method, along with its advantages over other screening techniques. Defecography is a contrast radiological examination that highlights structural and functional pelvic floor diseases. After preliminary ileocolic opacification by giving the patient radiopaque contrast, static images are acquired first (at rest, during maximum voluntary contraction of the pelvic muscles, and while straining), followed by dynamic sequences (during evacuation), allowing a complete evaluation of the function of the anorectal region and the pelvic floor. Defecography is easy to perform, widely available, and economical, and is carried out under conditions that reproduce the patient's symptoms as realistically as possible. It can still be considered a reliable, first-choice technique in many patients in whom clinical assessment alone is not sufficient and MRI is not possible or necessary.
abstract_id: PUBMED:30017491
Review of pelvic and perineal neuromuscular fatigue: Evaluation and impact on therapeutic strategies. Background: Pelvic floor fatigue is known by its clinical consequences (fecal incontinence, stress urinary incontinence, pelvic organ prolapse), but there are still few studies on the subject.
Objective: This article presents an overview of the current knowledge of pelvic and perineal fatigue, focusing on its assessment and consequences in terms of evaluation and therapeutic strategies, to propose an evaluation that could be routinely performed.
Methods: We performed a systematic review of the literature in MEDLINE via PubMed and Cochrane Library databases by using the keywords pelvic floor, muscular fatigue, physiopathology, stress urinary incontinence, pelvic organ prolapse, fecal incontinence, physical activity, and pelvic rehabilitation. We included reports of systematic reviews and retrospective and prospective studies on adult humans and animals in English or French published up to April 2018 with no restriction on start date.
Results: We selected 59 articles by keyword search, 18 by hand-search, and 3 specific guidelines (including the 2009 International Continence Society recommendations); 45 articles were ultimately included, 14 of which are described in the Results section (2 reviews covering 6 and 20 studies, and 12 prospective observational or cross-over studies of 5 to 317 patients, including 1 in animals). Perineal fatigue can be assessed by direct assessment, by electromyography and spectral analysis, and during urodynamics. Because pelvic floor fatigue is not assessed routinely, this fatigability is not always identified and is often falsely considered to be simple pelvic floor weakness, as suggested by some rehabilitation methods that weaken the pelvic floor instead of strengthening it.
Conclusion: Pelvic floor fatigue is not evaluated enough on a routine basis and the assessment is heterogeneous. A better knowledge of pelvic floor fatigue by standardized routine evaluation could lead to targeted therapeutic strategies.
abstract_id: PUBMED:21056359
Evaluation of chronic pelvic and perineal pain Objective: To describe the tools allowing evaluation of chronic pelvic and perineal pain and to define their indications.
Material And Methods: A review of the literature was performed by searching the Medline database (National Library of Medicine). Search terms were either Medical subject heading (MeSH) keywords (pelvic pain, pain measurement, prostatitis, quality of life) or terms derived from the title or abstract. Search terms were used alone or in combinations by using the "AND" operator. The literature search was conducted from 1990 to the present time.
Results: Various rating scales and questionnaires constitute useful tools for clinical evaluation of the patient's chronic pain. They cannot replace the clinical interview and cannot be used to establish a diagnosis. The main clinical assessment tools include severity scales, body diagrams, descriptive assessment (sensory and affective), evaluation of the impact on sleep, activities of daily living, quality of life and behaviour, and assessment of mood and anxiety. In addition to these general tools, specific questionnaires have been developed in the fields of interstitial cystitis/painful bladder syndrome and chronic prostatitis/chronic pelvic pain syndrome. These specific questionnaires are designed for evaluation of the severity of symptoms, assessment of the disability related to the symptoms and the impact on quality of life, and follow-up of the course of symptoms and the response to treatment.
Conclusion: Rapid and easy to use tools are essential in routine clinical practice. The recommended assessment tools are VAS (visual analogue scale) or numerical severity scales, body diagrams and brief questionnaires such as the Questionnaire sur la Douleur de Saint-Antoine (QDSA) (Saint-Antoine pain questionnaire) or Questionnaire Concis sur les Douleurs (QCD) (validated French translation of the Brief Pain Inventory).
abstract_id: PUBMED:19932409
Clinical approach and office evaluation of the patient with pelvic floor dysfunction. Pelvic floor disorders are common health issues for women and have a great impact on quality of life. These disorders can present with a wide spectrum of symptoms and anatomic defects. This article reviews the clinical approach and office evaluation of patients with pelvic floor disorders, including pelvic organ prolapse, urinary dysfunction, anal incontinence, sexual dysfunction, and pelvic pain. The goal of treatment is to provide as much symptom relief as possible. After education and counseling, patients may be candidates for non-surgical or surgical treatment, and expectant management.
abstract_id: PUBMED:29241877
Adult onset Still's Disease. Adult onset Still's disease is a rare systemic condition at the crossroads between auto-inflammatory syndromes and autoimmune diseases, with considerable heterogeneity in terms of clinical presentation, evolution and severity. This article reviews the main advances and lesser known aspects of this entity related to its clinical spectrum (atypical cutaneous lesions, unusual manifestations, macrophage activation syndrome, disease phenotypes), the emerging controversy around its association with delayed malignancy, the search for new biomarkers for its diagnosis, evaluation of prognosis (clinical factors, prognostic indexes and biomarkers to identify patients at risk of severe organ failure or life-threatening complications), and the determinants in the choice of biological treatment.
abstract_id: PUBMED:26605450
An Evaluation Model for a Multidisciplinary Chronic Pelvic Pain Clinic: Application of the RE-AIM Framework. Objective: Chronic pelvic pain (CPP) is a prevalent, debilitating, and costly condition. Although national guidelines and empiric evidence support the use of a multidisciplinary model of care for such patients, such clinics are uncommon in Canada. The BC Women's Centre for Pelvic Pain and Endometriosis was created to respond to this need, and there is interest in this model of care's impact on the burden of disease in British Columbia. We sought to create an approach to its evaluation using the RE-AIM (Reach, Efficacy, Adoption, Implementation, Maintenance) evaluation framework to assess the impact of the care model and to guide clinical decision-making and policy.
Methods: The RE-AIM evaluation framework was applied to consider the different dimensions of impact of the BC Centre. The proposed measures, data sources, and data management strategies for this mixed-methods approach were identified.
Results: The five dimensions of impact were considered at individual and organizational levels, and corresponding indicators were proposed to enable integration into existing data infrastructure to facilitate collection and early program evaluation.
Conclusion: The RE-AIM framework can be applied to the evaluation of a multidisciplinary chronic pelvic pain clinic. This will allow better assessment of the impact of innovative models of care for women with chronic pelvic pain.
abstract_id: PUBMED:17363445
Magnetic resonance-based female pelvic anatomy as relevant for maternal childbirth injury simulations. The objectives of the study are to review the female pelvic floor anatomy relevant to childbirth simulations, to discuss available methods for clinical evaluation of female pelvic floor function, and to review the variation in pelvic floor changes after vaginal childbirth. A high-resolution magnetic resonance (MR) data set from an asymptomatic nullipara was used to illustrate the MR anatomy of the female pelvic floor. Manual segmentation was performed and three-dimensional reconstructions of the pelvic floor structures were generated, which were used to illustrate the 3D anatomy of the pelvic floor. Variation in the post partum appearance of the levator ani muscles is illustrated using other 2D MR data sets, which depict unilateral and bilateral disruptions in the puborectalis portion of levator ani, as well as shape variations, which may be seen in the post partum levator. The clinical evaluation of the pelvic floor is then reviewed. The female pelvis is composed of a bony scaffold, from which the pelvic floor muscles (obturator internus, levator ani) are suspended. The rectum fits in a midline groove in the levator ani. The vagina is suspended across the midline, attaching bilaterally to the obturator and levator ani. The vagina supports the bladder and urethra. MR studies have demonstrated disruptions in levator ani attachments after vaginal childbirth. Such disruptions are rare in women who have not given birth vaginally. Changes to the neuromuscular apparatus of the pelvic floor can also be demonstrated after vaginal delivery. The combination of childbirth-related anatomic and neurological injury to the pelvic floor may be associated with pelvic floor dysfunction (PFD). These changes are difficult to study in vivo but may be studied through simulations. Appropriate consideration of clinical anatomy is important in these simulations.
abstract_id: PUBMED:37795440
Oncological and reproductive outcomes of conization combined with pelvic node evaluation in patients with early-stage cervical cancer: a systematic review and meta-analysis. Objective: This study aims to preliminarily assess the oncological and reproductive outcomes of fertility preservation treatment using conization combined with pelvic node evaluation in young patients with early-stage cervical cancer (ECC) through meta-analysis.
Methods: In this meta-analysis, we analyzed studies published in PubMed, Embase, the Cochrane Central Register of Controlled Trials (CENTRAL), the International Clinical Trials Registry Platform (ICTRP), and ClinicalTrials.gov that appeared in our search from inception to 07/02/2023.
Results: There were 17 relevant studies with a total of 620 patients included, of which 444 patients received conization combined with pelvic node evaluation. The combined pregnancy rate was 45.4% (95% CI: 0.34-0.57), the combined live birth rate was 33.9% (95% CI: 0.26-0.42), the combined miscarriage rate was 4.8% (95% CI: 0.02-0.092), the combined preterm delivery rate was 5.1% (95% CI: 0.02-0.092), and the combined recurrence rate was 1.9% (95% CI: 0.006-0.035), which did not significantly differ from that of patients who received radical surgery (OR: 0.689, 95% CI: 0.506-0.938).
Conclusion: Cervical conization combined with pelvic lymph node evaluation for fertility preservation in young ECC patients can achieve oncological outcomes similar to radical surgery while improving pregnancy success rates and preserving postoperative fertility. In summary, fertility preservation treatment using cervical conization combined with pelvic lymph node evaluation may be considered as a viable option for young ECC patients with strong fertility preservation desire, resulting in better pregnancy and live birth outcomes.
Systematic Review Registration: https://www.crd.york.ac.uk/PROSPERO/#myprospero, identifier PROSPERO (CRD42023423432).
Answer: Yes, pelvic clinical evaluation is still relevant. The study by PUBMED:19620032 found that clinical pelvimetry has good specificity (88.9%) and satisfactory reproducibility (kappa value of 0.62), making it relevant especially in resource-limited settings where X-ray pelvimetry may not be available. Similarly, the International Urogynecology Consultation (PUBMED:37737436) emphasizes that the clinical pelvic examination remains the central tool for the evaluation of pelvic organ prolapse (POP) and recommends using systems like the POP-Q or S-POP to describe and quantify POP. Defecography is also highlighted as a relevant and reliable technology for evaluating pelvic floor diseases when clinical evaluation alone is insufficient and MRI is not possible or necessary (PUBMED:27467865). Moreover, the review of pelvic and perineal neuromuscular fatigue (PUBMED:30017491) suggests that pelvic floor fatigue is not evaluated enough on a routine basis and calls for a standardized routine evaluation to lead to targeted therapeutic strategies. The evaluation of chronic pelvic and perineal pain (PUBMED:21056359) and the clinical approach to pelvic floor dysfunction (PUBMED:19932409) further support the relevance of pelvic clinical evaluation in diagnosing and managing various conditions. Additionally, the systematic review and meta-analysis on fertility preservation treatment in early-stage cervical cancer (PUBMED:37795440) indicate that cervical conization combined with pelvic lymph node evaluation can achieve oncological outcomes similar to radical surgery while preserving fertility, underscoring the importance of pelvic evaluation in oncological and reproductive outcomes. |
Instruction: Outcomes with coronary artery bypass graft surgery versus percutaneous coronary intervention for patients with diabetes mellitus: can newer generation drug-eluting stents bridge the gap?
Abstracts:
abstract_id: PUBMED:24939927
Outcomes with coronary artery bypass graft surgery versus percutaneous coronary intervention for patients with diabetes mellitus: can newer generation drug-eluting stents bridge the gap? Background: Coronary artery bypass graft surgery (CABG) compared with percutaneous coronary intervention (PCI) reduces mortality in patients with diabetes mellitus. However, prior trials compared CABG with balloon angioplasty or older generation stents, and it is not known if the gap between CABG and PCI can be reduced by newer generation drug-eluting stents.
Methods And Results: PUBMED/EMBASE/CENTRAL search for randomized trials comparing mode of revascularization in patients with diabetes mellitus. Primary outcome was all-cause mortality. Secondary outcomes were myocardial infarction, repeat revascularization, and stroke. Mixed treatment comparison analyses were performed using a random-effects Poisson regression model. Sixty-eight randomized trials that enrolled 24 015 diabetic patients with a total of 71 595 patient-years of follow-up satisfied our inclusion criteria. When compared with CABG (reference rate ratio [RR]=1.0), PCI with paclitaxel-eluting stent (RR=1.57 [1.15-2.19]) or sirolimus-eluting stent (RR=1.43 [1.06-1.97]) was associated with an increase in mortality. However, PCI with cobalt-chromium everolimus-eluting stent (RR=1.11 [0.67-1.84]) was not associated with a statistically significant increase in mortality. When compared with CABG, there was excess repeat revascularization with PCI, which progressively declined from plain old balloon angioplasty (341% increase) to bare metal stent (218% increase) to paclitaxel-eluting stent (81% increase) and to sirolimus-eluting stent (47% increase). However, for PCI with cobalt-chromium everolimus-eluting stent (RR=1.31 [0.74-2.29]), the excess repeat revascularization was not statistically significant although the point estimate favored CABG. CABG was associated with numerically higher stroke.
Conclusions: In patients with diabetes mellitus, evidence from indirect comparison shows similar mortality between CABG and PCI using cobalt-chromium everolimus-eluting stent. CABG was associated with numerically excess stroke and PCI with cobalt-chromium everolimus-eluting stent with numerically increased repeat revascularization. This hypothesis needs to be tested in future trials.
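The pooled estimates above come from a mixed treatment comparison fitted with a random-effects Poisson model, which is beyond a short example, but the underlying quantity — a rate ratio built from event counts and patient-years — can be sketched as follows. The counts are hypothetical and the confidence interval is the crude, unadjusted large-sample approximation, not the trial-level analysis behind PUBMED:24939927.

import math

# Hypothetical event counts and patient-years of follow-up
events_pci, years_pci = 120, 9500.0     # e.g., deaths after PCI with an older-generation DES
events_cabg, years_cabg = 80, 10000.0   # deaths after CABG (reference arm)

rate_ratio = (events_pci / years_pci) / (events_cabg / years_cabg)   # ~1.58 here

# Crude 95% CI on the log scale
se_log_rr = math.sqrt(1 / events_pci + 1 / events_cabg)
ci_low = math.exp(math.log(rate_ratio) - 1.96 * se_log_rr)           # ~1.19 here
ci_high = math.exp(math.log(rate_ratio) + 1.96 * se_log_rr)          # ~2.10 here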
abstract_id: PUBMED:29217002
The DELTA 2 Registry: A Multicenter Registry Evaluating Percutaneous Coronary Intervention With New-Generation Drug-Eluting Stents in Patients With Obstructive Left Main Coronary Artery Disease. Objectives: The aim of this study was to evaluate clinical outcomes of unprotected left main coronary artery percutaneous coronary intervention (PCI) with new-generation drug-eluting stents in a "real world" population.
Background: PCI of the unprotected left main coronary artery is currently recommended as an alternative to coronary artery bypass grafting (CABG) in selected patients.
Methods: All consecutive patients with unprotected left main coronary artery stenosis treated by PCI with second-generation drug-eluting stents were analyzed in this international, all-comers, multicenter registry. The results were compared with those from the historical DELTA 1 (Drug Eluting Stent for Left Main Coronary Artery) CABG cohort using propensity score stratification. The primary endpoint was the composite of death, myocardial infarction (MI), or stroke at the median time of follow-up.
Results: A total of 3,986 patients were included. The mean age was 69.6 ± 10.9 years, diabetes was present in 30.8%, and 21% of the patients presented with acute MI. The distal left main coronary artery was involved in 84.6% of the lesions. At a median of 501 days (≈17 months) of follow-up, the occurrence of the primary endpoint of death, MI, or cerebrovascular accident was lower in the PCI DELTA 2 group compared with the historical DELTA 1 CABG cohort (10.3% vs. 11.6%; adjusted hazard ratio: 0.73; 95% confidence interval: 0.55 to 0.98; p = 0.03). Of note, an advantage of PCI was observed with respect to cerebrovascular accident (0.8% vs. 2.0%; adjusted hazard ratio: 0.37; 95% confidence interval: 0.16 to 0.86; p = 0.02), while an advantage of CABG was observed with respect to target vessel revascularization (14.2% vs. 2.9%; adjusted hazard ratio: 3.32; 95% confidence interval: 2.12 to 5.18; p < 0.0001).
Conclusions: After a median follow-up period of 17 months, PCI with new-generation drug-eluting stents was associated with an overall low rate of the composite endpoint of death, MI, or cerebrovascular accident.
abstract_id: PUBMED:22988961
Racial differences in long-term outcomes after percutaneous coronary intervention with paclitaxel-eluting coronary stents. Objectives: To assess the influence of race on long-term outcomes following percutaneous coronary intervention (PCI) with paclitaxel-eluting stents (PES).
Background: Data on the influence of race on long-term outcomes following PCI with drug-eluting stents are limited because of severe underrepresentation of minority populations in randomized trials.
Methods: We compared 5-year outcomes of 2,301 whites, 127 blacks, and 169 Asians treated with PES in the TAXUS IV, V, and ATLAS trials. Outcomes were adjusted using a propensity score logistic regression model with 1:4 matching.
Results: Blacks were more likely than whites to be female, have a history of hypertension, diabetes mellitus, congestive heart failure, and stroke, but were less likely to have prior coronary artery disease. Compared with whites, Asians were younger, more likely to be male, have stable angina, and left anterior descending disease, and less likely to have silent ischemia, previous coronary artery bypass surgery, prior coronary artery disease, diabetes mellitus, peripheral vascular disease, and to receive glycoprotein IIb/IIIa inhibitors. Despite higher antiplatelet compliance, the adjusted 5-year rates of myocardial infarction (15.4% vs. 5.4%, P < 0.001) and stent thrombosis (5.6% vs. 1.1%, P = 0.002) were higher in blacks than whites. Despite lower antiplatelet compliance, Asians had no differences in myocardial infarction and stent thrombosis compared with whites. Mortality and revascularization rates were similar between the three groups.
Conclusions: The long-term risk of major thrombotic events after PCI with PES was higher in blacks, but not Asians, compared with whites. The mechanisms underlying these racial differences warrant further investigation.
abstract_id: PUBMED:34610874
Long-Term Outcomes After Percutaneous Coronary Intervention With Second-Generation Drug-Eluting Stents or Coronary Artery Bypass Grafting for Multivessel Coronary Disease. More evidence is required with respect to the comparative effectiveness of percutaneous coronary intervention (PCI) with second-generation drug-eluting stents (DESs) versus coronary artery bypass grafting (CABG) in contemporary clinical practice. This prospective observational registry-based study compared the outcomes of 6,647 patients with multivessel disease who underwent PCI with second-generation DES (n = 3,858) or CABG (n = 2,789) between January 2006 and June 2018 and for whom follow-up data were available for at least 2 to 13 years (median 4.8). The primary outcome was a composite of death, spontaneous myocardial infarction, or stroke. Baseline differences were adjusted using propensity scores and inverse probability weighting. In the overall cohort, there were no significant between-group differences in the adjusted risks for the primary composite outcome (hazard ratio [HR] for PCI vs CABG 1.03, 95% confidence interval [CI] 0.86 to 1.25, p = 0.73) and all-cause mortality (HR 0.95, 95% CI 0.76 to 1.20, p = 0.68). This relative treatment effect on the primary outcome was similar in patients with diabetes (HR 1.15, 95% CI 0.91 to 1.46, p = 0.25) and without diabetes (HR 0.95, 95% CI 0.73 to 1.22, p = 0.67) (p for interaction = 0.24). The adjusted risk of the primary outcome was significantly greater after PCI than after CABG in patients with left main involvement (HR 1.39, 95% CI 1.01 to 1.90, p = 0.044), but not in those without left main involvement (HR 0.94, 95% CI 0.76 to 1.16, p = 0.56) (p = 0.03 for interaction). In this prospective real-world long-term registry, we observed that the risk for the primary composite of death, spontaneous myocardial infarction, or stroke was similar between PCI with contemporary DES and CABG.
abstract_id: PUBMED:35350870
Trends in Clinical Practice and Outcomes After Percutaneous Coronary Intervention of Unprotected Left Main Coronary Artery. Background The use of percutaneous coronary intervention (PCI) to treat unprotected left main coronary artery disease has expanded rapidly in the past decade. We aimed to describe nationwide trends in clinical practice and outcomes after PCI for left main coronary artery disease. Methods and Results Patients (n=4085) enrolled in the SCAAR (Swedish Coronary Angiography and Angioplasty Registry) as undergoing PCI for left main coronary artery disease from 2005 to 2017 were included. A count regression model was used to analyze time-related differences in procedural characteristics. The 3-year major adverse cardiovascular and cerebrovascular event rate defined as death, myocardial infarction, stroke, and repeat revascularization was calculated with the Kaplan-Meier estimator and Cox proportional hazard model. The number of annual PCI procedures grew from 121 in 2005 to 589 in 2017 (389%). The increase was greater for men (479%) and individuals with diabetes (500%). Periprocedural complications occurred in 7.9%, decreasing from 10% to 6% during the study period. A major adverse cardiovascular and cerebrovascular event occurred in 35.7% of patients, falling from 45.6% to 23.9% (hazard ratio, 0.56; 95% CI, 0.41-0.78; P=0.001). Radial artery access rose from 21.5% to 74.2% and intracoronary diagnostic procedures from 14.0% to 53.3%. Use of bare-metal stents and first-generation drug-eluting stents fell from 19.0% and 71.9%, respectively, to 0, with use of new-generation drug-eluting stents increasing to 95.2%. Conclusions Recent changes in clinical practice relating to PCI for left main coronary artery disease are characterized by a 4-fold rise in procedures conducted, increased use of evidence-based adjunctive treatment strategies, intracoronary diagnostics, newer stents, and more favorable outcomes.
abstract_id: PUBMED:25127976
Prolonged effectiveness of coronary artery bypass surgery versus drug-eluting stents in diabetics with multi-vessel disease: an updated systematic review and meta-analysis. Background: Currently, the appropriateness of percutaneous coronary intervention (PCI) using drug-eluting stents (DES) versus coronary artery bypass grafting (CABG) for patients with diabetes (DM) and multi-vessel disease (MVD) is uncertain due to limited evidence from few randomised controlled trials (RCTs). We aimed to compare the clinical effectiveness of CABG versus PCI-DES in DM-MVD patients using an evidence-based approach.
Methods: A systematic review and meta-analyses were conducted to compare the risk of all-cause mortality, myocardial infarction (MI), repeat revascularisation, cerebrovascular events (CVE), and major adverse cardiac or cerebrovascular events (MACCE).
Results: A total of 1,837 and 3,052 DM-MVD patients were pooled from four RCTs (FREEDOM, SYNTAX, VA CARDS, and CARDia) and five non-randomised studies. At a mean follow-up of 3 years, CABG compared with PCI-DES was associated with a lower risk of all-cause mortality and MI in RCTs. By contrast, no significant differences were observed in the mean 3.5-year risk of all-cause mortality and MI in non-randomised trials. However, the risk of repeat revascularisation following PCI-DES compared with CABG was 2.3-fold (95% CI=1.8-2.8) and 3.0-fold (2.3-4.2) higher in RCTs and non-randomised trials, respectively. Accordingly, the risk of MACCE at 3 years following CABG compared with PCI-DES was lower in both RCTs and non-randomised trials [0.65 (95% CI: 0.55-0.77) and 0.77 (0.60-0.98), respectively].
Conclusions: Based on our pooled results, we recommend CABG over PCI-DES for patients with DM-MVD. Although non-randomised trials suggest no additional survival, MI, or CVE benefit from CABG over PCI-DES, these results should be interpreted with care.
abstract_id: PUBMED:23641928
Percutaneous coronary intervention with drug-eluting stents or coronary artery bypass surgery in subjects with type 2 diabetes. There is a debate as to whether percutaneous coronary intervention (PCI) with drug-eluting stents or coronary artery bypass surgery (CABG) is the best procedure for subjects with type 2 diabetes and coronary artery disease requiring revascularization. There is some evidence that by following these procedures, there is less further revascularization with CABG than PCI in subjects with diabetes. Two recent studies, namely, the FREEDOM (Future Revascularization Evaluation in patients with Diabetes mellitus: Optimal Management of Multivessel Disease) trial and a trial using a real-world diabetic population from a Registry have shown that the benefits of CABG over PCI in subjects with type 2 diabetes extend to lower rates of death and myocardial infarction, in addition to lower rates of revascularization. However, the rates of stroke may be higher with CABG than PCI with drug-eluting stents in this population. Thus, if CABG is going to be preferred to PCI in subjects with type 2 diabetes and multivessel coronary disease, consideration should be given as to how to reduce the rates of stroke with CABG.
abstract_id: PUBMED:25168098
Impact of previous coronary artery bypass surgery on clinical outcome after percutaneous interventions with second generation drug-eluting stents in TWENTE trial and non-enrolled TWENTE registry. Background: Patients with previous coronary artery bypass grafting (CABG) who underwent percutaneous coronary intervention (PCI) have an increased repeat revascularization rate, but data on contemporary second-generation drug-eluting stents (DES) are scarce.
Methods: We evaluated 1-year clinical outcome following secondary revascularization by PCI in patients of the TWENTE trial and non-enrolled TWENTE registry, and compared patients with previous CABG versus patients without previous CABG.
Results: Of all 1709 consecutive patients, 202 (11.8%) had previously undergone CABG (on average 11.2±8.5 years ago). CABG patients were older (68.5±9.4 years vs. 64.1±10.7 years, P<0.001) and more often had diabetes (28.7% vs. 20.9%, P=0.01) and previous PCI (40.1% vs. 19.8%, P<0.001) compared to patients without previous CABG. Nevertheless, a higher target vessel revascularization (TVR) rate following PCI in the CABG patients (9.4% vs. 2.3%, P<0.001) was the only significant difference in clinical outcome at 1-year follow-up (available for 99.6%). Among CABG patients, the TVR rate was significantly higher in patients treated for graft lesions (n=65; 95.4% in vein grafts) than in patients treated for native coronary lesions only (n=137) (18.5% vs. 5.1%, P=0.002). Among 1638 patients with PCI of native coronary lesions only, there was only a non-significant difference in TVR between patients with previous CABG versus patients without previous CABG (5.1% vs. 2.3%, P=0.08).
Conclusions: Patients with previous CABG showed a favorable safety profile after PCI with second-generation DES. Nevertheless, their TVR rate was still much higher, driven by more repeat revascularizations after PCI of degenerated vein grafts. In native coronary lesions, there was no such difference.
abstract_id: PUBMED:34238026
Ten-year Outcomes After Drug-Eluting Stents or Bypass Surgery for Left Main Coronary Disease in Patients With and Without Diabetes Mellitus: The PRECOMBAT Extended Follow-Up Study. Background Several trials reported differential outcomes after percutaneous coronary intervention with drug-eluting stents (DES) and coronary-artery bypass grafting (CABG) for multivessel coronary disease according to the presence of diabetes mellitus (DM). However, it is not well recognized how DM status affects very-long-term (10-year) outcomes after DES and CABG for left main coronary artery disease. Methods and Results In the PRECOMBAT (Premier of Randomized Comparison of Bypass Surgery versus Angioplasty using Sirolimus-Eluting Stent in Patients with Left Main Coronary Artery Disease) trial, patients with LMCA were randomly assigned to undergo PCI with sirolimus-eluting stents (n=300) or CABG (n=300). The primary outcome was the incidence of major adverse cardiac or cerebrovascular events (MACCE; a composite of death from any cause, myocardial infarction, stroke, or ischemia-driven target-vessel revascularization). Outcomes were examined in patients with (n=192) and without (n=408) medically treated diabetes. The follow-up was extended to at least 10 years for all patients (median, 11.3 years). The 10-year rates of MACCE were not significantly different between DES and CABG in patients with DM (36.3% versus 26.7%, respectively; hazard ratio [HR], 1.35; 95% CI, 0.83-2.19; P=0.23) and without DM (25.3% versus 22.9%, respectively; HR, 1.15; 95% CI, 0.79-1.67; P=0.48) (P-for-interaction=0.48). There were no significant between-group differences in composite of death, MI, or stroke, and all-cause mortality, regardless of DM status. TVR rates were consistently higher after DES than CABG. Conclusions In this 10-year extended follow-up of PRECOMBAT, we found no significant difference between DES and CABG with respect to the incidences of MACCE, serious composite outcome, and all-cause mortality in patients with and without DM with LMCA disease. However, owing to the limited number of patients and no adjustment for multiple testing, overall findings should be considered hypothesis-generating, highlighting the need for further research. Registration URL: https://www.clinicaltrials.gov; Unique identifier: NCT03871127 and NCT00422968.
abstract_id: PUBMED:15340741
Diabetes and percutaneous coronary revascularization in the drug-eluting stent era. Diabetes mellitus has reached epidemic proportions and is associated with decreased event-free survival following coronary revascularization. Although the historical complication rates for diabetic patients following percutaneous coronary intervention have been less than acceptable, the emerging drug-eluting stent technology when coupled with an aggressive adjunctive pharmacological regimen will improve the complication rates following percutaneous revascularization for this high-risk group of patients. This review will focus on the historical data associated with revascularization, percutaneous and surgical, and diabetes mellitus and will highlight the emerging data of drug-eluting stents and adjunctive pharmacology.
Answer: The comparison between coronary artery bypass graft surgery (CABG) and percutaneous coronary intervention (PCI) with drug-eluting stents (DES) for patients with diabetes mellitus has been a subject of ongoing research. Historically, CABG has been associated with better outcomes compared to PCI with balloon angioplasty or older generation stents in diabetic patients. However, newer generation DES have shown promise in potentially bridging the gap between the two treatment modalities.
A study comparing CABG with PCI using cobalt-chromium everolimus-eluting stents (a newer generation DES) did not find a statistically significant increase in mortality for PCI, although CABG was associated with numerically higher stroke rates and PCI with numerically increased repeat revascularization rates. This suggests that mortality outcomes between CABG and PCI with this newer DES may be similar, but this hypothesis requires further testing in future trials (PUBMED:24939927).
The DELTA 2 Registry, which evaluated PCI with new-generation DES in patients with unprotected left main coronary artery disease, found that after a median follow-up of 17 months, PCI was associated with an overall low rate of the composite endpoint of death, myocardial infarction, or cerebrovascular accident. This study suggests that PCI with new-generation DES could be a viable alternative to CABG in selected patients (PUBMED:29217002).
In a long-term observational registry-based study, the outcomes of patients with multivessel disease who underwent PCI with second-generation DES were compared with those who underwent CABG. The study found no significant differences in the adjusted risks for the primary composite outcome of death, spontaneous myocardial infarction, or stroke between the two groups, even in patients with diabetes (PUBMED:34610874).
Another systematic review and meta-analysis comparing CABG with PCI-DES in diabetic patients with multi-vessel disease recommended CABG over PCI-DES, although non-randomized trials suggested no additional survival benefit from CABG over PCI-DES. The review highlighted the need for careful interpretation of these results (PUBMED:25127976).
Overall, while newer generation DES have shown improved outcomes compared to older stents, and in some studies, outcomes comparable to CABG, the evidence is not yet conclusive. Further research, including randomized controlled trials, is needed to fully determine whether newer generation DES can bridge the gap between CABG and PCI in patients with diabetes mellitus. |
Instruction: Are trait-scaling relationships invariant across contrasting elevations in the widely distributed treeline species Nothofagus pumilio?
Abstracts:
abstract_id: PUBMED:27208350
Are trait-scaling relationships invariant across contrasting elevations in the widely distributed treeline species Nothofagus pumilio? Premise Of The Study: The study of scaling examines the relative dimensions of diverse organismal traits. Understanding whether global scaling patterns are paralleled within species is key to identifying causal factors of universal scaling. I examined whether the foliage-stem (Corner's rules), the leaf size-number, and the leaf mass-leaf area scaling relationships remained invariant and isometric with elevation in a widely distributed treeline species in the southern Chilean Andes.
Methods: Mean leaf area, leaf mass, leafing intensity, and twig cross-sectional area were determined for 1-2 twigs of 8-15 Nothofagus pumilio individuals across four elevations (including treeline elevation) and four locations (from central Chile at 36°S to Tierra del Fuego at 54°S). Mixed effects models were fitted to test whether the interaction term between traits and elevation was nonsignificant (invariant).
Key Results: The leaf-twig cross-sectional area and the leaf mass-leaf area scaling relationships were isometric (slope = 1) and remained invariant with elevation, whereas the leaf size-number (i.e., leafing intensity) scaling was allometric (slope ≠ -1) and showed no variation with elevation. Leaf area and leaf number were consistently negatively correlated across elevation.
Conclusions: The scaling relationships examined in the current study parallel those seen across species. It is plausible that the explanation of intraspecific scaling relationships, as trait combinations favored by natural selection, is the same as those invoked to explain across species patterns. Thus, it is very likely that the global interspecific Corner's rules and other leaf-leaf scaling relationships emerge as the aggregate of largely parallel intraspecific patterns.
abstract_id: PUBMED:23788748
Similar variation in carbon storage between deciduous and evergreen treeline species across elevational gradients. Background And Aims: The most plausible explanation for treeline formation so far is provided by the growth limitation hypothesis (GLH), which proposes that carbon sinks are more restricted by low temperatures than by carbon sources. Evidence supporting the GLH has been strong in evergreen, but less and weaker in deciduous treeline species. Here a test is made of the GLH in deciduous-evergreen mixed species forests across elevational gradients, with the hypothesis that deciduous treeline species show a different carbon storage trend from that shown by evergreen species across elevations.
Methods: Tree growth and concentrations of non-structural carbohydrates (NSCs) in foliage, branch sapwood and stem sapwood tissues were measured at four elevations in six deciduous-evergreen treeline ecotones (including treeline) in the southern Andes of Chile (40°S, Nothofagus pumilio and Nothofagus betuloides; 46°S, Nothofagus pumilio and Pinus sylvestris) and in the Swiss Alps (46°N, Larix decidua and Pinus cembra).
Key Results: Tree growth (basal area increment) decreased with elevation for all species. Regardless of foliar habit, NSCs did not deplete across elevations, indicating no shortage of carbon storage in any of the investigated tissues. Rather, NSCs increased significantly with elevation in leaves (P < 0.001) and branch sapwood (P = 0.012) tissues. Deciduous species showed significantly higher NSCs than evergreens for all tissues; on average, the former had 11% (leaves), 158% (branch) and 103% (sapwood) significantly (P < 0.001) higher NSCs than the latter. Finally, deciduous species had higher NSC (particularly starch) increases with elevation than evergreens for stem sapwood, but the opposite was true for leaves and branch sapwood.
Conclusions: Considering the observed decrease in tree growth and increase in NSCs with elevation, it is concluded that both deciduous and evergreen treeline species are sink limited when faced with decreasing temperatures. Despite the overall higher requirements of deciduous tree species for carbon storage, no indication was found of carbon limitation in deciduous species in the alpine treeline ecotone.
abstract_id: PUBMED:29394527
Insights into intraspecific wood density variation and its relationship to growth, height and elevation in a treeline species. The wood economics spectrum provides a general framework for interspecific trait-trait coordination across wide environmental gradients. Whether global patterns are mirrored within species constitutes a poorly explored subject. In this study, I first determined whether wood density co-varies with elevation, tree growth and height at the within-species level. Second, I determined the variation of wood density in different stem parts (trunk, branch and twigs). In situ trunk sapwood, trunk heartwood, branch and twig densities, in addition to stem growth rates and tree height, were determined in adult trees of Nothofagus pumilio at four elevations in five locations spanning 18° of latitude. Mixed effects models were fitted to test relationships among variables. The variation in wood density reported in this study was narrow (ca. 0.4-0.6 g cm-3) relative to global density variation (ca. 0.3-1.0 g cm-3). There was no significant relationship between stem growth rates and wood density. Furthermore, the elevation gradient did not alter the wood density of any stem part. Trunk sapwood density was negatively related to tree height. Twig density was higher than branch and trunk densities. Trunk heartwood density was always significantly higher than sapwood density. Negative across-species trends found in the growth-wood density relationship may not emerge as the aggregate of parallel intraspecific patterns. Actually, trees with contrasting growth rates show similar wood density values. Tree height, which is tightly related to elevation, showed a negative relationship with sapwood density.
abstract_id: PUBMED:24812110
An experimental approach to explain the southern Andes elevational treeline.
Premise Of The Study: The growth limitation hypothesis (GLH) is the most accepted mechanistic explanation for treeline formation, although it is still uncertain whether it applies across taxa. The successful establishment of Pinus contorta--an exotic conifer species in the southern hemisphere--above the Nothofagus treeline in New Zealand may suggest a different mechanism. We tested the GLH in Nothofagus pumilio and Pinus contorta by comparing seedling performance and carbon (C) balance in response to low temperatures.
Methods: At a southern Chilean treeline, we grew seedlings of both species 2 m above ground level, to simulate coupling between temperatures at the meristem and in the air (colder), and at ground level, i.e., decoupling air temperature (relatively milder). We recorded soil and air temperatures as well. After 3 yr, we measured seedling survival and biomass (as a surrogate of growth) and determined nonstructural carbohydrates (NSC).
Key Results: Nothofagus and Pinus did not differ in survival, which, as a whole, was higher at ground level than at the 2-m height. The root-zone temperature for the growing season was 6.6°C. While biomass and NSC decreased significantly for Nothofagus at the 2-m height compared with ground level (C limitation), these trends were not significant for Pinus.
Conclusions: The treeline for Nothofagus pumilio is located at an isotherm that fully matches global patterns; however, its physiological responses to low temperatures differed from those of other treeline species. Support for C limitation in N. pumilio but not in P. contorta indicates that the physiological mechanism explaining their survival and growth at treeline may be taxon-dependent.
abstract_id: PUBMED:26868524
Living on the edge: adaptive and plastic responses of the tree Nothofagus pumilio to a long-term transplant experiment predict rear-edge upward expansion. Current climate change affects the competitive ability and reproductive success of many species, leading to local extinctions, adjustment to novel local conditions by phenotypic plasticity or rapid adaptation, or tracking their optima through range shifts. However, many species have limited ability to expand to suitable areas. Altitudinal gradients, with abrupt changes in abiotic conditions over short distances, represent "natural experiments" for the evaluation of ecological and evolutionary responses under scenarios of climate change. Nothofagus pumilio is the tree species that dominates the montane forests of Patagonia, where it forms pure stands. We evaluated the adaptive value of variation in quantitative traits of N. pumilio under contrasting conditions of the altitudinal gradient with a long-term reciprocal transplant experimental design. While high-elevation plants show little response in plant, leaf, and phenological traits to the experimental trials, low-elevation ones show greater plasticity in their responses to changing environments, particularly at high elevation. Our results suggest a relatively reduced potential for evolutionary adaptation of high-elevation genotypes, and a greater evolutionary potential of low-elevation ones. Under global warming scenarios of forest upslope migration, high-elevation variants may be outperformed by low-elevation ones during this process, leading to the local extinction and/or replacement of these genotypes. These results challenge previous models and predictions expected under global warming for altitudinal gradients, according to which the leading edge is considered to be the upper treeline forests.
abstract_id: PUBMED:32173741
Xylem anatomy needs to change, so that conductivity can stay the same: xylem adjustments across elevation and latitude in Nothofagus pumilio. Background And Aims: Plants have the potential to adjust the configuration of their hydraulic system to maintain its function across spatial and temporal gradients. Species with wide environmental niches provide an ideal framework to assess intraspecific xylem adjustments to contrasting climates. We aimed to assess how xylem structure in the widespread species Nothofagus pumilio varies across combined gradients of temperature and moisture, and to what extent within-individual variation contributes to population responses across environmental gradients.
Methods: We characterized xylem configuration in branches of N. pumilio trees at five sites across an 18° latitudinal gradient in the Chilean Andes, sampling at four elevations per site. We measured vessel area, vessel density and the degree of vessel grouping. We also obtained vessel diameter distributions and estimated the xylem-specific hydraulic conductivity. Xylem traits were studied in the last five growth rings to account for within-individual variation.
Key Results: Xylem traits responded to changes in temperature and moisture, but also to their combination. Reductions in vessel diameter and increases in vessel density suggested increased safety levels with lower temperatures at higher elevation. Vessel grouping also increased under cold and dry conditions, but changes in vessel diameter distributions across the elevational gradient were site-specific. Interestingly, the estimated xylem-specific hydraulic conductivity remained constant across elevation and latitude, and an overwhelming proportion of the variance of xylem traits was due to within-individual responses to year-to-year climatic fluctuations, rather than to site conditions.
Conclusions: Despite conspicuous adjustments, xylem traits were coordinated to maintain a constant hydraulic function under a wide range of conditions. This, combined with the within-individual capacity for responding to year-to-year climatic variations, may have the potential to increase forest resilience against future environmental changes.
abstract_id: PUBMED:23456320
Fine-scale genetic structure of Nothofagus pumilio (lenga) at contrasting elevations of the altitudinal gradient. Montane forests provide the natural framework to test for various ecological settings at distinct elevations as they may affect population demography, which in turn will affect the spatial genetic structure (SGS). We analyzed the fine-scale SGS of Nothofagus pumilio, which dominates mountain areas of Patagonia, in three pairs of sites at contrasting elevations (low- vs. high-elevation). Within a total area of 1 ha, fresh leaf tissue from 90 individuals was collected at each of the six studied stands following a spatially explicit sampling design. Population genetic diversity parameters were analyzed for all sampled individuals using five polymorphic isozyme loci, and a subset of 50 individuals per stand were also screened for five microsatellite loci. The SGS was assessed on 50 individuals/stand, using the combined datasets of isozymes and microsatellites. Most low-elevation stands consisted of older individuals with complex age structures and genetically diverse plots. In contrast, high-elevation stands and one post-fire low-elevation population yielded even-aged structures with evidence of growth suppression, and were genetically homogeneous. All stands yielded significant SGS. Similarly to mature stands of the non-sprouter congener Nothofagus dombeyi, multi-age low-altitude N. pumilio yielded significant SGS weakened by competing species of the understory and the formation of seedling banks. Like the sprouter Nothofagus antarctica, high-altitude stands produced significant SGS as a consequence of occasional seedling establishment reinforced by vegetative spread.
abstract_id: PUBMED:30070684
Size-dependent variations in individual traits and trait scaling relationships within a shade-tolerant evergreen tree species. Premise Of Study: The plant size-trait relationship is a fundamental dimension in the spectrum of plant form and function. However, it remains unclear whether the trait scaling relationship within species is modified by tree size. Investigating size-dependent trait covariations within species is crucial for understanding the ontogenetic constraints on the intraspecific economic spectrum and, more broadly, the structure and causes of intraspecific trait variations.
Methods: We measured eight morphological, stoichiometric, and hydraulic traits for 604 individual plants of a shade-tolerant evergreen tree species, Litsea elongata, in a subtropical evergreen forest of eastern China. Individual trait values were regressed against tree basal diameter to evaluate size-dependent trait variations. Standardized major axis regression was employed to examine trait scaling relationships and to test whether there was a common slope and elevation in the trait scaling relationship across size classes.
Key Results: Small trees tended to have larger, thinner leaves and longer, slenderer stems than larger trees, which indicates an acquisitive economic strategy in juvenile trees. Leaf nitrogen concentrations increased with plant size, which was likely due to a high ratio of structural to photosynthetic nitrogen in the evergreen leaves of large trees. Bivariate trait scaling was minimally modified by tree size, although the elevation of some relationships differed between size classes.
Conclusions: Our results suggest that there are common economic and biophysical constraints on intraspecific trait covariation, independent of tree size. Small and large trees tend to be located at opposite ends of an intraspecific plant economic spectrum.
abstract_id: PUBMED:21039558
Intraspecific trait variation and covariation in a widespread tree species (Nothofagus pumilio) in southern Chile. • The focus of the trait-based approach to study community ecology has mostly been on trait comparisons at the interspecific level. Here we quantified intraspecific variation and covariation of leaf mass per area (LMA) and wood density (WD) in monospecific forests of the widespread tree species Nothofagus pumilio to determine its magnitude and whether it is related to environmental conditions and ontogeny. We also discuss probable mechanisms controlling the trait variation found. • We collected leaf and stem woody tissues from 30-50 trees of different ages (ontogeny) from each of four populations at differing elevations (i.e. temperatures) and placed at each of three locations differing in soil moisture. • The total variation in LMA (coefficient of variation (CV) = 21.14%) was twice that of WD (CV = 10.52%). The total variation in traits was never less than 23% when compared with interspecific studies. Differences in elevation (temperature) for the most part explained variation in LMA, while differences in soil moisture and ontogeny explained the variation in WD. Traits covaried similarly in the altitudinal gradient only. • Functional traits of N. pumilio exhibited nonnegligible variation; LMA varied for the most part with temperature, while WD mostly varied with moisture and ontogeny. We demonstrate that environmental variation can cause important trait variation without species turnover.
abstract_id: PUBMED:31327712
Scaling Principles of Distributed Circuits. Identifying shared quantitative features of a neural circuit across species is important for 3 reasons. Often expressed in the form of power laws and called scaling relationships [1, 2], they reveal organizational principles of circuits, make insights gleaned from model systems widely applicable, and explain circuit performance and function, e.g., visual circuits [3, 4]. The visual circuit is topographic [5, 6], wherein retinal neurons target and activate predictable spatial loci in primary visual cortex. The brain, however, contains many circuits, where neuronal targets and activity are unpredictable and distributed throughout the circuit, e.g., olfactory circuits, in which glomeruli (or mitral cells) in the olfactory bulb synapse with neurons distributed throughout the piriform cortex [7-10]. It is unknown whether such circuits, which we term distributed circuits, are scalable. To determine whether distributed circuits scale, we obtained quantitative descriptions of the olfactory bulb and piriform cortex in six mammals using stereology techniques and light microscopy. Two conserved features provide evidence of scalability. First, the number of piriform neurons n and bulb glomeruli g scale as n ∼ g^(3/2). Second, the average number of synapses between a bulb glomerulus and piriform neuron is invariant at one. Using theory and modeling, we show that these two features preserve the discriminatory ability and precision of odor information across the olfactory circuit. As both abilities depend on circuit size, manipulating size provides evolution with a way to adapt a species to its niche without designing developmental programs de novo. These principles might apply to other distributed circuits like the hippocampus.
Answer: Trait-scaling relationships in the widely distributed treeline species Nothofagus pumilio were examined across contrasting elevations in the southern Chilean Andes. The study found that the foliage-stem (Corner's rules), the leaf size-number, and the leaf mass-leaf area scaling relationships remained invariant with elevation. Specifically, the leaf-twig cross-sectional area and the leaf mass-leaf area scaling relationships were isometric (slope = 1) and did not vary with elevation. However, the leaf size-number scaling was allometric (slope ≠ -1) and also showed no variation with elevation. Leaf area and leaf number were consistently negatively correlated across elevation. These findings suggest that the scaling relationships observed within Nothofagus pumilio parallel those seen across species, indicating that global interspecific scaling patterns may emerge as the aggregate of largely parallel intraspecific patterns (PUBMED:27208350). |
Instruction: Does the type of skin replacement surgery influence the rate of infection in acute burn injured patients?
Abstracts:
abstract_id: PUBMED:23622869
Does the type of skin replacement surgery influence the rate of infection in acute burn injured patients? Introduction: Infection is a major cause of morbidity and mortality following burn. Early debridement and wound closure minimize the risk of infection. This study aimed to examine the association of surgical modalities with burn wound infection (BWI) rate, graft loss and length of stay (LOS) outcome.
Method: This study is a retrospective analysis of all patients undergoing surgical intervention at the Royal Perth Hospital between 2004 and 2011. Multivariable regression analyses were used to predict the impact of burn and patient factors on the outcomes.
Results: Seven hundred seventy patients were eligible for inclusion with 74.8% males and a mean total body surface area (TBSA) burnt of 7.9% (range 1.0-75). Sixty-seven patients (8.7%) had positive post-operative swabs indicating potential wound infection. Age and TBSA significantly increased the risk of BWI (confirmed by quantitative swab). Positive microbiology was not associated with surgery type. Age, TBSA, diabetes and surgical modalities had significant influence on LOS in hospital. Only TBSA was an independent predictor of graft loss.
Conclusion: Age, TBSA and diabetes were associated with poorer outcomes after burn. Surgery type was not associated independently with the risk of infection.
abstract_id: PUBMED:37510832
Skin Bank Establishment in Treatment of Severe Burn Injuries: Overview and Experience with Skin Allografts at the Vienna Burn Center. Depending on their extent, burn injuries require different treatment strategies. In cases of severe large-area trauma, the availability of vital skin for autografting is limited. Donor skin allografts are a well-established but rarely standardized option for temporary wound coverage. Ten patients were eligible for inclusion in this retrospective study. Overall, 202 donor skin grafts obtained from the in-house skin bank were applied in the Department of Plastic, Reconstructive and Aesthetic Surgery, Medical University of Vienna. Between 2017 and 2022, we analysed the results of patient treatment, the selection of skin donors, tissue procurement, tissue processing and storage of allografts, as well as the condition and morphology of the allografts before application. The average Abbreviated Burn Severity Index (ABSI) was 8.5 (range, 5-12), and the mean affected total body surface area (TBSA) was 46.1% (range, 20-80%). In total, allograft application was performed 14 times. In two cases, a total of eight allografts were removed due to local infection, accounting for 3.96% of skin grafts. Six patients survived the acute phase of treatment. Scanning electron microscope images and histology showed no signs of scaffold decomposition and confirmed intact tissue layers of the allografts. The skin banking program and the application of skin allografts at the Vienna Burn Center can be considered successful. In severe burn injuries, skin allografts provide time by serving as sufficient wound coverage after early necrosectomy. Having an in-house skin banking program at a dedicated burn centre is particularly advantageous since issues of availability and distribution can be minimized. Skin allografts provide a reliable treatment option in patients with extensive burn injuries.
abstract_id: PUBMED:26730039
Evaluation of Amniotic Membrane Effectiveness in Skin Graft Donor Site Dressing in Burn Patients. Although the recipient site in burn wounds is dressed with universally accepted materials, the ideal management of split-thickness skin donor sites remains controversial. The aim of our study is to compare two methods of wound dressing in donor sites of split-thickness skin graft in patients undergoing burn wound reconstructive surgery. Forty-two consecutive patients with second- and third-degree burns with a total body surface area between 20 and 40 % were enrolled in this randomized clinical trial conducted in Motahari Burn Hospital in Tehran, Iran. In each patient, two anatomic areas with similar features were randomly selected as intervention and control donor sites. The intervention site was dressed with amniotic membrane, whereas the control site was treated with Vaseline-impregnated gauze. Wounds were examined daily by expert surgeons to measure the clinical outcomes including duration of healing, severity of pain, and infection rate. The mean ± SD age of patients was 31.17 ± 13.72 years; furthermore, burn percentage had a mean ± SD of 31.19 ± 10.56. The mean ± SD of patients' cooperation score was 1.6 ± 0.79 in the intervention group compared with 2.93 ± 0.71 in the control group, revealing a statistically significant difference (P < 0.05). Duration of wound healing was significantly shorter (P < 0.05) in the intervention group (17.61 ± 2.56 days) compared with the control group (21.16 ± 3.45 days). However, there was no significant difference in terms of wound infection rate between donor sites in the control and intervention groups (P > 0.05). Amniotic membrane as an alternative for dressing of skin graft donor sites provides significant benefits by increasing patients' comfort via diminishing the number of dressing changes and facilitating the process of wound healing.
abstract_id: PUBMED:35706557
The Role of Skin Substitutes in Acute Burn and Reconstructive Burn Surgery: An Updated Comprehensive Review. Burns disrupt the protective skin barrier with consequent loss of cutaneous temperature regulation, infection prevention, evaporative losses, and other vital functions. Chronically, burns lead to scarring, contractures, pain, and impaired psychosocial well-being. Several skin substitutes are available and replace the skin and partially restore functional outcomes and improve cosmesis. We performed a literature review to update readers on biologic and synthetic skin substitutes to date applied in acute and reconstructive burn surgery. Improvement has been rapid in the development of skin substitutes in the last decade; however, no available skin substitute fulfills criteria as a perfect replacement for damaged skin.
abstract_id: PUBMED:34584504
Relationship Between Serum Albumin Levels And The Outcome Of Split-Thickness Skin Graft In Burn Injury Patients. Burn injury is still a global health problem due to its high incidence. Healing of burn wounds requires an optimal state of the body that is characterized by serum albumin level, especially in the category of patients that require skin graft to cover the wound caused by the deep burn. This study investigates the relationship between albumin levels and the outcome of split-thickness skin graft (STSG) and obtains a tolerance limit for albumin levels that can be successful in STSG. This was a prospective cohort study at our Plastic Surgery Center in Bandung, West Java, Indonesia from June 2019 to November 2020. Forty-seven burn injury patients who had undergone STSG qualified as the study subjects based on the criteria set. Of these patients, 85.11% were male and 68.08% were of productive age. Preoperative albumin level had no significant correlation with graft outcome (P>0.05). The area under the curve (AUC) was 0.758 (95% CI: 0.605-0.910). The optimal cut-off point for albumin levels was 2.175 (sensitivity of 0.78 and a specificity of 0.714). In our study, graft healing had no significant correlation with albumin levels. Further study is needed to assess the relationship between serum albumin levels (preoperative and postoperative) and graft outcome, and to assess infection status.
abstract_id: PUBMED:31492583
Dermal regenerative matrix use in burn patients: A systematic review. Background: Dermal regenerative matrices (DRMs) have been used for several decades in the treatment of acute and reconstructive burn injury. The objective of this study was to perform a systematic review of the literature to assess clinical outcomes and safety profile of DRMs in full-thickness burn injury.
Methods: Comprehensive searches of MEDLINE, EMBASE, CINAHL, and Cochrane Library were performed from 1988 to 2017. Two independent reviewers completed preliminary and full-text screening of all articles. English-language articles reporting on DRM use in patients with full-thickness burn injury were included.
Results: The literature search generated 914 unique articles. Following screening, 203 articles were assessed for eligibility, and 72 met inclusion criteria for analysis. DRM was applied to 1084 patients (74% acute burns, 26% burn reconstruction). Of the twelve studies that described changes in ROM, significant improvement was observed in 95% of reconstructive patients. The most frequently treated reconstructive sites were the neck, hand/wrist, lower extremity, and axilla. The Vancouver Scar Scale was used in eight studies and indicated a significant improvement in scar quality with DRM. The overall complication rate was 13%, most commonly infection, graft loss, hematoma formation, and contracture.
Conclusions: Although variability in functional and cosmetic outcomes was observed, DRM demonstrates improvements in ROM and scar appearance without objective regression. Essential demographic data were lacking in many studies, highlighting the need for future standardization of reporting outcomes in burns following application of dermal substitutes.
abstract_id: PUBMED:35111453
Skin Graft Versus Local Flaps in Management of Post-burn Elbow Contracture. Introduction: A contracture is pathological scar tissue resulting from local skin tissue damage, secondary to different local factors. It can restrict joint mobility, resulting in deformity and disability. This study aimed to investigate the outcomes of skin grafts compared to local flaps to reconstruct post-burn elbow contractures. These parameters included regaining function, range of movement, recurrence, and local wound complications. Methodology: A retrospective study reviewed 21 patients for elbow reconstruction over 12 months. Only patients with post-burn elbow contracture were included. Other causes, including previous corrective surgery, associated elbow stiffness, and patients who opted out of post-operative physiotherapy, were excluded. Patients were categorized according to the method of coverage into three groups: graft alone (G1), local flap (G2), or combined approach (G3). Results: Females were at three times higher risk of suffering a burn injury, while almost half of the cases were children. Scald injury represented 81% of burn causes. G1, G2, and G3 were used in 47.6%, 42.9%, and 9.5% of cases, respectively. The overall rate of infection was 28.6%. Complete (100%) graft take was recorded in 83.3% of cases, whereas flap take was 91.1%. After 12 months of follow-up, re-contracture occurred in 60% and 22.8% of cases in G1 and G2, while the satisfaction rate was 70% and 100% in the two groups, respectively. The overall satisfaction was 85.7% across all groups. Conclusion: Grafts and local flaps are reasonable options for post-burn contracture release; however, flaps are superior. Coverage selection depends on the lost tissue area and exposure of underlying deep structures. Physiotherapy and patient satisfaction are crucial in the outcomes.
abstract_id: PUBMED:34237749
Copper Ions Ameliorated Thermal Burn-Induced Damage in ex vivo Human Skin Organ Culture. Introduction: The zone of stasis is formed around the coagulation zone following skin burning and is characterized by its unique potential for salvation. The cells in this zone may die or survive depending on the severity of the burn and therefore are target for the local treatments of burns. Their low survival rate is consistent with decreased tissue perfusion, hypotension, infection, and/or edema, resulting in a significant increase in the wound size following burning. Copper is an essential trace mineral needed for the normal function of almost all body tissues, including the skin.
Objective: The aim of the work was to study the effect copper ions have on skin burn pathophysiology.
Methods: Skin obtained from healthy patients undergoing abdominoplasty surgery was cut into 8 × 8 mm squares, and round 0.8-mm diameter burn wounds were inflicted on the skin explants. The burned and control intact skin samples were cultured up to 27 days after wounding. Immediately following injury and then again every 48 h, saline only or containing 0.02 or 1 µM copper ions was added onto the skin explant burn wounds.
Results: We found that exposing the wounded sites immediately after burn infliction to 0.02 or 1 µM copper ions reduced the deterioration of the zone of stasis and the increase in wound size. The presence of the copper ions prevented the dramatic increase of pro-inflammatory cytokines (interleukin (IL)-6 and IL-8) and transforming growth factor beta-1 that followed skin burning. We also detected re-epithelialization of the skin tissue and a greater amount of collagen fibers upon copper treatment.
Conclusion: The deterioration of the zone of stasis and the increase in wound size following burning may be prevented or reduced by using copper ion-based therapeutic interventions.
abstract_id: PUBMED:32190588
Autologous Platelet Rich Plasma As A Preparative for Resurfacing Burn Wounds with Split Thickness Skin Grafts. Background: Split thickness skin graft is a widely accepted technique to cover large defects. Shearing, hematoma and infection have often been attributed as major causes for graft loss. Autologous platelet rich plasma (PRP) has been used in various treatment modalities in the field of plastic surgery for its healing, adhesive and hemostatic properties owing to the growth factors that are released. This Study primarily throws light on the usage of PRP over difficult Burn wound beds to augment graft uptake and attenuate complications.
Methods: The patients were divided into two groups of those who were subjected to use of autologous PRP as a preparative burn surfacing and the control group who underwent standard method of treatment.
Results: Patients in the PRP group showed a significantly higher graft adherence rate compared with those treated by the standard method. PRP also reduced pain and hematoma formation.
Conclusion: Application of PRP is a safe, cost-effective, and easy method to increase the graft adherence rate in burn patients in whom graft loss is observed and donor sites are in short supply.
abstract_id: PUBMED:21991221
Nosocomial infection in Sulaimani Burn Hospital, Iraq. Wound infections are common problems in burn units and mostly originate from nosocomial contamination. The development of infections in burn cases is serious because of their effects on the course of the disease and patient outcomes. Many burned patients die as a result of infection during their hospital courses. The rate of infection in burn cases is extremely high in developing countries. For these reasons, we carried out a study of the bacteriological profiles and a comparison of antimicrobial resistance patterns of predominant bacteria, over a period of one year. The study was conducted during a period of 12 months (from May 2008 to April 2009) at the Sulaimani Burn, Reconstructive and Plastic Surgery Hospital, Sulaimani, Iraq. In the present study, a total of 1126 samples of swabs and blood were processed from 760 admitted patients (505 female and 255 male). Bacterial isolates were found in 935 samples (83%) and only 180 wound swabs were sterile (16%); the others became contaminated in the lab and no test was performed (1%). Total bacteria isolated: 1402 organisms (average number of organisms per sample, 1.5). Staphylococcus aureus was the commonest cause of bacterial burn wound invasion, accounting for 34% of cases, while Pseudomonas aeruginosa caused 18%. The total percentage of multidrug-resistant organisms among all organisms was 51% (715/1402); the figure was 87.1% for methicillin-resistant Staphylococcus aureus among all cases of Staphylococcus aureus (176/202), and the corresponding figure among non-fermenting bacteria was 28.8%.
Answer: The type of skin replacement surgery does not appear to independently influence the rate of infection in acute burn injured patients. According to a retrospective analysis of patients undergoing surgical intervention at the Royal Perth Hospital between 2004 and 2011, positive microbiology was not associated with the type of surgery performed. Instead, age, total body surface area (TBSA) burnt, and diabetes were factors that significantly influenced the length of stay in the hospital and were associated with poorer outcomes after burn. The type of surgery was not independently associated with the risk of infection (PUBMED:23622869).
Additionally, in a study at the Vienna Burn Center, the use of donor skin allografts from an in-house skin bank for temporary wound coverage in severe burn injuries showed a low rate of local infection, with only 3.96% of skin grafts being removed due to infection (PUBMED:37510832). This suggests that the use of skin allografts, a type of skin replacement surgery, can be successful with a relatively low infection rate when managed properly in a specialized center.
Moreover, a study evaluating the effectiveness of amniotic membrane in dressing skin graft donor sites in burn patients found no significant difference in terms of wound infection rate between donor sites treated with amniotic membrane and those treated with Vaseline-impregnated gauze (PUBMED:26730039). This indicates that the choice of dressing material in this context did not significantly affect the infection rate.
In summary, the available evidence suggests that the type of skin replacement surgery does not independently influence the rate of infection in acute burn injured patients. Other factors such as patient age, TBSA burnt, and comorbidities like diabetes may play a more significant role in infection risk and outcomes following burn injuries. |
Instruction: Simulation-based endovascular skills assessment: the future of credentialing?
Abstracts:
abstract_id: PUBMED:18372149
Simulation-based endovascular skills assessment: the future of credentialing? Objectives: Simulator-based endovascular skills training measurably improves performance in catheter-based image-guided interventions. The purpose of this study was to determine whether structured global performance assessment during endovascular simulation correlated well with trainee-reported procedural skill and prior experience level.
Methods: Fourth-year and fifth-year general surgery residents interviewing for vascular fellowship training provided detailed information regarding prior open vascular and endovascular operative experience. The pretest questionnaire responses were used to separate subjects into low (<20 cases) and moderate (20 to 100) endovascular experience groups. Subjects were then asked to perform a renal angioplasty/stent procedure on the Procedicus Vascular Intervention System Trainer (VIST) endovascular simulator (Mentice Corporation, Gothenburg, Sweden). The subjects' performance was supervised and evaluated by a blinded expert interventionalist using a structured global assessment scale based on angiography setup, target vessel catheterization, and the interventional procedure. Objective measures determined by the simulator were also collected for each subject. A postsimulation questionnaire was administered to determine the subjects' self-assessment of their performance.
Results: Seventeen surgical residents from 15 training programs completed questionnaires before and after the exercise and performed a renal angioplasty/stent procedure on the endovascular simulator. The beginner group (n = 8) reported prior experience of a median of eight endovascular cases (interquartile range [IQR], 6.5-17.8; range, 4-20), and intermediate group (n = 9) had previously completed a median of 42 cases (IQR, 31-44; range, 25-89, P = .01). The two groups had similar prior open vascular experience (79 cases vs 75, P = .60). The mean score on the structured global assessment scale for the low experience group was 2.68 of 5.0 possible compared with 3.60 for the intermediate group (P = .03). Scores for subcategories of the global assessment score for target vessel catheterization (P = .02) and the interventional procedure (P = .05) contributed more to the differentiation between the two experience groups. Total procedure time, fluoroscopy time, average contrast used, percentage of lesion covered by the stent, placement accuracy, residual stenosis rates, and number of cine loops utilized were similar between the two groups (P > .05).
Conclusion: Structured endovascular skills assessment correlates well with prior procedural experience within a high-fidelity simulation environment. In addition to improving endovascular training, simulators may prove useful in determining procedural competency and credentialing standards for endovascular surgeons.
abstract_id: PUBMED:21549986
Credentialing of surgical skills centers. Major imperatives regarding quality of patient care and patient safety are impacting surgical care and surgical education. Also, significant emphasis continues to be placed on education and training to achieve proficiency, expertise, and mastery in surgery. Simulation-based surgical education and training can be of immense help in acquiring and maintaining surgical skills in safe environments without exposing patients to risk. Opportunities for repetition of tasks can be provided to achieve pre-established standards, and knowledge and skills can be verified using valid and reliable assessment methods. Also, expertise and mastery can be attained through repeated practice, specific feedback, and establishment of progressively higher learning goals. Simulation-based education and training can help surgeons maintain their skills in infrequently performed procedures and regain proficiency in procedures they have not performed for a period of time. In addition, warm-ups and surgical rehearsals in simulated environments should enhance performance in real settings. Major efforts are being pursued to advance the field of simulation-based surgical education. New education and training models involving validation of knowledge and skills are being designed for practicing surgeons. A competency-based national surgery resident curriculum was recently launched and is undergoing further enhancements to address evolving education and training needs. Innovative simulation-based surgical education and training should be offered at state-of-the-art simulation centers, and credentialing and accreditation of these centers are key to achieving their full potential.
abstract_id: PUBMED:26797930
The role of simulation in the development of endovascular surgical skills. Endovascular trainees in the National Health Service still largely rely on the apprenticeship model from the late 19th century. As the scope for endovascular therapy increases, due to the rapid innovation, evolution and refinement of technology, so too do patients' therapeutic options. This climate has also opened the door for more novel training adjuncts, to address the gaps that exist in our current endovascular training curriculum. The aim of this paper is to present a succinct overview of endovascular simulation, synthesizing the trials and research behind this rapidly evolving training as well as highlighting areas where further research is required. The authors searched MEDLINE and EMBASE for relevant manuscripts on all aspects of endovascular simulation training. A comprehensive Google search was also undertaken to look for any relevant information on endovascular training courses available and any unpublished work that had been presented at relevant scientific meetings. Papers were categorized into the four models: synthetic, animal, virtual reality and human cadaver, and separate searches for evidence of skill transfer were also undertaken. Authors of novel research projects were contacted for further details of unpublished work, and permission was granted to report such findings in this manuscript.
abstract_id: PUBMED:26198625
Targeting clinical outcomes: Endovascular simulation improves diagnostic coronary angiography skills. Objective: The purpose of this study is to determine the effects of simulation-based medical education (SBME) on the skills required to perform coronary angiography in the cardiac catheterization laboratory.
Background: Cardiovascular fellows commonly learn invasive procedures on patients. Because this approach is not standardized, it can result in inconsistent skill acquisition, with some concepts and skills omitted. Also, the learning curve varies between trainees, yielding variability in skill acquisition. Therefore, coronary angiography skills are an excellent target for SBME in an environment in which direct patient care is not jeopardized.
Methods: From January 2013 to June 2013, 14 cardiovascular fellows entering the cardiac catheterization laboratory at a tertiary care teaching hospital were tested on an endovascular simulator to assess baseline skills. All fellows subsequently underwent didactic teaching and preceptor-lead training on the endovascular simulator. Topics included basic catheterization skills and a review of catheterization laboratory systems. Following training, all fellows underwent a post-training assessment on the endovascular simulator. Paired t tests were used to compare items on the skills checklist and simulator defined variables.
Results: Cardiovascular fellows scored significantly higher on a diagnostic coronary angiography skills checklist following SBME using an endovascular simulator. The mean pretest score was 66.6% (SD = 9.7%) compared to 86.0% (SD = 6.3%) following simulator training (P < 0.001). Additional findings include significant reduction in procedure time and use of cine-fluoroscopy at posttest.
Conclusions: SBME significantly improved cardiovascular fellows' performance of simulated coronary angiography skills. Standardized simulation-based education is a valuable adjunct to traditional clinical education for cardiovascular fellows.
abstract_id: PUBMED:35224180
A micro-credentialing methodology for improved recognition of HE employability skills. Increasingly, among international organizations concerned with unemployment rates and industry demands, there is an emphasis on the need to improve graduates' employability skills and the transparency of mechanisms for their recognition. This research presents the Employability Skills Micro-credentialing (ESMC) methodology, designed under the EPICA Horizon 2020 (H2020) project and tested at three East African universities, and shows how it fosters pedagogical innovation and promotes employability skills integration and visibility. The methodology, supported by a competency-based ePortfolio and a digital micro-credentialing system, was evaluated using a mixed-method design, combining descriptive statistics and qualitative content analysis to capture complementary stakeholder perspectives. The study involved the participation of 13 lecturers, 169 students, and 24 employers. The results indicate that the ESMC methodology is a promising approach for supporting students in their transition from academia to the workplace. The implementation of the methodology and the involvement of employers entails rethinking educational practices and academic curricula to embed employability skills. It enables all actors to broaden their understanding of the relationship between higher education and the business sector and to sustain visibility, transparency, and reliability of the recognition process. These findings indicate that there are favourable conditions in the region for the adoption of the approach, which is a meaningful solution for the stakeholder community to address the skills gap.
abstract_id: PUBMED:30579447
The relevance of low-fidelity virtual reality simulators compared with other learning methods in basic endovascular skills training. Objective: The use of simulators has shown a profound impact on the development of both training and assessment of endovascular skills. Furthermore, there is evidence that simulator training is of great benefit for novice trainees. However, there are only a few simulators available geared specifically toward novice learners. Whereas research suggests that low-fidelity simulators could fill this gap, there are insufficient data available to determine the role of low-fidelity simulators in the training of endovascular skills.
Methods: Medical students in their fifth year (N = 50) with no previous endovascular experience were randomized into three groups: conventional learning through a video podcast (group V; n = 12), low-fidelity simulation training with tablet-paired touch-gesture navigation (group A; n = 12), and low-fidelity simulation training with tablet-paired physical endovascular tool navigation (group S; n = 26). Within their respective groups, all students attended a 1-day class on basic endovascular skills. Questionnaire items for self-assessment before and after the class and assessment after the class of the participant's practical skills on a high-fidelity simulator were analyzed across all three groups as well as for each group separately using nonparametric tests.
Results: All 50 participants completed the training. Participants in group S showed a significantly increased interest in working in interventional cardiology (P = .02) and vascular surgery (P = .03) after the class. Evaluation of the questionnaire items after the class showed that participants in group S rated their practical skills significantly higher after the class compared with those in group V and group A (P < .001 for pairwise comparison of all three groups, respectively), creating a significant trend across the three groups. However, analysis of the practical skills assessment for all three groups showed a significant difference between the groups only for choosing a guidewire (P = .045) and a significant trend in performance across the groups for choosing a guidewire and for positioning the guidewire in the vessel (P = .02 and P = .05, respectively). All other steps of the skills assessment showed no significant differences or a trend across the groups.
Conclusions: Low-fidelity simulation training, particularly with physical endovascular tool navigation, led to increased motivation in novice trainees. Whereas simulator training was associated with increased confidence of trainees in their skills, assessment of their practical skills showed no actual improvement in this study. Overall, low-fidelity simulation has the potential to benefit novice trainees, but possible risks of simulation training should be further evaluated.
abstract_id: PUBMED:18571138
Computer-based endoscopy simulation: emerging roles in teaching and professional skills assessment. Advances in endoscopy simulation are reviewed with emphasis on applications in teaching and skills assessment. Endoscopy simulation has only been realized recently in a computer-based fashion because of advances in technology, but several studies have been performed both to validate computer-based endoscopy simulators and to assess their potential role in training. Multiple studies have shown that simulators can distinguish between clinicians at different skill levels and also have shown improvement in clinician skill, particularly at the early stages of training. This article summarizes those studies. The cost versus benefit of endoscopic simulators is also discussed, as well as the upcoming role of simulators in judging competence and as a tool in the credentialing process.
abstract_id: PUBMED:9882802
Endovascular interventions training and credentialing for vascular surgeons. This article reviews issues concerning the training and credentialing of vascular surgeons in the use of endovascular techniques in the peripheral vascular system. These guidelines update a prior document that was published in 1993. They have been rewritten to accommodate the rapid evolution that has occurred in the field and to provide the appropriate requirements that a vascular surgeon should fulfill to be competent in the basic skills needed to safely and effectively perform all presently accepted diagnostic and therapeutic endovascular procedures.
abstract_id: PUBMED:34053018
Validity of robotic simulation for high-stakes examination: a pilot study. Simulation is increasingly being used to train surgeons and assess technical competency in robotic skills. The construct validity of using simulation performance for high-stakes examinations such as credentialing has not been studied appropriately. There are data on how simulation exercises can differentiate between novice and expert surgeons, but there are limited data to support their use for distinguishing intermediate from competent surgeons. Senior cardiothoracic trainees with limited robotic but significant laparoscopic experience ("intermediate surgeons", IS) and practicing robotic thoracic surgeons ("competent surgeons", CS) participating in a thoracic cadaver robotic course were evaluated on three Da Vinci (Xi) simulations. Scores were separately recorded into components and analyzed by t-test for significant differences between groups. Twenty-one competent and 17 intermediate surgeons participated. Overall scores did not differ significantly between the groups in any exercise. Simulation exercises do not appear to distinguish intermediate from competent surgeon performance of robotic skills. Without better validity data, the use of simulation for credentialing should be thoughtfully considered.
abstract_id: PUBMED:37979611
Certification of Basic Skills in Endovascular Aortic Repair Through a Modular Simulation Course With Real Time Performance Assessment. Objective: Endovascular aortic repair (EVAR) is being used increasingly for the treatment of infrarenal abdominal aortic aneurysms. Improvement in educational strategies is required to teach future vascular surgeons EVAR skills, but a comprehensive, pre-defined e-learning and simulation curriculum remains to be developed and tested. EndoVascular Aortic Repair Assessment of Technical Expertise (EVARATE), an assessment tool for simulation based education (SBE) in EVAR, has previously been designed to assess EVAR skills, and a pass limit defining mastery level has been set. However, EVARATE was developed for anonymous video ratings in a research setting, and its feasibility for real time ratings in a standardised SBE programme in EVAR is unproven. This study aimed to test the effect of a newly developed simulation based modular course in EVAR. In addition, the applicability of EVARATE for real time performance assessments was investigated.
Methods: The European Society of Vascular Surgery (ESVS) and Copenhagen Certification Programme in EVAR (ENHANCE-EVAR) was tested in a prospective cohort study. ENHANCE-EVAR is a modular SBE programme in EVAR consisting of e-learning and hands-on SBE. Participants were rated with the EVARATE tool by experienced EVAR surgeons.
Results: Twenty-four physicians completed the study. The mean improvement in EVARATE score during the course was +11.8 (95% confidence interval 9.8 - 13.7) points (p < .001). Twenty-two participants (92%) passed with a mean number of 2.8 ± 0.7 test attempts to reach the pass limit. Cronbach's alpha coefficient was 0.91, corresponding to excellent reliability of the EVARATE scale. Differences between instructors' EVARATE ratings were insignificant (p = .16), with a maximum variation between instructors of ± 1.3 points.
Conclusion: ENHANCE-EVAR, a comprehensive certifying EVAR course, was proven to be effective. EndoVascular Aortic Repair Assessment of Technical Expertise (EVARATE) is a trustworthy tool for assessing performance within an authentic educational setting, enabling real time feedback.
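Illustrative aside (not drawn from the study above): the ENHANCE-EVAR abstract reports a Cronbach's alpha of 0.91 for the EVARATE scale. The sketch below shows how that internal-consistency coefficient is typically computed; the rating matrix, item count, and values are assumptions for illustration only, not the trial's data.
```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (participants x items) score matrix:
    alpha = k / (k - 1) * (1 - sum(item variances) / variance of the summed score)."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # sample variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # sample variance of the total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical ratings: 24 participants x 8 checklist items (values assumed, not EVARATE data).
rng = np.random.default_rng(42)
latent = rng.normal(size=(24, 1))                       # shared "true skill" component
ratings = latent + rng.normal(scale=0.7, size=(24, 8))  # items = skill + item-specific noise
print(f"Cronbach's alpha: {cronbach_alpha(ratings):.2f}")
```
Because the simulated items all share the same latent component, the printed coefficient lands near the high value reported for EVARATE; with unrelated items it would fall toward zero.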
Answer: Simulation-based endovascular skills assessment is increasingly being recognized as a valuable tool for credentialing in the field of vascular surgery. Studies have demonstrated that structured endovascular skills assessment correlates well with prior procedural experience within a high-fidelity simulation environment, suggesting that simulators may be useful in determining procedural competency and setting credentialing standards for endovascular surgeons (PUBMED:18372149). Simulation-based surgical education and training can help surgeons acquire and maintain surgical skills in safe environments, verify knowledge and skills using valid and reliable assessment methods, and maintain skills in infrequently performed procedures (PUBMED:21549986).
The role of simulation in the development of endovascular surgical skills is evolving rapidly due to technological innovation and the increasing scope of endovascular therapy. Simulation training is being integrated into curricula to address gaps in the current endovascular training model (PUBMED:26797930). Furthermore, simulation-based medical education (SBME) has been shown to significantly improve the performance of cardiovascular fellows in performing simulated coronary angiography skills, indicating that standardized SBME is a valuable adjunct to traditional clinical education (PUBMED:26198625).
The concept of micro-credentialing, which involves the recognition of specific competencies and skills, is also gaining traction. A micro-credentialing methodology has been shown to foster pedagogical innovation and promote the integration and visibility of employability skills, suggesting its potential application in the recognition of endovascular skills (PUBMED:35224180).
While low-fidelity virtual reality simulators have been shown to increase motivation among novice trainees, their actual improvement in practical skills requires further evaluation (PUBMED:30579447). Computer-based endoscopy simulation has been validated as a tool in teaching and skills assessment, with potential applications in judging competence and credentialing (PUBMED:18571138).
However, the validity of using simulation performance for high-stakes examinations such as credentialing has not been fully established, particularly in distinguishing intermediate from competent surgeons (PUBMED:34053018). Nevertheless, a comprehensive certifying EVAR course using simulation has been proven effective, and the EVARATE tool has been shown to be trustworthy for assessing performance within an educational setting, enabling real-time feedback (PUBMED:37979611).
In conclusion, simulation-based endovascular skills assessment is a promising direction for the future of credentialing, offering a standardized, objective, and safe method to evaluate and improve the skills of vascular surgeons. |
Instruction: Tailoring the parametrectomy in stages IA2-IB1 cervical carcinoma: is it feasible and safe?
Abstracts:
abstract_id: PUBMED:15721427
Tailoring the parametrectomy in stages IA2-IB1 cervical carcinoma: is it feasible and safe? Objective: Several authors have proposed the use of a less aggressive surgery (i.e., modified or type 2 radical hysterectomy) for patients affected by early stages cervical carcinoma. However, little attention has been given to the evaluation of adverse prognostic factors before selecting the surgical approach. The aim of this study is to evaluate the feasibility and safety of tailoring parametrectomy on the basis of specific prognostic factors preoperatively assessed.
Methods: Patients with cervical carcinoma FIGO IA2-IB1 entered the study. Eligibility criteria were: age < 75 years, no contraindications for surgery, informed consent, expected cooperation for follow-up. Tumor size was preoperatively assessed by pelvic examination under anesthesia and pelvic MRI. Patients were submitted to systematic lymphadenectomy of superficial obturator, external iliac, and interiliac nodes by laparotomy or laparoscopy. Lymph nodes were sent for frozen section. Node-negative patients were submitted to modified radical hysterectomy (type 2). Patients with nodal metastases underwent classical radical hysterectomy (types 3-4) and systematic pelvic and aortic node dissection up to the inferior mesenteric artery. Survival rates were calculated using the Kaplan-Meier product-limit method.
Results: Eighty-three patients were enrolled in the study. Among these, 63 patients were node-negative at frozen section and were therefore submitted to modified radical hysterectomy (Group A); 20 patients were found to have nodal metastases intra-operatively and were therefore submitted to classical radical hysterectomy (Group B). Median follow-up was 30 months. Five-year overall survival was 95% for Group A and 74% for Group B.
Conclusions: Pre-treatment evaluation of adverse prognostic factors in patients affected by cervical cancer FIGO stages IA2-IB1 is feasible and mandatory to determine if a less radical surgery is applicable and safe.
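Illustrative aside (hypothetical data, not the patients above): the abstract reports survival estimated with the Kaplan-Meier product-limit method. A minimal sketch of that estimator on invented follow-up times is shown below.
```python
import numpy as np

def kaplan_meier(time: np.ndarray, event: np.ndarray):
    """Product-limit estimate: S(t) = prod over event times t_i <= t of (1 - d_i / n_i),
    where d_i = deaths at t_i and n_i = subjects still at risk just before t_i."""
    at_risk = len(time)
    surv, curve = 1.0, []
    for t in np.unique(time):                 # unique follow-up times, ascending
        deaths = int(event[time == t].sum())  # events observed at this time
        if deaths:
            surv *= 1.0 - deaths / at_risk
            curve.append((float(t), surv))
        at_risk -= int((time == t).sum())     # deaths and censorings leave the risk set
    return curve

# Hypothetical follow-up (months) and event indicator (1 = death, 0 = censored); not study data.
t = np.array([6, 12, 18, 24, 30, 36, 42, 48, 54, 60], dtype=float)
e = np.array([0, 1, 0, 0, 1, 0, 0, 0, 1, 0])
for when, s in kaplan_meier(t, e):
    print(f"S({when:.0f} months) = {s:.2f}")
```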
abstract_id: PUBMED:37763060
Radical Hysterectomy in Early-Stage Cervical Cancer: Abandoning the One-Fits-All Concept. Two pillars in modern oncology are treatment personalization and the reduction in treatment-related morbidity. For decades, the one-fits-all concept of radical hysterectomy has been the cornerstone of early-stage cervical cancer surgical treatment. However, no agreement exists about the prevalent method of parametrial invasion, and the literature is conflicting regarding the extent of parametrectomy needed to achieve adequate surgical radicality. Therefore, authors started investigating if less radical surgery was feasible and oncologically safe in these patients. Two historical randomized controlled trials (RCTs) compared classical radical hysterectomy (RH) to modified RH and simple hysterectomy. Less radical surgery showed a drastic reduction in morbidity without jeopardizing oncological outcomes. However, given the high frequency of adjuvant radiotherapy, the real impact of reduced radicality could not be estimated. Subsequently, several retrospective studies investigated the chance of tailoring parametrectomy according to the tumor's characteristics. Parametrial involvement was shown to be negligible in early-stage low-risk cervical cancer. An observational prospective study and a phase II exploratory RCT have recently confirmed the feasibility and safety of simple hysterectomy in this subgroup of patients. The preliminary results of a large prospective RCT comparing simple vs. radical surgery for early-stage low-risk cervical cancer show strong probability of giving a final answer on this topic.
abstract_id: PUBMED:34675677
Simple Hysterectomy for Patients with Stage IA2 Cervical Cancer: A Retrospective Cohort Study. Objective: The purpose of this study was to compare long-term survival outcomes of simple hysterectomy versus radical hysterectomy in stage IA2 cervical cancer.
Methods: A total of 440 patients who underwent simple hysterectomy (SH group) or radical hysterectomy (RH group) between 2014 and 2019 were included in this study. Overall survival (OS) and disease-free survival (DFS) were analyzed using the Kaplan-Meier method and compared by the Log rank test. The Cox proportional hazards regression model was employed to control for confounders.
Results: There were 258 patients in the RH group and 182 patients in the SH group. The two groups had similar 5-year DFS rate (89.25% vs 91.14%, P=0.562) and 5-year OS rate (95.71% vs 94.76%, P=0.482). Multivariable analysis showed that simple hysterectomy was not independently associated with poorer DFS (aHR, 1.608; 95% CI, 0.640-4.041; P=0.312) and OS (aHR, 1.122; 95% CI, 0.319-3.493; P=0.858) than radical hysterectomy for women with stage IA2 cervical cancer.
Conclusion: For stage IA2 cervical cancer, a simple hysterectomy is safe and effective. Further studies are needed to confirm our findings.
abstract_id: PUBMED:34030221
Laterally extended parametrectomy. Objective: To describe the laterally extended parametrectomy (LEP) surgical technique, emphasizing the main challenges of the procedure.
Methods: LEP was designed as a more radical surgical procedure aiming to remove the entire parametrial tissue from the pelvic sidewall. Its initial indications were for lymph node positive Stage Ib (current International Federation of Gynecology and Obstetrics 2018 Stage IIIc) and Stage IIb cervical cancer. Currently, with most guidelines recommending definitive radiochemotherapy for these cases, initial LEP indications have become debatable. LEP is now mainly indicated for removing tumors involving the soft structures of the pelvic sidewall during a pelvic exenteration, aiming to obtain lateral free margins. This expands the lateral borders of the dissection to not only the medial surface of internal iliac vessels, but also to the true limits of the pelvic sidewall.
Results: During LEP, the parietal and visceral branches of the hypogastric vessels are divided at the entry and exit level of the pelvis. Consequently, the entire internal iliac system is excised, and no connective or lymphatic tissue remain on the pelvic sidewall. The main technical challenges of LEP are caused by the difficulty in ligating large caliber vessels (internal iliac artery and vein) and the variable anatomic distribution of pelvic sidewall veins.
Conclusion: LEP is a feasible technique for removing pelvic sidewall recurrences, aiming to obtain surgical free margins.
abstract_id: PUBMED:28254677
Robotic Radical Parametrectomy With Upper Vaginectomy and Pelvic Lymphadenectomy in Patients With Occult Cervical Carcinoma After Extrafascial Hysterectomy. Study Objective: To confirm the safety and feasibility outcomes of robotic radical parametrectomy and pelvic lymphadenectomy and compare the clinicopathological features of women requiring adjuvant treatment with the historical literature.
Design: Retrospective cohort study and review of literature (Canadian Task Force classification II-2).
Setting: Department of Obstetrics and Gynecology, University of North Carolina at Chapel Hill.
Patients: All patients who underwent robotic radical parametrectomy with upper vaginectomy (RRPV), and pelvic lymphadenectomy for occult cervical cancer discovered after an extrafascial hysterectomy at our institution between January 2007 and December 2015.
Interventions: RRPV and pelvic lymphadenectomy for occult cervical cancer discovered after an extrafascial hysterectomy. We also performed a review of the literature on radical parametrectomy after occult cervical carcinoma.
Measurements And Main Results: Seventeen patients with invasive carcinoma of the cervix discovered after extrafascial hysterectomy underwent RRPV with bilateral pelvic lymphadenectomy. There were 2 intraoperative complications, including 1 bowel injury and 1 bladder injury. One patient required a blood transfusion of 2 units. Three patients underwent adjuvant treatment with chemoradiation with radiation-sensitizing cisplatin. One of these patients had residual carcinoma on the upper vagina, 1 patient had positive parametria and pelvic nodes, and 1 patient had positive pelvic lymph nodes. No patients experienced recurrence, and 1 patient died from unknown causes at 59.4 months after surgery. We analyzed 15 studies reported in the literature and found 238 women who underwent radical parametrectomy; however, no specific preoperative pathological features predicted outcomes, the need for adjuvant treatment, or parametrial involvement.
Conclusion: RRPV is a feasible and safe treatment option. As reflected in the literature, RRPV can help avoid empiric adjuvant chemoradiation; however, no pathological features predict the need for adjuvant treatment after surgery.
abstract_id: PUBMED:27001869
Robotic-assisted radical parametrectomy in patients with malignant gynecological tumors. We describe the operative technique of robotic-assisted laparoscopic radical parametrectomy and analyze perioperative data including adequacy of resections, pathology, and complications in our initial cases. A retrospective study was performed of seven patients with gynecological cancers involving the cervix who had previously been treated with simple hysterectomies and then underwent robotic-assisted radical parametrectomies. Pathology from the initial hysterectomies and the radical parametrectomies was reviewed. Postoperative complications, operative times, estimated blood loss, and length of hospital stay were assessed. The upper part of the vagina, parametrial tissue, and bilateral pelvic lymph nodes of all seven patients who had undergone a previous simple hysterectomy were removed. The mean age was 56.4 (SD ± 10.7) years. Diagnoses from hysterectomy specimens were invasive squamous carcinoma (n = 4), endometrial adenocarcinoma (n = 2), and clear-cell papillary adenocystic cervical carcinoma (n = 1). The median number of lymph nodes removed was 8 (min 4, max 29), and one patient had nodal metastasis. The mean operative time was 228.6 (SD ± 38.9) min, estimated blood loss was 147 (SD ± 58.2) ml, and length of hospital stay was five (SD ± 2.3) days. One intraoperative complication (cystotomy) occurred and was successfully repaired. One postoperative fistula developed on postoperative day 10. This early experience demonstrates that the basic surgical and anatomical principles of radical parametrectomy can be applied to robotic-assisted laparoscopic surgery. Genitourinary fistulae are always a concern with this procedure, and minimization of electrocautery near the bladder and ureters may further reduce complications.
abstract_id: PUBMED:31324105
Individualization of surgical management of cervical cancer stages IA1, IA2. Objective: To evaluate the risk of involvement of sentinel lymph nodes in cervical cancer stage IA1 with lymphovascular space invasion and IA2 using the detection of sentinel lymph nodes.
Design: Original article.
Settings: Department of Gynecology and Obstetrics 3rd Faculty of Medicine, Charles University, Faculty Hospital Královské Vinohrady, Prague; Oncogynecological centrum; Department of Pathology 3rd Faculty of Medicine, Charles University, Faculty Hospital Kralovské Vinohrady, Prague.
Methods: The study included women from the prospective protocols LAP I and LAP II with cervical cancer stage IA1 with lymphovascular space invasion and stage IA2, treated from 2002 to 2018 and classified according to FIGO 2014 staging and TNM 8. Detection of sentinel lymph nodes throughout this period was performed using an ultra-short protocol with Tc and patent blue dye, and by histopathological examination.
Results: In the first group (28 women) with stage IA1 and lymphovascular space invasion diagnosed from cone biopsy there were two women with positive lymph nodes (7.1%). In the group stage IA2 (34 women) there were 13 women (38.2%) with positive lymphovascular space invasion and two women had positive lymph nodes (5.9%). The risk of positive lymph nodes for stage IA1 with lymphovascular space invasion and for stage IA2 is not statistically significant OR = 0.8125 (95% CI 0.1070-6.172).
Conclusion: The detection of sentinel lymph nodes helps to individualize the therapy of early-stage cervical cancer and to reduce the radicality of surgery. The risk of positive lymph nodes in stage IA1 with lymphovascular space invasion and stage IA2 with/without lymphovascular space invasion is the same. The results confirm that the detection of sentinel lymph nodes in stage IA1 with lymphovascular space invasion is fully indicated.
abstract_id: PUBMED:33419610
Voiding recovery after radical parametrectomy in cervical cancer patients: An international prospective multicentre trial - SENTIX. Objective: Voiding dysfunctions represent a leading morbidity after radical hysterectomy performed in patients with early-stage cervical cancer. The aim of this study was to perform ad hoc analysis of factors influencing voiding recovery in SENTIX (SENTinel lymph node biopsy in cervIX cancer) trial.
Methods: The SENTIX trial (47 sites, 18 countries) is a prospective study on sentinel lymph node biopsy without pelvic lymphadenectomy in patients with early-stage cervical cancer. Overall, the data of 300 patients were analysed. Voiding recovery was defined as the number of days from surgery to bladder catheter/epicystostomy removal or to post-voiding urine residuum ≤50 mL.
Results: The median voiding recovery time was three days (5th-95th percentile: 0-21): 235 (78.3%) patients recovered in <7 days and 293 (97.7%) in <30 days. Only seven (2.3%) patients recovered after >30 days. In the multivariate analysis, only previous pregnancy (p = 0.033) and type of parametrectomy (p < 0.001) significantly influenced voiding recovery >7 days post-surgery. Type-B parametrectomy was associated with a higher risk of delayed voiding recovery than type-C1 (OR = 4.69; p = 0.023 vs. OR = 3.62; p = 0.052, respectively), followed by type-C2 (OR = 5.84; p = 0.011). Both previous pregnancy and type C2 parametrectomy independently prolonged time to voiding recovery by two days.
Conclusions: Time to voiding recovery is significantly related to previous pregnancy and type of parametrectomy but it is not influenced by surgical approach (open vs minimally invasive), age, or BMI. Type B parametrectomy, without direct visualisation of nerves, was associated with longer recovery than nerve-sparing type C1. Importantly, voiding dysfunctions after radical surgery are temporary, and the majority of the patients recover in less than 30 days, including patients after C2 parametrectomy.
abstract_id: PUBMED:19258936
Frequency of pelvic lymph node metastases and parametrial involvement in stage IA2 cervical cancer: a population-based study and literature review. The frequency of lymph node metastases in stage IA2 cervical cancer is reported to range from 0% to 9.7%. Treatment recommendations vary likewise, from a cone biopsy to a Wertheim radical hysterectomy and pelvic lymph node dissection. The objective of this study was to gain insight into the true frequency of lymph node metastases and/or parametrial involvement in stage IA2 cervical cancer.
Methods: The hospital records of 48 patients with stage IA2 cervical carcinoma registered from 1994 to 2006 were reviewed, and a literature search was performed.
Results: Of 48 registered patients, 14 were confirmed to have stage IA2. No lymph node metastases, parametrial invasion, or recurrences were found. The collated literature data showed a risk of lymph node metastases of 4.8% (range, 0%-9.7%). The presence of adenocarcinoma and the absence of lymphovascular space invasion resulted in a low risk of lymph node metastases (0.3% and 1.3%, respectively). Parametrial involvement has not been reported.
Conclusions: The risk of lymph node metastases in these selected patients with stage IA2 cervical cancer is low. In patients with stage IA2 squamous cell cancer with lymphovascular space invasion, a standard pelvic lymph node dissection should be recommended. Parametrectomy should be included if the nodes are positive. In the other patients, the treatment can be individualized and does not have to include lymph node dissection or parametrectomy.
abstract_id: PUBMED:21860209
Robotic radical parametrectomy in cervical cancer. Radical parametrectomy is indicated in cases of undiagnosed early-stage invasive cervical carcinoma discovered after simple hysterectomy performed for a presumed benign disease process. This radical surgical procedure is rarely performed for benign disease; however, there are some benign conditions such as endometriosis or ovarian remnant syndrome which may require wide excision, including parametria. Traditionally, radical parametrectomy has been performed via laparotomy; however, a minimally invasive approach via laparoscopy has been reported to be feasible and safe. Here we describe the robotic surgical approach to radical parametrectomy.
Answer: Tailoring the parametrectomy in stages IA2-IB1 cervical carcinoma appears to be both feasible and safe according to the studies reviewed. A study evaluating the feasibility and safety of tailoring parametrectomy based on preoperatively assessed specific prognostic factors found that pre-treatment evaluation of adverse prognostic factors is feasible and mandatory to determine if a less radical surgery is applicable and safe. In this study, patients with node-negative cervical carcinoma FIGO IA2-IB1 underwent modified radical hysterectomy, while those with nodal metastases underwent classical radical hysterectomy. The 5-year overall survival was 95% for the group that received modified radical hysterectomy and 74% for the group that underwent classical radical hysterectomy, suggesting that less radical surgery can be safe and effective when appropriately selected (PUBMED:15721427).
Additionally, a retrospective cohort study comparing simple hysterectomy versus radical hysterectomy in stage IA2 cervical cancer found that the two groups had similar 5-year disease-free survival and overall survival rates. Multivariable analysis showed that simple hysterectomy was not independently associated with poorer outcomes than radical hysterectomy for women with stage IA2 cervical cancer, supporting the safety and effectiveness of less radical surgery in this stage (PUBMED:34675677).
Moreover, the literature suggests that the extent of parametrectomy needed to achieve adequate surgical radicality has been questioned, and less radical surgery has been shown to drastically reduce morbidity without jeopardizing oncological outcomes. Observational prospective studies and a phase II exploratory randomized controlled trial have confirmed the feasibility and safety of simple hysterectomy in early-stage low-risk cervical cancer patients (PUBMED:37763060).
In conclusion, the evidence indicates that tailoring the extent of parametrectomy in stages IA2-IB1 cervical carcinoma based on preoperative prognostic factors and tumor characteristics is a feasible and safe approach that can lead to reduced morbidity while maintaining oncological safety. |
Instruction: Is preoperative subclassification of type I choledochal cyst necessary?
Abstracts:
abstract_id: PUBMED:22563281
Is preoperative subclassification of type I choledochal cyst necessary? Objective: The aim of this study was to evaluate the frequency of postoperative biliary stricture and its risk factors in patients undergoing surgery for type I choledochal cyst.
Materials And Methods: A total of 35 patients with type I choledochal cyst underwent laparoscopic cyst excision and Roux-en-Y hepaticojejunostomy between August 2004 and August 2011. Their medical records and radiologic images (including endoscopic retrograde cholangiopancreatography, magnetic resonance cholangiopancreatography, pancreatobiliary computed tomography, or ultrasound) were retrospectively analyzed to evaluate the frequency of postoperative biliary stricture and its risk factors.
Results: Postoperative biliary stricture was found in 10 (28.6%) of 35 patients. It developed more frequently in patients with type Ia choledochal cyst (53.8%, 7 of 13 patients) than in patients with type Ic choledochal cyst (13.6%, 3 of 22 patients), which was statistically significant (p = 0.011). There were no significant associations between other factors and postoperative biliary stricture.
Conclusion: Type Ia is a risk factor of postoperative anastomotic stricture. Therefore, preoperative radiologic subclassification of type Ia and Ic may be useful in predicting postoperative outcomes of choledochal cysts.
abstract_id: PUBMED:10668086
Preoperative evaluation of choledochal cyst with MR cholangiopancreatography. The choledochal cyst is a rare congenital disorder usually diagnosed in childhood. It requires a complete surgical resection to prevent complications, particularly the risk of malignant change. At present, the preoperative examination requires direct opacification of the biliary tree, but this is an invasive technique with a high risk of infection, especially in pediatric patients.
Case Report: A choledochal cyst was diagnosed in a five-year-old girl with recurrent abdominal pain. The diagnosis was made by ultrasound, and preoperative evaluation was performed with magnetic resonance cholangiopancreatography using single-shot fast spin-echo sequences. A complete correlation was observed between the surgical findings, preoperative cholangiography, and MRCP data.
Conclusion: Recent improvements in MRCP techniques provide a complete anatomic analysis of choledochal cysts, enabling one to diagnose an anomalous junction of the pancreaticobiliary duct and even the presence of stones within the biliary tree. This short and noninvasive examination should in the future replace direct opacification of the biliary tree for the preoperative assessment of choledochal cysts.
abstract_id: PUBMED:24314174
Preoperative imaging does not predict intrahepatic involvement in choledochal cysts. Introduction: Choledochal cyst (CDC) is a congenital malformation of the bile ducts, which can include the intrahepatic or extrahepatic bile ducts. We hypothesize that preoperative intrahepatic ductal dilation is not predictive of postoperative intrahepatic involvement.
Methods: We retrospectively reviewed all cases of CDC in children diagnosed at a single institution between 1991 and 2013.
Results: Sixty-two patients were diagnosed with CDC during the study period with a median follow-up time of 2.25 (range 0-19.5) years. Forty-two patients (68%) were diagnosed with type I disease preoperatively, and 15 patients (24%) were diagnosed with type IV-A disease. The most common presenting symptoms included pain (34%), jaundice (28%), and pancreatitis (25%). There were no deaths or malignancies and only one postoperative stricture. Forty-two patients (68%) had intrahepatic ductal dilation preoperatively. Only four patients (9%) had intrahepatic ductal dilation following resection (P<0.0001). In one patient, this dilation resolved following stricture revision. Of the four patients with postoperative dilation, two were diagnosed with type I disease, and the other two were diagnosed with type IV-A disease preoperatively.
Conclusion: Preoperative intrahepatic ductal dilation is not predictive of postoperative intrahepatic ductal involvement in children with CDC. The preoperative distinction between type I and IV disease is not helpful in treating these patients.
abstract_id: PUBMED:21236453
Use of preoperative 3-dimensional magnetic resonance cholangiopancreatography in pediatric choledochal cysts. Background: Standard choledochal cyst (CC) operations involve dilated extrahepatic bile duct excision followed by bilioenterostomy. However, biliary variants and associated intrahepatic bile duct (IHBD) stenoses or dilatations triggering postoperative sequelae require additional procedures. The usefulness of preoperative 3-dimensional magnetic resonance cholangiopancreatography (3D MRCP) and virtual cholangioscopy (VES) for observing biliary morphology and pancreaticobiliary maljunction (PBM) was evaluated.
Methods: In 16 pediatric CC patients (age range, 4 months to 9 years; median, 3 years), visualization of PBM and aberrant bile duct anatomy and IHBD morphology at the hepatic hilum (HH), umbilical portion (UP), and posterior branch (POST) were compared between 3D-MRCP and intraoperative cholangiography (IOC). VES and intraoperative cholangioscopy (IOS) findings were compared.
Results: HH, UP, and POST visualization rates were 100%, 94%, and 94%, respectively, by 3D-MRCP, and 100%, 69%, and 69%, respectively, by IOC. IHBD stenosis detection rates at each region were 38%, 13%, and 13%, respectively, by 3D-MRCP, and 25%, 0%, and 9%, respectively, by IOC. IHBD dilatation detection rates at each part were 75%, 47%, and 60%, respectively, by 3D-MRCP, and 88%, 82%, and 91%, respectively, by IOC. PBM was confirmed in 56% and 93% of cases on 3D-MRCP and IOC, respectively. Both 3D-MRCP and IOC showed biliary variants in 5 cases (31%). VES showed membranous strictures at HH, UP, and POST in 6, 2, and 2 cases, respectively, whereas IOS did so at HH in 4 cases and POST in 2.
Conclusion: Preoperative 3D-MRCP and VES accurately depict biliary morphology, allowing concrete operative planning in pediatric CC patients, complementing IOC and IOS.
abstract_id: PUBMED:34434994
Risk factors for preoperative carcinogenesis of bile duct cysts in adults. Background: Bile duct cyst (BDC) is a rare congenital bile duct malformation. The incidence of bile duct malignancy in BDC patients is markedly higher than that in the general population. However, few studies have been conducted on the risk factors for preoperative carcinogenesis in BDC patients.
Aim: To analyze the risk factors associated with preoperative carcinogenesis in BDC patients.
Methods: The medical records of BDC patients treated at our hospital between January 2012 and December 2018 were retrospectively reviewed. We constructed a database and compared the characteristics of BDC patients with dysplasia and carcinoma against those with benign cysts. The risk factors for preoperative carcinogenesis were identified using univariate and multivariate analyses.
Results: The cohort comprised 109 BDC patients. Ten patients had preoperative dysplasia or adenocarcinoma. Univariate and multivariate analyses showed that gallbladder wall thickness > 0.3 cm [odds ratio (OR), 6.551; 95% confidence interval (CI), 1.351 to 31.763; P = 0.020] and Todani type IV (OR, 7.675; 95%CI, 1.584 to 37.192; P = 0.011) were independent factors associated with preoperative carcinogenesis.
Conclusion: BDC is a premalignant condition. Our findings show that gallbladder wall thickness > 0.3 cm and Todani type IV are independent risk factors for preoperative carcinogenesis of BDC. They are therefore useful for deciding on the appropriate treatment strategy, especially in asymptomatic patients.
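Illustrative aside: the abstract above reports odds ratios with 95% confidence intervals for the risk factors it identifies. The invented counts below show how a crude odds ratio and its Wald-type interval are obtained from a 2x2 table; the study's own estimates came from univariate and multivariate regression, not from this shortcut.
```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """2x2 table:        outcome+  outcome-
       exposed              a         b
       unexposed            c         d
    OR = (a*d)/(b*c); 95% CI via exp(log(OR) +/- z * sqrt(1/a + 1/b + 1/c + 1/d))."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts for a dichotomised risk factor (not the BDC cohort's data).
print(odds_ratio_ci(a=6, b=20, c=4, d=79))
```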
abstract_id: PUBMED:1360834
Magnetic resonance imaging in preoperative assessment of choledochal cysts. N/A
abstract_id: PUBMED:2643985
Preoperative ultrasound assessment of choledochal cysts. Report of three cases. Ultrasonography performed with modern equipment is capable of outlining the anatomy of the biliary and pancreatic ducts within the head of the pancreas. We present two typical cases of choledochal cysts where ultrasound provided complete anatomic information and a more complicated case which required more extensive radiology.
abstract_id: PUBMED:19098668
Percutaneous transhepatic cyst drainage as a "bridge procedure" to definitive treatment of perforated choledochal cysts: a case report. Perforated choledochal cysts are rare. Traditionally, they have been managed by primary T-tube choledochostomy and interval cystectomy with hepaticojejunostomy. We present here our recent experience with a case of perforated choledochal cyst, which was managed with preoperative percutaneous transhepatic cyst drainage, followed by definitive surgery for the choledochal cyst (cyst excision and reconstruction with Roux-en-Y hepaticojejunostomy). Preoperative percutaneous cyst drainage seems to be useful and effective in managing complicated choledochal cysts. The procedure buys time for the patient's general condition and laboratory findings to improve to the extent that general surgery can be performed.
abstract_id: PUBMED:12139902
Choledochal cyst associated with acute pancreatitis. Preoperative evaluation with magnetic resonance N/A
abstract_id: PUBMED:10631730
Preoperative imaging of a choledochal cyst in children: non-breath-holding magnetic resonance cholangiopancreatography. Magnetic resonance cholangiopancreatography (MRCP) was used to visualize the biliary tract in two children, aged 7 weeks and 10 years, with a choledochal cyst. MRCP was successful in both cases, and the findings were confirmed by intraoperative cholangiography.
Answer: The preoperative subclassification of type I choledochal cyst appears to be necessary based on the findings from the studies provided. Specifically, one study found that postoperative biliary stricture developed more frequently in patients with type Ia choledochal cyst (53.8%) than in patients with type Ic choledochal cyst (13.6%), which was statistically significant (p = 0.011) (PUBMED:22563281). This suggests that type Ia is a risk factor for postoperative anastomotic stricture, and therefore, preoperative radiologic subclassification of type Ia and Ic may be useful in predicting postoperative outcomes of choledochal cysts.
While other studies have emphasized the importance of preoperative imaging in the assessment of choledochal cysts, such as the use of magnetic resonance cholangiopancreatography (MRCP) to provide a complete anatomic analysis of choledochal cysts (PUBMED:10668086), and the use of 3D MRCP and virtual cholangioscopy for observing biliary morphology and pancreaticobiliary maljunction (PUBMED:21236453), they do not directly address the subclassification of type I choledochal cysts.
However, it is important to note that another study concluded that preoperative intrahepatic ductal dilation is not predictive of postoperative intrahepatic ductal involvement in children with choledochal cysts, and the preoperative distinction between type I and IV disease is not helpful in treating these patients (PUBMED:24314174). This suggests that while subclassification may be important for predicting certain postoperative outcomes, it may not be universally applicable for all aspects of choledochal cyst management.
In summary, the necessity of preoperative subclassification of type I choledochal cyst is supported by evidence that it can predict the risk of postoperative biliary stricture, which is a significant complication. However, the utility of subclassification may vary depending on the specific outcomes being considered. |
Instruction: Does the Swedish Interactive Threshold Algorithm (SITA) accurately map visual field loss attributed to vigabatrin?
Abstracts:
abstract_id: PUBMED:25539569
Does the Swedish Interactive Threshold Algorithm (SITA) accurately map visual field loss attributed to vigabatrin? Background: Vigabatrin (VGB) is an anti-epileptic medication which has been linked to peripheral constriction of the visual field. Documenting the natural history associated with continued VGB exposure is important when making decisions about the risk and benefits associated with the treatment. Due to its speed the Swedish Interactive Threshold Algorithm (SITA) has become the algorithm of choice when carrying out Full Threshold automated static perimetry. SITA uses prior distributions of normal and glaucomatous visual field behaviour to estimate threshold sensitivity. As the abnormal model is based on glaucomatous behaviour this algorithm has not been validated for VGB recipients. We aim to assess the clinical utility of the SITA algorithm for accurately mapping VGB attributed field loss.
Methods: The sample comprised one randomly selected eye of 16 patients diagnosed with epilepsy, exposed to VGB therapy. A clinical diagnosis of VGB attributed visual field loss was documented in 44% of the group. The mean age was 39.3 years ± 14.5 years and the mean deviation was -4.76 dB ±4.34 dB. Each patient was examined with the Full Threshold, SITA Standard and SITA Fast algorithm.
Results: SITA Standard was on average approximately twice as fast (7.6 minutes) and SITA Fast approximately three times as fast (4.7 minutes) as examinations completed using the Full Threshold algorithm (15.8 minutes). In the clinical environment, the visual field outcome with both SITA algorithms was equivalent to visual field examination using the Full Threshold algorithm in terms of visual inspection of the grey scale plots, defect area, and defect severity.
Conclusions: Our research shows that both SITA algorithms are able to accurately map visual field loss attributed to VGB. As patients diagnosed with epilepsy are often vulnerable to fatigue, the time saving offered by SITA Fast means that this algorithm has a significant advantage for use with VGB recipients.
abstract_id: PUBMED:10612345
Characteristics of a unique visual field defect attributed to vigabatrin. Purpose: Vigabatrin (VGB) therapy is associated with a loss of peripheral vision. The characteristics and prevalence of VGB-attributed visual field loss (V-AVFL) and associated risk factors were evaluated in patients with epilepsy.
Methods: The material comprised the visual fields and case notes of 88 patients with suspected V-AVFL (25 spontaneous reports and 63 cases from an open-label extension trial) and of 42 patients receiving alternative antiepileptic drugs (AEDs) from a cross-sectional study.
Results: Forty-two reliable cases of visual field loss could not be assigned to an alternative known cause and were therefore attributed to VGB (13 spontaneous reports and 29 from the open-label study). All cases except one were asymptomatic. Seven cases of field loss were present in the reference cohort of 42 patients; all cases could be attributed to a known aetiology. Thirty-six of the 42 confirmed cases of V-AVFL exhibited a bilateral defect that was most profound nasally, and three, a concentric constriction. The prevalence of V-AVFL was 29% (95% confidence interval, 21-39%). Male gender was associated with a 2.1-fold increased relative risk of V-AVFL (95% confidence interval, 1.20-4.6%). Age, body weight, duration of epilepsy, and daily dose of VGB, and concomitant AEDs did not predict the occurrence of V-AVFL.
Conclusions: The unique visual field defect attributed to VGB is profound in terms of the frequency of occurrence and the location and severity of loss. The asymptomatic nature of the field loss indicates that V-AVFL can be elicited only by visual field examination.
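Illustrative aside: the prevalence of 29% (95% CI 21-39%) reported above is a proportion with a binomial confidence interval. The sketch below computes a Wilson score interval on invented counts chosen only to land near that prevalence; the study's exact counts and interval method are not reproduced here.
```python
import math

def wilson_ci(k, n, z=1.96):
    """Wilson score 95% CI for a proportion k/n (one common choice of interval)."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return p, centre - half, centre + half

# Hypothetical: 42 affected out of 145 examined (counts chosen for illustration only).
prevalence, lower, upper = wilson_ci(42, 145)
print(f"prevalence {prevalence:.0%}, 95% CI {lower:.0%}-{upper:.0%}")
```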
abstract_id: PUBMED:15056465
Scotopic threshold response changes after vigabatrin therapy in a child without visual field defects: a new electroretinographic marker of early damage? Vigabatrin (VGB) has been widely used in patients affected by drug-resistant epilepsy and West syndrome. Following reports of visual field loss associated with vigabatrin therapy, some authors have investigated retinal electrophysiologic variables to identify early electrophysiologic markers and pathogenetic mechanisms of retinal damage. There are no previous reports of a scotopic threshold response (STR) reduction associated with vigabatrin therapy. A 13-year-old male child was submitted to a complete electroretinographic study before and after the start of vigabatrin therapy. Of the electroretinographic responses analyzed, only the scotopic threshold response was altered. The scotopic threshold response is a corneal-negative wave in the electroretinogram (ERG) of a fully dark-adapted eye. In cat, this response has been shown to be mediated by K+ spatial buffer currents that flow from proximal to distal retina in retinal glia as a result of elevated concentration of K+ in proximal retina following depolarization of local neurons in response to light onset. The prospective nature of the study in a previously untreated patient on vigabatrin monotherapy allows us to speculate on the underlying pathogenetic mechanisms and level of action of vigabatrin therapy-related retinal damage. If the predictive value of the scotopic threshold response changes is documented, this ERG response could be used to perform a preliminary evaluation of drugs, which modify gamma-aminobutyric acid (GABA) receptors and/or GABA levels.
abstract_id: PUBMED:11077455
Electro-oculography, electroretinography, visual evoked potentials, and multifocal electroretinography in patients with vigabatrin-attributed visual field constriction. Purpose: Symptomatic visual field constriction thought to be associated with vigabatrin has been reported. The current study investigated the visual fields and visual electrophysiology of eight patients with known vigabatrin-attributed visual field loss, three of whom were reported previously. Six of the patients were no longer receiving vigabatrin.
Methods: The central and peripheral fields were examined with the Humphrey Visual Field Analyzer. Full visual electrophysiology, including flash electroretinography (ERG), pattern electroretinography, multifocal ERG using the VERIS system, electro-oculography, and flash and pattern visual evoked potentials, was undertaken.
Results: Seven patients showed marked visual field constriction with some sparing of the temporal visual field. The eighth exhibited concentric constriction. Most electrophysiological responses were usually just within normal limits; two patients had subnormal Arden electro-oculography indices; and one patient showed an abnormally delayed photopic b wave. However, five patients showed delayed 30-Hz flicker b waves, and seven patients showed delayed oscillatory potentials. Multifocal ERG showed abnormalities that sometimes correlated with the visual field appearance and confirmed that the deficit occurs at the retinal level.
Conclusion: Marked visual field constriction appears to be associated with vigabatrin therapy. The field defects and some electrophysiological abnormalities persist when vigabatrin therapy is withdrawn.
abstract_id: PUBMED:24698019
Identification of functional visual field loss by automated static perimetry. Purpose: Diagnosis of functional visual field loss, that is, field loss lacking objective corollaries, has long relied on kinetic visual field examinations using tangent screens or manual perimeters. The modern dominance of automated static perimeters requires the formulation of new diagnostic criteria.
Methods: Retrospective review of automated perimetry records from 36 subjects meeting clinical and tangent screen criteria for functional visual field loss. Thirty-three normal eyes and 57 eyes with true lesions, including optic nerve compression, glaucoma, anterior ischaemic optic neuropathy and vigabatrin toxicity, served as controls.
Results: Standard automated perimetry statistics were unable to reliably discriminate organic versus non-organic visual field loss. Subjective evaluation of perimetric maps indicated that functional fields generally could be identified by the presence of severe and irregular contractions and depressions that did not conform to the visual system's neuro-architecture. Further, functional fields generally presented one or more isolated threshold 'spikes', that is, isolated locations showing much better than average sensitivity. On repeated examinations, threshold spikes always changed locations. Visual evaluation for spikes proved superior to an objective computational algorithm. Fairly reliable objective discrimination of functional fields could be achieved by point-wise correlations of repeated examinations: median intertest correlation coefficients equalled 0.47 compared with 0.81 for true lesions.
Conclusion: Functional visual loss can be identified by automated static perimetry. Useful criteria include severe and irregular contractions and depressions, the presence of isolated threshold spikes and poor intertest correlations.
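Illustrative aside: the point-wise intertest correlation used above to separate functional from organic field loss is an ordinary Pearson correlation across matched test locations on repeated examinations. A minimal sketch on assumed sensitivity values follows; the locations and decibel values are invented.
```python
import numpy as np

def pointwise_intertest_r(exam1: np.ndarray, exam2: np.ndarray) -> float:
    """Pearson correlation between point-wise sensitivities (dB) of two examinations
    of the same eye; low values suggest poor test-retest reproducibility."""
    return float(np.corrcoef(exam1, exam2)[0, 1])

# Hypothetical sensitivities (dB) at 10 matched test locations on two visits.
visit1 = np.array([28, 30, 25, 27, 31, 22, 29, 26, 24, 30], dtype=float)
visit2 = np.array([27, 29, 26, 28, 30, 23, 28, 25, 25, 31], dtype=float)
print(f"intertest r = {pointwise_intertest_r(visit1, visit2):.2f}")
```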
abstract_id: PUBMED:19682035
Is the risk of vigabatrin-attributed visual field loss lower in infancy than in later life? N/A
abstract_id: PUBMED:19168223
Nasal retinal nerve fiber layer attenuation: a biomarker for vigabatrin toxicity. Purpose: To investigate whether nasal peripapillary retinal nerve fiber layer (RNFL) attenuation is associated with visual field loss attributed to the anti-epileptic drug vigabatrin.
Design: Prospective cross-sectional observational study.
Participants: Twenty-seven individuals with focal-onset epilepsy exposed to vigabatrin and 13 individuals with focal-onset epilepsy exposed to non-GABAergic anti-epileptic drug monotherapy.
Methods: At one visit, suprathreshold perimetry of the central and peripheral field (3-zone, age-corrected Full Field 135 Screening Test) and threshold perimetry of the central field (Program 30-2 and the FASTPAC strategy) were undertaken using the Humphrey Field Analyzer (Carl Zeiss Meditech, Dublin, CA). At a second visit, ocular coherence tomography was undertaken for the right eye using the 3.4 RNFL thickness protocol of the StratusOCT (Carl Zeiss Meditech).
Main Outcome Measures: The magnitude, for each individual, of the RNFL thickness, averaged across the 4 oblique quadrants, and for each separate quadrant.
Results: Of the 27 individuals exposed to vigabatrin, 11 (group I) exhibited vigabatrin-attributed visual field loss, 15 exhibited a normal field, and 1 exhibited a homonymous quadrantanopia (group II). All 13 individuals exposed to non-GABAergic therapy had normal fields (group III). All individuals in group I exhibited abnormal average and nasal quadrant RNFL thicknesses in the presence of a normal temporal quadrant thickness. Most also exhibited additional RNFL attenuation in either the superior or inferior quadrant, or both. Four individuals in group II exhibited an identical pattern of RNFL attenuation suggesting that nasal RNFL thinning is a more sensitive marker for vigabatrin toxicity than visual field loss. None of the 13 individuals in group III exhibited nasal quadrant RNFL attenuation.
Conclusions: Vigabatrin-attributed visual field loss is associated with a characteristic pattern of RNFL attenuation: nasal quadrant thinning and normal temporal quadrant thickness with, or without, superior or inferior quadrant involvement. Nasal attenuation may precede visual field loss. Ocular coherence tomography of the peripapillary RNFL should be considered in patients previously exposed to vigabatrin. It should also be considered at baseline and follow-up in those commencing vigabatrin for treatment of epilepsy or in trials for anti-addiction therapy. The pattern of RNFL thinning seems to be a useful biomarker to identify vigabatrin toxicity.
abstract_id: PUBMED:18184224
Visual field severity indices demonstrate dose-dependent visual loss from vigabatrin therapy. Purpose: The aims of this study were to develop an algorithm to accurately quantify Vigabatrin (VGB)-induced central visual field loss and to investigate the relationship between visual field loss and maximum daily dose, cumulative dose and duration of dose.
Methods: The sample comprised 31 patients (mean age 37.9 years; SD 14.4 years) diagnosed with epilepsy and exposed to VGB. Each participant underwent standard automated static visual field examination of the central visual field. Central visual field loss was determined using continuous scales quantifying severity in terms of area and depth of defect and additionally by symmetry of defect between the two eyes. A simultaneous multiple regression model was used to explore the relationship between these visual field parameters and the drug predictor variables.
Results: The regression model indicated that maximum VGB dose was the only factor to be significantly correlated with individual eye severity (right eye: p = 0.020; left eye: p = 0.012) and symmetry of visual field defect (p = 0.024).
Conclusions: Maximum daily dose was the single most reliable indicator of those patients likely to exhibit visual field defects due to VGB. These findings suggest that high maximum dose is more likely to result in visual field defects than high cumulative doses or those of long duration.
abstract_id: PUBMED:10932265
Separating the retinal electrophysiologic effects of vigabatrin: treatment versus field loss. Objective: To separate the retinal electrophysiologic markers associated with vigabatrin-attributed visual field loss (VGB-VFL) from those associated with current vigabatrin therapy.
Methods: A nonrandomly selected cohort of 8 previous and 18 current vigabatrin users and a reference cohort of 8 never vigabatrin-treated patients with epilepsy receiving other antiepilepsy drugs (AED) underwent electro-oculography (EOG), electroretinography (ERG), and automated static threshold perimetry. A cohort of 22 normal subjects underwent ERG. The validity of the retinal electrophysiologic variables to detect the presence and severity of VGB-VFL was assessed using receiver operator characteristic curves.
Results: Of 26 patients exposed to vigabatrin, 18 exhibited VGB-VFL. No patients receiving alternative AED showed this type of visual field abnormality. The presence and severity of VGB-VFL was significantly associated with the latency (implicit time) and amplitude of the ERG cone function. The amplitude of the cone flicker response was the strongest predictor of VGB-VFL and revealed a sensitivity of 100% at a specificity of 75%. The EOG, the photopic and scotopic ERG, and the latency of the ERG second oscillatory potential (OP2) were not significantly related to the presence of VGB-VFL. Vigabatrin therapy was significantly associated with the photopic amplitude, the scotopic a-wave latency, and the latency of OP2.
Conclusion: In patients who cannot perform reliable perimetry, the cone-specific ERG flicker amplitude provides the best screening method for detecting VGB-VFL.
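Illustrative aside: the reported 100% sensitivity at 75% specificity for the cone flicker amplitude comes from receiver operating characteristic analysis. The sketch below shows how sensitivity and specificity follow from a single amplitude cut-off on hypothetical data; the amplitudes, threshold, and units are assumptions, not the study's measurements.
```python
import numpy as np

def sens_spec(values: np.ndarray, has_condition: np.ndarray, threshold: float):
    """Sensitivity and specificity of the rule 'value below threshold = test positive'."""
    positive = values < threshold
    tp = int(np.sum(positive & has_condition))
    fn = int(np.sum(~positive & has_condition))
    tn = int(np.sum(~positive & ~has_condition))
    fp = int(np.sum(positive & ~has_condition))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical flicker amplitudes (microvolts) and field-loss status (True = VGB-VFL).
amp = np.array([38, 41, 45, 47, 52, 55, 60, 63, 68, 72, 75, 80], dtype=float)
vfl = np.array([1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0], dtype=bool)
sensitivity, specificity = sens_spec(amp, vfl, threshold=54)
print(f"sensitivity {sensitivity:.0%}, specificity {specificity:.0%}")
```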
abstract_id: PUBMED:16891821
Vigabatrin and visual field defects in pediatric epilepsy patients. We studied the prevalence, type and severity of vigabatrin (VGB)-attributed visual field defects (VFDs), and used these data to assess the associated risk factors in pediatric patients. Medical records were retrospectively reviewed for 67 pediatric patients who received VGB alone or in combination with other antiepileptic drugs, and who had undergone visual field examinations using a Humphrey visual field analyzer. Of the 67 patients, 15 had VGB-attributed VFDs: 13 had nasal arcuate type, 1 had nasal and temporal constricted type and 1 had nasal constricted type. In terms of severity, 7 patients had Grade I VGB-attributed VFDs, 5 had Grade II, 2 had Grade III, and 1 had Grade IV. Although there were no significant differences between the VFD and non-VFD groups with regards to all tested parameters, there were no cases of VGB-attributed VFDs in patients with total treatment durations <2 yr and cumulative doses <10 g/kg. In conclusion, the prevalence of VGB-attributed VFDs in VGB-treated pediatric epilepsy patients was 22%. The high frequency of VGB-attributed VFDs indicates that physicians should inform all patients of this risk prior to VGB treatment and perform periodic visual field examinations.
Answer: Yes, the Swedish Interactive Threshold Algorithm (SITA) accurately maps visual field loss attributed to vigabatrin. A study aimed to assess the clinical utility of the SITA algorithm for accurately mapping vigabatrin-attributed field loss found that both SITA Standard and SITA Fast algorithms were able to accurately map visual field loss attributed to vigabatrin. The visual field outcomes with both SITA algorithms were equivalent to visual field examination using the Full Threshold algorithm in terms of visual inspection of the grey scale plots, defect area, and defect severity. Additionally, SITA Standard was approximately twice as fast, and SITA Fast approximately three times as fast, as examinations completed using the Full Threshold algorithm. This time-saving aspect is particularly beneficial for patients diagnosed with epilepsy who are often vulnerable to fatigue (PUBMED:25539569). |
Instruction: Does being informed and feeling informed affect patients' trust in their radiation oncologist?
Abstracts:
abstract_id: PUBMED:22694893
Does being informed and feeling informed affect patients' trust in their radiation oncologist? Objective: We investigated whether the content of information provided by radiation oncologists and their information giving performance increase patients' trust in them.
Methods: Questionnaires were used to assess radiotherapy patients (n=111) characteristics before their first consultation, perception of information giving after the first consultation and trust before the follow-up consultation. Videotaped consultations were scored for the content of the information provided and information giving performance.
Results: Patients' mean trust score was 4.5 (sd=0.77). The more anxious patients were, the less they tended to fully trust their radiation oncologist (p=0.03). Patients' age, gender, educational attainment and anxious disposition together explained 7%; radiation oncologists' information giving (content and performance) explained 3%, and patients' perception of radiation oncologists' information-giving explained an additional 4% of the variance in trust scores.
Conclusion: It can be questioned whether trust is a sensitive patient reported outcome of quality of communication in highly vulnerable patients.
Practice Implications: It is important to note that trust may not be a good patient reported outcome of quality of care. Concerning radiation oncologists' information giving performance, our data suggest that they can particularly improve their assessments of patients' understanding.
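The incremental variance figures reported in PUBMED:22694893 (7% from patient characteristics, 3% from observed information giving, a further 4% from perceived information giving) are the kind of output produced by hierarchical regression, in which predictor blocks are entered stepwise and the change in R² is recorded at each step. The abstract does not spell out the exact model, so the sketch below is only an illustration of that general procedure on synthetic data; the variable names and values are assumptions, not the study's dataset.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 111  # matches the study's sample size; the data themselves are synthetic

# Block 1: patient characteristics (age, gender, education, anxious disposition)
block1 = rng.normal(size=(n, 4))
# Block 2: observed information giving (content, performance)
block2 = rng.normal(size=(n, 2))
# Block 3: patients' perception of information giving
block3 = rng.normal(size=(n, 1))
trust = rng.normal(loc=4.5, scale=0.77, size=n)  # outcome: trust score

def r_squared(X, y):
    return LinearRegression().fit(X, y).score(X, y)

r2_1 = r_squared(block1, trust)
r2_2 = r_squared(np.hstack([block1, block2]), trust)
r2_3 = r_squared(np.hstack([block1, block2, block3]), trust)

print(f"Block 1 R^2: {r2_1:.3f}")
print(f"Delta R^2 after block 2: {r2_2 - r2_1:.3f}")
print(f"Delta R^2 after block 3: {r2_3 - r2_2:.3f}")
```

With the study's real data, the three increments would correspond to the reported 7%, 3%, and 4%.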
abstract_id: PUBMED:23858636
Dentistry and healthcare legislation 3: informed consent. The relationship between a dentist and his patient is based on trust. The principle of informed consent contributes to the quality of that relationship of trust. According to the professional standards for such a relationship, it is up to the dentist to make sure that the patient is well informed. Reliable information is necessary if the patient is to be in a position to give his or her consent for treatment. The Dutch Law of Agreement to Medical Treatment (WGBO) provides a framework for informed consent. Disciplinary judges establish the scope and, if necessary, the limits. It is clear that, among other things, not defining the risks beforehand can be the basis for a (disciplinary) complaint. Determining the requirements of informed consent calls for familiarity with the law and communication skills. Programmes in dental education ought to devote more attention to this issue.
abstract_id: PUBMED:25180355
Using informed consent to save trust. Increasingly, bioethicists defend informed consent as a safeguard for trust in caretakers and medical institutions. This paper discusses an 'ideal type' of that move. What I call the trust-promotion argument for informed consent states: 1. Social trust, especially trust in caretakers and medical institutions, is necessary so that, for example, people seek medical advice, comply with it, and participate in medical research. 2. Therefore, it is usually wrong to jeopardise that trust. 3. Coercion, deception, manipulation and other violations of standard informed consent requirements seriously jeopardise that trust. 4. Thus, standard informed consent requirements are justified. This article describes the initial promise of this argument, then identifies challenges to it. As I show, the value of trust fails to account for some common sense intuitions about informed consent. We should revise the argument, common sense morality, or both.
abstract_id: PUBMED:18316464
Beyond informed consent: the therapeutic misconception and trust. The therapeutic misconception has been seen as presenting an ethical problem because failure to distinguish the aims of research participation from those receiving ordinary treatment may seriously undermine the informed consent of research subjects. Hence, most theoretical and empirical work on the problems of the therapeutic misconception has been directed to evaluate whether, and to what degree, this confusion invalidates the consent of subjects. We argue here that this focus on the understanding component of informed consent, while important, might be too narrow to capture the ethical complexity of the therapeutic misconception. We show that concerns about misplaced trust and exploitation of such trust are also relevant, and ought to be taken into account, when considering why the therapeutic misconception matters ethically.
abstract_id: PUBMED:38345128
Effect of enhanced informed consent on veteran hesitancy to disclose suicidal ideation and related risk factors. Introduction: The concealment of suicidal ideation (SI) constitutes a significant barrier to reducing veteran deaths by suicide and is associated with fear of negative consequences (e.g., involuntary hospitalization). This study examined whether augmenting informed consent with psychoeducation aimed to help patients achieve a more realistic risk appraisal of consequences associated with disclosure of SI, decreased hesitancy to disclose SI, and related risk behaviors among U.S. veterans.
Method: Participants (N = 133) were recruited from combat veteran social media groups and were randomly assigned to a video simulated treatment-as-usual informed consent (control) or to one of two psychoeducation-enhanced informed consent conditions (psychoed, psychoed + trust).
Results: Compared with the control group, participants in both psychoeducation and enhanced informed consent conditions reported lower hesitancy to disclose SI, firearm access, and problems with drugs/thoughts of harming others, as well as greater trust and respect for the simulated clinician.
Conclusions: These findings suggest that brief psychoeducation regarding common factors that affect hesitancy to disclose SI may be beneficial for increasing trust in providers during the informed consent process and decreasing concealment of SI and firearm access among veterans.
abstract_id: PUBMED:10918907
'Informed consent' and prerandomization. The usual procedure in randomised controlled trials is to obtain informed consent first, after which participants can be randomised. The reversal of the order, first randomisation and then informed consent, is called pre-randomisation (Zelen design). In the Netherlands, there is discussion as to whether pre-randomisation should be allowed in medical research. Full informed consent regarding the design of the investigation may lead to unwanted loss of distinction between the experimental and control groups, thus reducing the internal validity of the investigation. A possible solution could be to include, in the informed consent procedure, the statement that certain information has been withheld because revealing it now would make the investigation useless, but that it will be revealed to all participants afterwards and that the study design was approved by the medical ethics committee. In this way, the advantage of the enhanced internal validity of the pre-randomisation design is retained while simultaneously keeping intact the sequence of first informed consent and then randomisation.
abstract_id: PUBMED:20881156
How does feeling informed relate to being informed? The DECISIONS survey. Background: An important part of delivering high-quality, patient-centered care is making sure patients are informed about decisions regarding their health care. The objective was to examine whether patients' perceptions about how informed they were about common medical decisions are related to their ability to answer various knowledge questions.
Methods: A cross-sectional survey was conducted November 2006 to May 2007 of a national sample of US adults identified by random-digit dialing. Participants were 2575 English-speaking US adults aged 40 and older who had made 1 of 9 medication, cancer screening, or elective surgery decisions within the previous 2 years. Participants rated how informed they felt on a scale of 0 (not at all informed) to 10 (extremely well-informed), answered decision-specific knowledge questions, and completed standard demographic questions.
Results: Overall, 36% felt extremely well informed (10), 30% felt well informed (8-9), and 33% felt not at all to somewhat informed (0-7). Multivariate logistic regression analyses showed no overall relationship between knowledge scores and perceptions of being extremely well informed (odds ratio [OR] = 0.94, 95% confidence interval [CI] 0.63-1.42, P = 0.78). Three patterns emerged for decision types: a negative relationship for cancer screening decisions (OR = 0.58, CI 0.33-1.02, P = 0.06), no relationship for medication decisions (OR = 0.99, CI 0.54-1.83, P = 0.98), and a positive relationship for surgery decisions (OR = 3.07, 95% CI 0.90-10.54, P = 0.07). Trust in the doctor was associated with feeling extremely well-informed for all 3 types of decisions. Lower education and lower income were also associated with feeling extremely well informed for medication and screening decisions. Retrospective survey data are subject to recall bias, and participants may have had different perspectives or more factual knowledge closer to the time of the decision.
Conclusions: Patients facing common medical decisions are not able to accurately assess how well informed they are. Clinicians need to be proactive in providing adequate information to patients and testing patients' understanding to ensure informed decisions.
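The odds ratios quoted in PUBMED:20881156 come from multivariate logistic regression with "feeling extremely well informed" as the binary outcome; an odds ratio is simply the exponentiated regression coefficient. The sketch below shows that computation on synthetic data; the predictors and effect sizes are illustrative assumptions, not the survey's records or results.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2575  # survey sample size; the records below are synthetic
knowledge = rng.normal(size=n)             # standardized knowledge score
trust_doctor = rng.integers(0, 2, size=n)  # trusts the doctor (1 = yes)

# Simulate the outcome so that trust, not knowledge, drives feeling informed.
logit = -0.5 + 0.0 * knowledge + 0.9 * trust_doctor
felt_informed = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([knowledge, trust_doctor])
model = LogisticRegression().fit(X, felt_informed)
odds_ratios = np.exp(model.coef_[0])
print(f"OR (knowledge):       {odds_ratios[0]:.2f}")
print(f"OR (trust in doctor): {odds_ratios[1]:.2f}")
```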
abstract_id: PUBMED:35882376
Informed consent: who are we informing? Communication is the foundation of informed consent in research. This article relays the reflections of an American urogynecology fellow and researcher in Kenya on the topic of informed consent. After learning of how a previous foreign researcher's presence in the community had violated the trust that women placed in women's health research, she reflects on how the standard eurocentric approach to obtaining written informed consent in research may sow breakdowns in communication and also perpetuate distrust in research. Particularly for settings in which the language is primarily spoken, or where there are varying literacy levels, the standard research consent should be reimagined to make the informed consent process more equitable and less of an exercise in documentation. Communication of research study information to patients must take into account the diverse and evolving ways in which patients best consume information, and in such a way that it ultimately enhances their autonomy.
abstract_id: PUBMED:19381720
Responding to trust: surgeons' perspective on informed consent. Background: Every day thousands of surgeons and patients negotiate their way through the complex process of decision-making about operative treatments. We conducted a series of qualitative studies, asking patients and surgeons to describe their experience and beliefs about informed decision-making and consent. This study focuses on surgeons' views.
Methods: Open-ended interviews and focus group discussions were conducted with thoracic surgeons who treated esophageal cancer patients by esophagectomy, and general surgeons who routinely performed laparoscopic cholecystectomy. Their views were analyzed using a qualitative approach, grounded in the perspectives of the participants.
Results: Five dominant themes emerged from the analysis: (1) making informed decisions; (2) communicating information and confidence; (3) managing expectations and fears; (4) consent as a decision to trust; (5) commitment inspired by trust. These themes are illustrated by verbatim quotes from the surgeon interviews.
Conclusions: Surgeons carefully assess the risks and benefits of treatment before consenting to perform operative interventions. They are influenced by objective findings and by affective factors such as courage and the determination to survive expressed by their patients. They manage risks, doubts, and fears-both their patients' and their own-relying on trust and commitment on both sides to ensure the success of the surgical mission. The trust of their patients has a strong influence on the surgeons' decisions and actions.
abstract_id: PUBMED:34357789
Does radiology require informed consent for radiation risk? Recent trends in medical decision-making have moved from paternalistic doctor-patient relations to shared decision-making. Informed consent is fundamental to this process and to ensuring patients' ongoing trust in the health-care profession. It cannot be assumed that patients consent to the risk associated with medical exposures, unless they have been provided with the information to make that decision. This position is supported by both the legal and ethical framework around Radiation Protection detailed in this commentary.
Answer: Yes, being informed and feeling informed can affect patients' trust in their radiation oncologist. A study that assessed radiotherapy patients' trust in their radiation oncologist found that the content of information provided by radiation oncologists and their information-giving performance could explain a small percentage of the variance in trust scores. Specifically, patients' perception of radiation oncologists' information-giving explained an additional 4% of the variance in trust scores, on top of the 7% explained by patients' characteristics and the 3% by radiation oncologists' information giving (content and performance) (PUBMED:22694893). This suggests that both the actual information provided and the patients' subjective perception of how well they were informed play roles in establishing trust. However, the study also noted that trust may not be a sensitive patient-reported outcome of the quality of communication, especially in highly vulnerable patients, indicating that the relationship between information and trust is complex (PUBMED:22694893).
Moreover, trust in the doctor was associated with feeling extremely well-informed across different types of medical decisions, including medication, cancer screening, or elective surgery decisions (PUBMED:20881156). This further supports the notion that patients' feelings of being informed are linked to their trust in their healthcare providers.
In the context of radiation oncology, it is important for radiation oncologists to not only provide comprehensive information but also to assess patients' understanding and ensure that they feel well-informed, as this can contribute to building and maintaining trust in the therapeutic relationship (PUBMED:22694893).
Instruction: High Discrepancy in Abdominal Obesity Prevalence According to Different Waist Circumference Cut-Offs and Measurement Methods in Children: Need for Age-Risk-Weighted Standardized Cut-Offs?
Abstracts:
abstract_id: PUBMED:26745148
High Discrepancy in Abdominal Obesity Prevalence According to Different Waist Circumference Cut-Offs and Measurement Methods in Children: Need for Age-Risk-Weighted Standardized Cut-Offs? Background: Waist circumference (WC) is a good proxy measure of central adiposity. Due to the multiplicity of existing WC cut-offs and different measurement methods, the decision to use one rather than another WC chart may lead to different prevalence estimates of abdominal obesity in the same population. Aim of our study was to assess how much the prevalence of abdominal obesity varies in Italian schoolchildren using the different available WC cut-offs.
Methods: We measured WC at just above the uppermost lateral border of the right ilium in 1062 Italian schoolchildren aged 7-14 years, 499 living in Northern Italy and 563 in Southern Italy. Abdominal obesity was defined as WC ≥90th percentile for gender and age according to nine WC charts.
Results: We found an extremely high variability in the prevalence of abdominal obesity detected in our study-populations according to the different WC charts, ranging in the overall group from 9.1% to 61.4%. In Northern Italy children it varied from 2.4% to 35.7%, and in Southern ones from 15.1% to 84.2%.
Conclusions: On the basis of the chosen WC cut-offs the prevalence of abdominal obesity varies widely, because percentile charts are strongly influenced by the population's status at a particular moment. A further source of variability may lie in the site of WC measurement and in the statistical method used to calculate WC cut-offs. Risk-weighted WC cut-offs measured at a standardized anatomic site and calculated by the appropriate method are needed to simply identify by WC measurement those children at high risk of cardio-metabolic complications to whom specific and prompt health interventions should be addressed.
abstract_id: PUBMED:24827713
Effectiveness of different waist circumference cut-off values in predicting metabolic syndrome prevalence and risk factors in adults in China. Objective: To study the effectiveness of waist circumference cut-off values in predicting the prevalence of metabolic syndrome (MetS) and risk factors in adults in China.
Methods: A cross-sectional survey was conducted in 14 provinces (autonomous region, municipality) in China. A total of 47,325 adults aged ≥20 years were selected by multistage stratified sampling, and questionnaire survey and physical and clinical examination were conducted among them. MetS was defined according to the International Diabetes Federation (IDF) criteria and modified IDF criteria.
Results: The age-standardized prevalence of MetS was 24.2% (22.1% in men and 25.8% in women) and 19.5% (22.1% in men and 18.0% in women) according to the IDF criteria and modified IDF criteria respectively. The age-standardized prevalence of pre-MetS was 8.1% (8.6% in men and 7.8% in women) according to the modified IDF criteria. The prevalence of MetS was higher in urban residents than rural residents and in northern China residents than in southern China residents. The prevalence of central obesity was about 30% in both men and women according to the ethnicity-specific cut-off values of waist circumference for central obesity (90 cm for men and 85 cm for women). Multivariate regression analysis revealed no significant difference in risk factors between the two MetS definitions.
Conclusion: Using both the modified IDF criteria and ethnicity-specific cut-off values of waist circumference can provide more useful information about the prevalence of MetS in China.
abstract_id: PUBMED:19713182
Prevalence and risk factors with overweight and obesity among Vietnamese adults: Caucasian and Asian cut-offs. Objective: To determine the prevalence and factors associated with overweight/obesity among adults in Ho Chi Minh City (HCMC) using Caucasian and Asian cut-offs.
Study Design: A cross-sectional survey.
Methods: In 2005, 1,971 adults aged 25-64 years in HCMC were randomly selected using a proportional to population size sampling method to estimate the prevalence of overweight and obesity, measured by body mass index (BMI) and waist circumference. Multivariable logistic models were used to examine associations between overweight/obesity and socioeconomic status, health-related behaviors, and biochemical indices of chronic disease risk.
Results: The prevalence of overweight and obesity using the Caucasian BMI cut-offs were 13.9% and 1.8% respectively, and those with the Asian BMI cut-offs were 27.5% and 5.7%, respectively. The abdominal adiposity rates were higher than the BMI overweight and obesity rates in women, but not in men. Increasing age, low education, high household wealth index, high levels of sitting and reclining time, cholesterol and high blood pressure were significantly associated with overweight and obesity. Current smoking and sedentary leisure time were significantly negatively associated with this status in men.
Conclusion: Associations between overweight/obesity and metabolic disorders were evident using both cut-offs. Asian cut-offs identified more risk factors and therefore could be considered for defining at-risk groups. The results highlight the importance of intervention programs to prevent overweight/obesity in young adults.
abstract_id: PUBMED:24661751
Determination of most suitable cut off point of waist circumference for diagnosis of metabolic syndrome in Kerman. Objectives: Metabolic syndrome is a determining indicator of cardiovascular diseases and diabetes. Abdominal obesity, determined by measuring waist circumference, is one of the most important criteria for diagnosing this syndrome. This criterion varies between men and women and among different races. The present study aims at the assessment of the sensitivity and specificity of the commonly used cut off point of waist circumference, and the estimation of the most suitable cut off point of waist circumference for the diagnosis of metabolic syndrome in the urban society of Kerman.
Methods: 5332 subjects consisting of 2966 women and 2366 men, 20 years old and above were studied in a population based, cross sectional study. Waist circumference, blood pressure, blood sugar, and blood lipids were measured. People with at least two of the NCEP ATP III criteria - high blood pressure (BP>130/80), high triglycerides (TG>150), high glucose (FBG>100) and low HDL (HDL<40 in men and <50 in women) - were taken as population at risk. ROC analysis was used for determining the most suitable cut off point of waist circumference. The prevalence of metabolic syndrome was then assessed based on IDF, NCEP criteria and the proposed criterion, and agreement among the three methods in diagnosing people suffering from metabolic syndrome was examined.
Results: The average±standard deviation of waist circumference in women and in men was 83.90±12.55 and 87.99±11.94 cm respectively. The most suitable cut off point of waist circumference for metabolic syndrome diagnosis was 86 in women and 89 in men. These circumferences had the highest specificity and sensitivity. The prevalence of metabolic syndrome in IDF, NCEP, and the proposed criterion was 30.4%, 27.7%, and 35.2% respectively. The new criterion and the NCEP criterion achieved the highest agreement (kappa factor=83%).
Conclusion: The cut off points of waist circumference in men and women are close. It is possible, then, to determine a common cut off point of waist circumference for both in Iran. Therefore, the cut off point of 90 cm of waist circumference proposed by the National Obesity Committee seems to be appropriate for the Iranian society. These clinical findings should nevertheless be verified by simulation.
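Several of these abstracts (PUBMED:24661751, PUBMED:18706730, PUBMED:32062911) select the waist circumference cut-off that maximizes sensitivity plus specificity, which is equivalent to maximizing Youden's index J = sensitivity + specificity - 1 along an ROC curve. The sketch below shows that selection on synthetic data; the simulated distribution and the threshold it yields are illustrative only, not any study's measurements.

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(2)
n = 5000
# Synthetic waist circumference (cm) and a binary "at risk" label
# (two or more metabolic risk factors); larger WC raises the risk probability.
wc = rng.normal(86, 12, size=n)
p_risk = 1 / (1 + np.exp(-(wc - 88) / 6))
at_risk = (rng.random(n) < p_risk).astype(int)

fpr, tpr, thresholds = roc_curve(at_risk, wc)
youden = tpr - fpr                      # sensitivity + specificity - 1
best = np.argmax(youden)
print(f"Optimal WC cut-off: {thresholds[best]:.1f} cm")
print(f"Sensitivity: {tpr[best]:.2f}, Specificity: {1 - fpr[best]:.2f}")
```

As PUBMED:20642876 notes, thresholds chosen this way track the population mean closely, which is one reason "optimal" cut-offs differ so much between populations.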
abstract_id: PUBMED:21163090
Verification on the cut-offs of waist circumference for defining central obesity in Chinese elderly and tall adults Objective: To describe the characteristics for distribution of waist circumference (WC) and validate the cut-offs of WC in defining the central obesity among Chinese elderly and tall adults.
Methods: Data from the 2002 National Nutrition and Health Survey was used to analyze the characteristics of WC distribution among subjects aged 45 and above and their height beyond the P85 percentile of Chinese adults. Kappa test was used to estimate the consistency of different cut-offs for WC with body mass index (BMI) ≥ 24 in defining obesity. The odds ratios of diabetes and impaired fasting glucose in different cut-offs on WC were calculated by multiple logistic regression. ROC curves analysis was used to determine the cut-offs.
Results: The means of WC were: 80.8 cm in male elderly, 79.4 cm in female elderly, 84.1 cm in tall male and 77.9 cm in tall female, respectively. The WC at 85 cm for male and 80 cm for female elderly had the best consistency with BMI at 24, and the distance of ROC curve was the shortest. The odds ratios for diabetes significantly increased from WC categories of 85-cm (OR = 2.1, 95%CI: 1.6 - 2.8), 90-cm (OR = 3.0, 95%CI: 2.3 - 4.0), and 95-cm (OR = 4.5, 95%CI: 3.4 - 5.8) in male elderly, and 80-cm (OR = 1.9, 95%CI: 1.4 - 2.6), 85-cm (OR = 3.2, 95%CI: 2.4 - 4.3), and 90-cm (OR = 4.8, 95%CI: 3.7 - 6.1) in female elderly (P < 0.01). The odds ratios for impaired fasting glucose also significantly increased from WC categories of 85-cm (OR = 1.6, 95%CI: 1.2 - 2.2), 90-cm (OR = 2.6, 95%CI: 1.9 - 3.5), and 95-cm (OR = 3.5, 95%CI: 2.6 - 4.6) in male elderly, and 80-cm (OR = 2.5, 95%CI: 1.8 - 3.4), 85-cm (OR = 3.2, 95%CI: 2.4 - 4.4), and 90-cm (OR = 4.2, 95%CI: 3.2 - 5.6) in female elderly (P < 0.01). The odds ratios for diabetes (OR = 3.6, 95%CI: 2.1 - 6.4) and impaired fasting glucose (OR = 5.5, 95%CI: 3.0 - 10.1) significantly increased from WC ≥ 95 cm in tall males. The odds ratios for diabetes significantly increased from WC categories of 85-cm (OR = 5.0, 95%CI: 2.7 - 9.4) and 90-cm (OR = 8.0, 95%CI: 4.6 - 14.1), and odds ratio for impaired fasting glucose of WC ≥ 90 cm was 3.7 (95%CI: 2.0 - 6.9) in tall females.
Conclusion: The recommended cut-off points of WC were 85 cm for elderly males and 80 cm for elderly females. The cut-offs of WC were also effective predictors for impaired fasting glucose among tall adults. The cut-offs of WC in the Guidelines for Overweight and Obesity Prevention and Control for Chinese Adults were verified and should be applied as preventive indicators.
abstract_id: PUBMED:20642876
Optimal cut-off values and population means of waist circumference in different populations. Abdominal obesity is a risk factor for cardiometabolic disease, and has become a major public health problem in the world. Waist circumference is generally used as a simple surrogate marker to define abdominal obesity for population screening. An increasing number of publications solely rely on the method that maximises sensitivity and specificity to define 'optimal' cut-off values. It is well documented that the optimal cut-off values of waist circumference vary across different ethnicities. However, it is not clear if the variation in cut-off values is a true biological phenomenon or an artifact of the method for identifying optimal cut-off points. The objective of the present review was to assess the relationship between optimal cut-offs and population waist circumference levels. Among sixty-one research papers, optimal cut-off values ranged from 65·5 to 101·2 cm for women and 72·5 to 103·0 cm for men. Reported optimal cut-off values were highly correlated with population means (correlation coefficient: 0·91 for men and 0·93 for women). Such a strong association was independent of waist circumference measurement techniques or the health outcomes (dyslipidaemia, hypertension or hyperglycaemia), and existed in some homogeneous populations such as the Chinese and Japanese. Our findings raised some concerns about applying the sensitivity and specificity approach to determine cut-off values. Further research is needed to understand whether the differences among populations in waist circumference were genetically or environmentally determined, and to understand whether using region-specific cut-off points can identify individuals with the same absolute risk levels of metabolic and cardiovascular outcomes among different populations.
abstract_id: PUBMED:37968681
Establishing international optimal cut-offs of waist-to-height ratio for predicting cardiometabolic risk in children and adolescents aged 6-18 years. Background: Waist-to-height ratio (WHtR) has been proposed as a simple and effective screening tool for assessing central obesity and cardiometabolic risk in both adult and pediatric populations. However, evidence suggests that the use of a uniform WHtR cut-off of 0.50 may not be universally optimal for pediatric populations globally. We aimed to determine the optimal cut-offs of WHtR in children and adolescents with increased cardiometabolic risk across different countries worldwide.
Methods: We used ten population-based cross-sectional data on 24,605 children and adolescents aged 6-18 years from Brazil, China, Greece, Iran, Italy, Korea, South Africa, Spain, the UK, and the USA for establishing optimal WHtR cut-offs. We performed an external independent test (9,619 children and adolescents aged 6-18 years who came from other six countries) to validate the optimal WHtR cut-offs based on the predicting performance for at least two or three cardiometabolic risk factors.
Results: Based on receiver operator characteristic curve analyses of various WHtR cut-offs to discriminate those with ≥ 2 cardiometabolic risk factors, the relatively optimal percentile cut-offs of WHtR in the normal weight subsample population in each country did not always coincide with a single fixed percentile, but varied from the 75th to 95th percentiles across the ten countries. However, these relatively optimal percentile values tended to cluster irrespective of sex, metabolic syndrome (MetS) criteria used, and WC measurement position. In general, using ≥ 2 cardiometabolic risk factors as the predictive outcome, the relatively optimal WHtR cut-off was around 0.50 in European and the US youths but was lower, around 0.46, in Asian, African, and South American youths. Secondary analyses that directly tested WHtR values ranging from 0.42 to 0.56 at 0.01 increments largely confirmed the results of the main analyses. In addition, the proposed cut-offs of 0.50 and 0.46 for two specific pediatric populations, respectively, showed a good performance in predicting ≥ 2 or ≥ 3 cardiometabolic risk factors in external independent test populations from six countries (Brazil, China, Germany, Italy, Korea, and the USA).
Conclusions: The proposed international WHtR cut-offs are easy and useful to identify central obesity and cardiometabolic risk in children and adolescents globally, thus allowing international comparison across populations.
abstract_id: PUBMED:18706730
Waist circumference cut-off points for the diagnosis of metabolic syndrome in Iranian adults. Aims: Central obesity, a prominent feature of metabolic syndrome (MetS), is commonly assessed by gender- and ethnicity-specific waist circumference (WC) cut-off values. Since 2006, the recommended WC cut-offs by the International Diabetes Federation (IDF) for Europeans are used in Iran because of limited data availability. The purpose of this study was to determine optimal cut-off points for the diagnosis of MetS in Iran.
Methods: A total of 2752 adults (1046 men) were studied. Subjects with two or more of the following risk factors from the IDF criteria were considered to have multiple risk factors: hyperglycemia (FBG≥100mg/dL or diagnosed diabetes), high blood pressure (SBP≥130mmHg, DBP≥85mmHg, or using antihypertensive drugs), low HDL (<50mg/dL for females and <40mg/dL for males), and high TG (>150mg/dL).
Results: The WC cut-off yielding maximum sensitivity plus specificity for predicting the presence of multiple risk factors was 91.5cm in men and 85.5cm in women. Sensitivity and specificity were 77% (86%) and 58% (50%) in men (women), respectively. MetS prevalence was estimated to be approximately 27% in Tehran.
Conclusions: WC cut-offs recommended by the IDF are not appropriate for use in Iran.
abstract_id: PUBMED:32062911
The cut-off points of body mass index and waist circumference for predicting metabolic risk factors in Chinese adults Objective: To assess the association of BMI and waist circumference (WC) with metabolic risk factors, and confirm the appropriate cut-off points of BMI and WC among Chinese adults. Methods: After excluding participants with missing or extreme measurement values, as well as individuals with self-reported histories of cancer, a total of 501 201 adults in baseline and 19 201 adults in the second re-survey from the China Kadoorie Biobank were included. The associations of BMI and WC with metabolic risk factors were estimated. Receiver operating characteristic (ROC) analyses were conducted to assess the appropriate cut-off values of BMI and WC to predict the risk of hypertension, diabetes, dyslipidemia and clustering of risk factors. Results: The prevalence of hypertension, diabetes, dyslipidemia and clustering of risk factors all presented ascending trends with the increasing levels of BMI or WC. Defined as the points on the ROC curve where Youden's index reached the highest, the appropriate overweight cut-off points of BMI were around 24.0 kg/m(2) both in men and women, and the points of WC were around 85 cm in men and 80 to 85 cm in women. With specificity 90% for identification of risk factors, the appropriate obese cut-off points of BMI were around 28.0 kg/m(2) both in men and women, with the range of 27.0 to 28.9 kg/m(2). Conclusions: The cut-off points for overweight and obesity recommended by the Cooperative Meta-analysis Group of China Obesity Task Force were verified in the large sample survey conducted more recently. The cut-off points of BMI were 24.0 and 28.0 kg/m(2) for overweight and obesity, and the cut-off point of WC was 85 cm in men and 80 to 85 cm in women for central obesity.
abstract_id: PUBMED:24932537
Evidence of inappropriate cardiovascular risk assessment in middle-age women based on recommended cut-points for waist circumference. Background And Aims: Diverse waist circumference thresholds for assessment of metabolic and cardiovascular risk in Caucasians are recommended by different health professional organizations. We aimed to determine optimal sex-specific thresholds for anthropometric measures showing the strongest association with cardiovascular risk factors in a cohort of middle-aged Germans.
Methods And Results: Statistical analyses are based on data from 426 mothers and 267 fathers of participants of the Ulm Birth Cohort Study undergoing a clinical follow-up examination in 2008 (median age 41 years) using logistic regression analyses. The prevalence of many cardiometabolic risk factors was significantly higher in men than in women; hypertension: 45%/17% (p < 0.0001), apolipoprotein ratio B/A1 > 0.72: 35%/9% (<0.0001), hyperglycemia: 11%/14% (p = 0.3), which is in contrast to the predicted cardiovascular risk of 52%/70% and 24%/36% based on thresholds for waist circumference proposed by International Diabetes Federation and American Heart Association, respectively. We determined optimal thresholds for waist circumference between 90 and 95 cm for men and women. Using a threshold of 92 cm the prevalence of abdominal obesity was 59% in men and 24% in women, which was in agreement with the higher prevalence of overweight and obesity in men than in women (Body Mass Index (BMI) > 25: 64%/35%). The prediction of cardiometabolic risk factors by waist circumference and waist-to-height ratio did not outperform the prediction by BMI. In contrast to BMI, waist circumference was correlated with body height independent of sex.
Conclusion: Currently proposed thresholds for waist circumference spuriously overestimate the cardiovascular risk in women, but not in men in a German population.
Answer: The study by PUBMED:26745148 highlights the significant variability in the prevalence of abdominal obesity among Italian schoolchildren when different waist circumference (WC) cut-offs are used. The prevalence ranged from 9.1% to 61.4% in the overall group, 2.4% to 35.7% in Northern Italy children, and 15.1% to 84.2% in Southern Italy children. This discrepancy suggests that the percentile charts used to define abdominal obesity are highly influenced by the population's status at a particular moment. Additionally, variability may arise from the site of WC measurement and the statistical method used to calculate WC cut-offs. The study concludes that there is a need for risk-weighted WC cut-offs measured at a standardized anatomical site and calculated by an appropriate method. These standardized cut-offs would help to identify children at high risk of cardio-metabolic complications who require specific and prompt health interventions.
This finding is consistent with the broader literature, which indicates that the optimal cut-off values for WC vary across different populations and ethnicities, and that the use of a single fixed cut-off may not be appropriate for all groups (PUBMED:20642876, PUBMED:37968681). For instance, the optimal cut-offs for WC in defining central obesity among Chinese elderly and tall adults were found to be 85 cm for elderly males and 80 cm for elderly females (PUBMED:21163090). Similarly, in Iranian adults, the WC cut-off yielding maximum sensitivity and specificity for predicting the presence of multiple risk factors was 91.5 cm in men and 85.5 cm in women (PUBMED:18706730).
Moreover, the study by PUBMED:37968681 suggests that the optimal cut-offs of waist-to-height ratio (WHtR) for predicting cardiometabolic risk in children and adolescents do not always align with a single fixed percentile and vary from the 75th to 95th percentiles across different countries. The study proposes international WHtR cut-offs of around 0.50 for European and US youths and around 0.46 for Asian, African, and South American youths, indicating the need for region-specific cut-offs. |
Instruction: Ceramic inlays: is the inlay thickness an important factor influencing the fracture risk?
Abstracts:
abstract_id: PUBMED:23639702
Ceramic inlays: is the inlay thickness an important factor influencing the fracture risk? Objectives: It is still unclear whether the inlay thickness is an important factor influencing the fracture risk of ceramic inlays. As high tensile stresses increase the fracture risk of ceramic inlays, the objective of the present finite element method (FEM) study was to biomechanically analyze the correlation between inlay thickness (T) and the induced first principal stress.
Methods: Fourteen ceramic inlay models with varying thickness (0.7-2.0 mm) were generated. All inlays were combined with a CAD model of a first mandibular molar (tooth 46), including the PDL and a mandibular segment which was created by means of the CT data of an anatomical specimen. Two materials were defined for the ceramic inlays (e.max(®) or empress(®)) and an occlusal force of 100 N was applied. The first principal stress was measured within each inlay and the peak values were considered and statistically analyzed.
Results: The stress medians ranged from 20.7 to 22.1 MPa in e.max(®) and from 27.6 to 29.2 MPa in empress(®) inlays. A relevant correlation between the first principal stress and thickness (T) could not be detected, either for e.max(®) (Spearman: r=0.028, p=0.001) or for empress(®) (Spearman: r=0.010, p=0.221). In contrast, a very significant difference (p<0.001) between the two inlay materials (M) was verified.
Conclusions: Under the conditions of the present FEM study, the inlay thickness does not seem to be an important factor influencing the fracture risk of ceramic inlays. However, further studies are necessary to confirm this.
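The thickness-stress association in PUBMED:23639702 was tested with Spearman's rank correlation between inlay thickness and the peak first principal stress from the FEM models. The sketch below reproduces only the form of that test; the thickness/stress pairs are invented, not the study's FEM output.

```python
import numpy as np
from scipy.stats import spearmanr

# Illustrative data: 14 inlay thicknesses (mm) and peak first principal
# stresses (MPa), roughly flat as reported for the e.max inlays.
thickness = np.linspace(0.7, 2.0, 14)
rng = np.random.default_rng(3)
stress = 21.0 + rng.normal(0, 0.5, size=14)

rho, p_value = spearmanr(thickness, stress)
print(f"Spearman r = {rho:.3f}, p = {p_value:.3f}")
# An |r| near zero means no monotonic thickness-stress relationship,
# which is how the study argues thickness is not a fracture-risk driver.
```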
abstract_id: PUBMED:23957933
Effects of thickness, processing technique, and cooling rate protocol on the flexural strength of a bilayer ceramic system. Objective: To determine whether the thickness, processing technique, and cooling protocol of veneer ceramic influence the flexural strength of a bilayer ceramic system.
Materials And Methods: Sixty-four bar-shaped specimens (20mm×4mm×1mm) of yttria-stabilized tetragonal zirconia (Vita In-Ceram YZ, Vita) were fabricated (ISO 6872) and randomly divided into 8 groups (n=8) according to the factors "processing technique" (P - PM9 and V - VM9), "thickness" (1mm and 3mm), and "cooling protocol" (S - slow and F - fast). The veneer ceramics were applied only over one side of the bar-shaped specimens. All specimens were mechanically cycled (2×10(6) cycles, 84N, 3.4Hz, in water), with the veneer ceramic under tension. Then, the specimens were tested in 4-point bending (1mm/min, load 100kgf, in water), also with the veneer ceramic under tension, and the maximum load was recorded at first sign of fracture. The flexural strength (σ) was calculated, and the mode of failure was determined by stereomicroscopy (30×). The data (MPa) were analyzed statistically by 3-way ANOVA and Tukey's test (α=0.05).
Results: ANOVA revealed that the factor "thickness" (p=0.0001) was statistically significant, unlike the factors "processing technique" (p=0.6025) and "cooling protocol" (p=0.4199). The predominant mode of failure was cracking.
Significance: The thickness of the veneer ceramic has an influence on the mechanical strength of the bilayer ceramic system, regardless of processing technique and cooling protocol of the veneer ceramic.
abstract_id: PUBMED:34059304
Factors influencing the survival of implant-supported ceramic-ceramic prostheses: A randomized, controlled clinical trial. Objective: The goals of this research are: (1) to determine the clinical survival of ceramic-ceramic 3-unit implant supported fixed dental prostheses (FDPs) compared with control metal-ceramic and; (2) to analyze the effects of design parameters such as connector height, radius of curvature of gingival embrasure, and occlusal veneer thickness.
Materials And Methods: This randomized, controlled clinical trial enrolled 96 participants with 129 3-unit implant-supported FDPs. Participants were randomized to receive different design combinations to include FDP material, thickness of occlusal veneer ceramic, radius of curvature of gingival embrasure and connector height. Participants were recalled at 6 months, 1 year and yearly thereafter for the next 5 years. FDPs were examined for evidence of fracture and radiographs were made to assess viability of implants. Fractographic analyses and Kaplan-Meier survival analysis were used to analyze the data.
Results: 27 FDPs, representing 21%, exhibited chipping fractures of the veneer during the 5-year observation period. There was no statistically significant effect of type of material, veneer thickness, radius of curvature of gingival embrasure and connector height on occurrence of fracture. Fractographic and occlusal analyses reveal that fractures originated from the occlusal surface and that occlusion was the most important factor in determining survival. Stresses calculated at failure demonstrated lower values compared with in vitro data.
Conclusion: Implant-supported ceramic-ceramic prosthesis is a viable alternative to metal-ceramic. Survival analysis for both materials were comparable and design parameters employed in this study did not affect survival as long as zirconia was used as the core material.
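The 5-year survival estimates in PUBMED:34059304 come from Kaplan-Meier analysis of time-to-chipping data. A minimal, self-contained product-limit estimator is sketched below on invented follow-up times; it is not the trial's dataset or analysis code.

```python
import numpy as np

def kaplan_meier(times, events):
    """Product-limit survival estimate.

    times: follow-up in months; events: 1 = veneer chipping observed,
    0 = censored (no fracture at last follow-up).
    """
    times, events = np.asarray(times), np.asarray(events)
    survival, s = [], 1.0
    for t in np.unique(times[events == 1]):
        at_risk = np.sum(times >= t)              # prostheses still under observation
        d = np.sum((times == t) & (events == 1))  # chipping events at time t
        s *= 1.0 - d / at_risk
        survival.append((t, s))
    return survival

# Illustrative follow-up for 10 prostheses with 3 chipping events.
times = [6, 12, 12, 24, 36, 48, 60, 60, 60, 60]
events = [0, 1, 0, 1, 0, 1, 0, 0, 0, 0]
for t, s in kaplan_meier(times, events):
    print(f"t = {t:>2} months: S(t) = {s:.3f}")
```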
abstract_id: PUBMED:33377341
Effect of occlusal thickness design on the fracture resistance of endocrowns restored with lithium disilicate ceramic and zirconia Objective: This study aimed to investigate the effect of occlusal thickness design on fracture resistance of endocrowns restored with lithium disilicate ceramic and zirconia.
Methods: A total of 24 artificial first mandibular molars were randomly divided into four groups with six teeth in each group as follows: group lithium disilicate ceramic-2 mm (lithium disilicate ceramic, with an occlusal thickness of 2 mm and a retainer length of 4 mm); group lithium disilicate ceramic-4 mm (lithium disilicate ceramic, with an occlusal thickness of 4 mm and a retainer length of 2 mm); group zirconia-2 mm (zirconia, with an occlusal thickness of 2 mm and a retainer length of 4 mm); and group zirconia-4 mm (zirconia, with an occlusal thickness of 4 mm and a retainer length of 2 mm). After adhesive cementation (RelyX Ultimate Clicker), all specimens were subjected to thermocycling (10 000 cycles). The specimens were subjected to fracture resistance testing at a 135° angle to the teeth at a crosshead speed of 0.5 mm·min⁻¹ in a universal testing machine. Data were analyzed with ANOVA and Tukey's HSD test by SPSS 15.0. The failure modes were classified.
Results: The fracture resistances of groups lithium disilicate ceramic-2 mm, lithium disilicate ceramic-4 mm, zirconia-2 mm, and zirconia-4 mm were (890.54±83.41), (2 320.87±728.57), (2 258.05±557.66), and (3 847.70±495.99) N respectively. Group zirconia-4 mm had the highest fracture resistance, whereas group lithium disilicate ceramic-2 mm had the lowest.
Conclusions: The fracture resistance of molar endocrown with zirconia is higher than that with lithium disilicate ceramic. Increasing the occlusal thickness can improve the fracture resistance but increase the risk of fracture of abutment.
abstract_id: PUBMED:24932124
Thickness of immediate dentin sealing materials and its effect on the fracture load of a reinforced all-ceramic crown. Objectives: The objective of this study is to evaluate, in vitro, the thickness of immediate dentin sealing (IDS) materials on full crown preparations and its effect on the fracture load of a reinforced all-ceramic crown.
Materials And Methods: Sixty premolars received full crown preparation and were divided into the following groups according to the IDS technique: G1-control; G2-Clearfil SE Bond; and G3-Clearfil SE Bond and Protect Liner F. After the impressions were taken, the preparations were temporized with acrylic resin crowns. IPS empress 2 restorations were fabricated and later cemented on the preparations with Panavia F. 10 specimens from each group were submitted to fracture load testing. The other 10 specimens were sectioned buccolingually before the thicknesses of Panavia F, Clearfil SE Bond and Protect Liner F were measured in 10 different positions using a microscope.
Results: According to analysis of variance and Tukey's test, the fracture load of Group 3 (1300 N) was significantly higher than that of Group 1 (1001 N) (P < 0.01). Group 2 (1189 N) was not significantly different from Groups 1 and 3. The higher thickness of Clearfil SE Bond was obtained in the concave part of the preparation. Protect Liner F presented a more uniform range of values at different positions. The thickness of Panavia F was higher in the occlusal portion of the preparation.
Conclusions: The film thickness formed by the IDS materials is influenced by the position under the crown, suggesting its potential to increase the fracture load of the IPS empress 2 ceramic crowns.
abstract_id: PUBMED:24060349
Time-dependent fracture probability of bilayer, lithium-disilicate-based, glass-ceramic, molar crowns as a function of core/veneer thickness ratio and load orientation. Unlabelled: Recent reports on bilayer ceramic crown prostheses suggest that fractures of the veneering ceramic represent the most common reason for prosthesis failure.
Objective: The aims of this study were to test the hypotheses that: (1) an increase in core ceramic/veneer ceramic thickness ratio for a crown thickness of 1.6mm reduces the time-dependent fracture probability (Pf) of bilayer crowns with a lithium-disilicate-based glass-ceramic core, and (2) oblique loading, within the central fossa, increases Pf for 1.6-mm-thick crowns compared with vertical loading.
Materials And Methods: Time-dependent fracture probabilities were calculated for 1.6-mm-thick, veneered lithium-disilicate-based glass-ceramic molar crowns as a function of core/veneer thickness ratio and load orientation in the central fossa area. Time-dependent fracture probability analyses were computed by CARES/Life software and finite element analysis, using dynamic fatigue strength data for monolithic discs of a lithium-disilicate glass-ceramic core (Empress 2), and ceramic veneer (Empress 2 Veneer Ceramic).
Results: Predicted fracture probabilities (Pf) for centrally loaded 1.6-mm-thick bilayer crowns over periods of 1, 5, and 10 years are 1.2%, 2.7%, and 3.5%, respectively, for a core/veneer thickness ratio of 1.0 (0.8mm/0.8mm), and 2.5%, 5.1%, and 7.0%, respectively, for a core/veneer thickness ratio of 0.33 (0.4mm/1.2mm).
Conclusion: CARES/Life results support the proposed crown design and load orientation hypotheses.
Significance: The application of dynamic fatigue data, finite element stress analysis, and CARES/Life analysis represent an optimal approach to optimize fixed dental prosthesis designs produced from dental ceramics and to predict time-dependent fracture probabilities of ceramic-based fixed dental prostheses that can minimize the risk for clinical failures.
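Time-dependent fracture probability analyses of the CARES/Life type rest on Weibull strength statistics combined with slow-crack-growth (dynamic fatigue) data. That full analysis is not reproduced here; the sketch below only shows the underlying two-parameter Weibull form, Pf = 1 - exp[-(sigma/sigma0)^m], with invented modulus and characteristic-strength values, to make the stress-to-probability mapping concrete.

```python
import numpy as np

def weibull_pf(stress_mpa, sigma0=400.0, m=10.0):
    """Two-parameter Weibull failure probability.

    sigma0 (characteristic strength, MPa) and m (Weibull modulus) are
    illustrative placeholders, not the Empress 2 parameters used by
    CARES/Life, and the time/fatigue dependence is omitted.
    """
    return 1.0 - np.exp(-(np.asarray(stress_mpa, dtype=float) / sigma0) ** m)

# Map a few peak tensile stresses (e.g., from finite element analysis)
# to instantaneous failure probabilities.
for s in (150, 200, 250, 300):
    print(f"peak stress {s} MPa -> Pf = {weibull_pf(s):.4f}")
```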
abstract_id: PUBMED:9588988
Influence of glass-ceramic thickness on Hertzian and bulk fracture mechanisms. Purpose: The objective of this study was to test the hypothesis that bulk fracture of glass-ceramic disks of variable thickness originates at the inner, resin-bonded surface and is dominant over Hertzian fracture at the lower range of thickness values.
Materials And Methods: Eight groups of seven glass-ceramic disks (Dicor, Dentsply), 12 mm in diameter with thicknesses ranging from 0.4 to 2.4 mm, were cast, cerammed (to produce approximately 55 vol% of tetrasilicic fluormica crystals), air abraded, etched, and silane coated according to the manufacturer's instructions. The disks were bonded to an epoxy die substrate (with an elastic modulus comparable to that of dentin) using a light-activated resin cement. The bonded samples were supported on a flat surface and loaded at the top center of each disk until crack initiation occurred. All disks exhibited an initial crack within the bonded surface. Three randomly selected samples for each thickness were loaded beyond the point of crack initiation until Hertzian failure occurred.
Results: Although the crack-initiation force increased with increasing thickness, the failure stress approached a maximum level at a thickness of approximately 1.6 mm. These results suggest that the estimated maximum occlusal load for each patient should be used to select the minimum thickness of ceramic crowns rather than using the arbitrary traditional selection of a 1.5-mm thickness.
Conclusions: The authors conclude that bulk fracture is initiated within the bonded surface of a glass-ceramic specimen (for samples 0.4 to 2.4 mm in thickness) when the glass-ceramic is supported by a substrate with an elastic modulus similar to that of dentin. Furthermore, a Hertzian failure mechanism is unlikely to cause bulk fracture for these conditions.
abstract_id: PUBMED:25037897
The fracture resistance of a CAD/CAM Resin Nano Ceramic (RNC) and a CAD ceramic at different thicknesses. Objectives: This study aimed to investigate the influence of restoration thickness to the fracture resistance of adhesively bonded Lava™ Ultimate CAD/CAM, a Resin Nano Ceramic (RNC), and IPS e.max CAD ceramic.
Methods: Polished Lava™ Ultimate CAD/CAM (Group L), sandblasted Lava™ Ultimate CAD/CAM (Group LS), and sandblasted IPS e.max CAD (Group ES) discs (n=8, Ø=10 mm) with a thickness of respectively 0.5 mm, 1.0 mm, 1.5 mm, 2.0 mm, and 3.0 mm were cemented to corresponding epoxy supporting discs, achieving a final thickness of 3.5 mm. All the 120 specimens were loaded with a universal testing machine at a crosshead speed of 1 mm/min. The load (N) at failure was recorded as fracture resistance. The stress distribution for 0.5 mm restorative discs of each group was analyzed by Finite Element Analysis (FEA). The results of fracture resistances were analyzed by one-way ANOVA and regression.
Results: For the same thickness of testing discs, the fracture resistance of Group L was always significantly lower than the other two groups. The 0.5 mm discs in Group L resulted in the lowest value of 1028 (112) N. There was no significant difference between Group LS and Group ES when the restoration thickness ranged between 1.0 mm and 2.0 mm. There was a linear relation between fracture resistance and restoration thickness in Group L (R=0.621, P<0.001) and in Group ES (R=0.854, P<0.001). FEA showed a compressive permanent damage in all groups.
Significance: The materials tested in this in vitro study could withstand normal bite forces at thicknesses above 0.5 mm. When Lava Ultimate CAD/CAM is used, sandblasting is suggested to achieve better bonding.
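The thickness-resistance relation reported for Groups L and ES in PUBMED:25037897 is a simple linear regression of fracture load on restoration thickness. The sketch below shows that fit on invented load values; scipy's linregress returns the correlation coefficient R alongside the slope and p-value.

```python
import numpy as np
from scipy.stats import linregress

# Illustrative data: restoration thickness (mm) and fracture load (N);
# the loads are invented, not the study's measurements.
thickness = np.array([0.5, 1.0, 1.5, 2.0, 3.0])
fracture_load = np.array([1050.0, 1400.0, 1650.0, 1900.0, 2500.0])

fit = linregress(thickness, fracture_load)
print(f"slope = {fit.slope:.1f} N/mm, R = {fit.rvalue:.3f}, p = {fit.pvalue:.4f}")
```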
abstract_id: PUBMED:26051232
Influence of restoration thickness and dental bonding surface on the fracture resistance of full-coverage occlusal veneers made from lithium disilicate ceramic. Objectives: The purpose of this in-vitro study was to evaluate the influence of ceramic thickness and type of dental bonding surface on the fracture resistance of non-retentive full-coverage adhesively retained occlusal veneers made from lithium disilicate ceramic.
Methods: Seventy-two extracted molars were divided into three test groups (N=24) depending on the location of the occlusal veneer preparation: solely within enamel, within enamel and dentin or within enamel and an occlusal composite resin filling. For each test group, occlusal all-ceramic restorations were fabricated from lithium disilicate ceramic blocks (IPS e.max CAD) in three subgroups with different thicknesses ranging from 0.3 to 0.7mm in the fissures and from 0.6 to 1.0mm at the cusps. The veneers were etched (5% HF), silanated and adhesively luted using a self etching primer and a composite luting resin (Multilink Primer A/B and Multilink Automix). After water storage at 37°C for 3 days and thermal cycling for 7500 cycles at 5-55°C, specimens were subjected to dynamic loading in a chewing simulator with 600,000 loading cycles at 10kg combined with thermal cycling. Unfractured specimens were loaded until fracture using a universal testing machine. Statistical analysis was performed using Kruskal-Wallis and Wilcoxon tests with Bonferroni-Holm correction for multiple testing.
Results: Only specimens in the group with the thickest dimension (0.7mm in fissure, 1.0mm at cusp) survived cyclic loading without any damage. Survival rates in the remaining subgroups ranged from 50 to 100% for surviving with some damage and from 12.5 to 75% for surviving without any damage. Medians of final fracture resistance ranged from 610 to 3390N. In groups with smaller ceramic thickness, luting to dentin or composite provided statistically significant (p≤0.05) higher fracture resistance than luting to enamel only. The thickness of the occlual ceramic veneers had a statistically significant (p≤0.05) influence on fracture resistance.
Significance: The results suggest using a thickness of 0.7-1 mm for non-retentive full-coverage adhesively retained occlusal lithium disilicate ceramic restorations.
abstract_id: PUBMED:19167536
The influence of veneering porcelain thickness of all-ceramic and metal ceramic crowns on failure resistance after cyclic loading. Statement Of Problem: In some clinical situations, the length of either a prepared tooth or an implant abutment is shorter than ideal, and the thickness of a porcelain crown must be increased. Thickness of the coping and the veneering porcelain should be considered to prevent mechanical failure of the crown.
Purpose: The purpose of this study was to investigate the influence of veneering porcelain thickness for all-ceramic and metal ceramic crowns on failure resistance after cyclic loading.
Material And Methods: All-ceramic and metal ceramic crowns (n=20) were fabricated on an implant abutment (RN Solid Abutment) for the study. Two different framework designs with 2 different incisal thicknesses of veneering porcelain (2 mm and 4 mm) were used for each all-ceramic and metal ceramic crown system, resulting in 4 experimental groups (n=10) with identically shaped crowns. The all-ceramic crown consisted of alumina (Procera AllCeram) frameworks and veneering porcelain (Cerabien), while metal ceramic crowns were made of high noble metal (Leo) frameworks and veneering porcelain (IPS Classic). All crowns were cemented on the corresponding abutments using a resin cement (Panavia 21). They were subjected to 1000 cycles of thermal cycling (5 degrees C and 55 degrees C; 5-second dwell time). The crowns were tested with a custom-designed cyclic loading apparatus which delivered simultaneous unidirectional cyclic loading at 135 degrees, vertically, at an rpm of 250, with a load of 49 N. Each specimen was loaded for 1.2 x 10(6) cycles or until it failed. The specimens were thoroughly evaluated for cracks and/or bulk fracture with an optical stereomicroscope (x10) and assigned a score of success, survival, or failure. The specimens without bulk fracture after cyclic loading were loaded along the long axis of the tooth, on the incisal edge, in a universal testing machine at a crosshead speed of 1.5 mm/min, until fracture. Fisher's exact test was used to compare the success and survival rate between the 2 different materials (alpha=.05). Two-way ANOVA was used to analyze data in terms of material, porcelain thickness, and interaction effect. Also, a 2-sample t test was performed to compare between 2 thicknesses within the same material (alpha=.05).
Results: According to the Fisher's exact test, the all-ceramic group showed significantly higher success (P=.003) and survival rates (P=.001) than the metal ceramic group. For the failure load, the 2-way ANOVA showed significant effects for material (P<.001) and porcelain thickness (P=.004), but not a significant interaction effect (P=.198). For the metal ceramic groups, crowns with a 2-mm porcelain thickness showed a significantly greater failure load than crowns with a 4-mm porcelain thickness (P=.004). However, all-ceramic groups did not show a significant difference between the 2 different thicknesses of veneering porcelain (P=.198).
Conclusions: The all-ceramic crowns showed significantly higher success and survival rates after cyclic loading, but lower failure loads than metal ceramic crowns. The thickness of the veneering porcelain affected the failure load of the metal ceramic crowns, but not that of the all-ceramic crowns.
Answer: The inlay thickness does not appear to be an important factor influencing the fracture risk of ceramic inlays. A finite element method (FEM) study analyzed the correlation between inlay thickness and the induced first principal stress in ceramic inlay models with varying thickness (0.7-2.0 mm). The study found no relevant correlation between the first principal stress and thickness for either e.max® or empress® inlays, suggesting that inlay thickness does not significantly impact fracture risk under the conditions of this study (PUBMED:23639702).
However, it is important to note that other studies have found that the thickness of ceramic materials can influence their mechanical strength in different contexts. For example, the thickness of the veneer ceramic was found to have an influence on the mechanical strength of a bilayer ceramic system, regardless of processing technique and cooling protocol (PUBMED:23957933). Additionally, the fracture resistance of molar endocrowns restored with lithium disilicate ceramic and zirconia was influenced by occlusal thickness design, with thicker designs showing higher fracture resistance (PUBMED:33377341).
In the context of implant-supported ceramic-ceramic prostheses, a randomized controlled clinical trial found that design parameters such as connector height, radius of curvature of gingival embrasure, and occlusal veneer thickness did not statistically affect the occurrence of fracture, suggesting that other factors, such as occlusion, may be more critical in determining survival (PUBMED:34059304).
Overall, while the specific study on ceramic inlays (PUBMED:23639702) suggests that inlay thickness is not a significant factor for fracture risk, the influence of thickness on the mechanical properties of ceramic restorations can vary depending on the type of restoration and the context in which it is used. Further research is necessary to fully understand the role of thickness in the fracture risk of ceramic dental restorations. |
Instruction: Prevention of iatrogenic bile duct injuries in difficult laparoscopic cholecystectomies: is the naso-biliary drain the answer?
Abstracts:
abstract_id: PUBMED:31694627
Difficult iatrogenic bile duct injuries following different types of upper abdominal surgery: report of three cases and review of literature. Background: Iatrogenic bile duct injuries (BDIs) are mostly associated with laparoscopic cholecystectomy but may also occur following gastroduodenal surgery or liver resection. Delayed diagnosis of the type of injury with an ongoing biliary leak, as well as management in non-specialized general surgical units, remain the main factors affecting the outcome.
Case Presentation: Herein we present three types of BDIs (Bismuth type I, IV and V) following three different types of upper abdominal surgery, i.e., Billroth II gastric resection, laparoscopic cholecystectomy and left hepatectomy. All of them were complex injuries with complete bile duct transections necessitating surgical treatment. All were also very difficult to treat, mainly because of a delayed diagnosis of the type of injury, an associated biliary leak and, as a consequence, severe inflammatory changes within the liver hilum. The treatment was carried out in our specialist hepatobiliary unit and first focused on infection and inflammation control with adequate biliary drainage. This was followed by a delayed surgical repair with a technique tailored to the type of injury in each case.
Conclusion: We emphasize that staged and individualized treatment strategy is often necessary in case of a delayed diagnosis of complex BDIs presenting with a biliary leak, inflammatory intraabdominal changes and infection. Referral of such patients to expert hepatobiliary centres is crucial for the outcome.
abstract_id: PUBMED:29459806
Iatrogenic lesions of the biliary tract. Iatrogenic bile duct injuries (IBDI) represent a serious surgical complication of laparoscopic cholecystectomy (LC). They often occur where the bile duct merges with the cystic duct, and they have been classified by Strasberg and Bismuth according to the degree and level of injury. About a third of IBDI are recognized during LC by detection of bile leakage. Immediate repair is not recommended, especially when the lesion is near the confluence or inflammation is associated. A drain should be established to control bile leakage and prevent biliary peritonitis before transferring the patient to a facility specializing in complex hepatobiliary surgery. In patients in whom the injury is not recognized intraoperatively, IBDI manifest late postoperatively with fever, abdominal pain, peritonitis or obstructive jaundice. If there is a bile leak, percutaneous cholangiography should be performed to define the biliary anatomy, and the leak should be controlled with a percutaneous biliary stent. The repair is performed six to eight weeks after patient stabilization. If there is biliary obstruction, cholangiography and biliary drainage are indicated to control sepsis before repair. The ultimate aim is to restore the flow of bile into the gastrointestinal tract to prevent the formation of calculi, stenosis, cholangitis and biliary cirrhosis. End-to-side Roux-en-Y hepaticojejunostomy without long-term biliary stents is the best choice for the repair of most common bile duct injuries.
abstract_id: PUBMED:17897905
Classification of iatrogenic bile duct injury. Background: Iatrogenic bile duct injury continues to be an important clinical problem, resulting in serious morbidity, and occasional mortality, to patients. The ease of management, operative risk, and outcome of bile duct injuries vary considerably, and are highly dependent on the type of injury and its location. This article reviews the various classification systems of bile duct injury.
Data Sources: A Medline, PubMed database search was performed to identify relevant articles using the keywords "bile duct injury", "cholecystectomy", and "classification". Additional papers were identified by a manual search of the references from the key articles.
Results: Traditionally, biliary injuries have been classified using the Bismuth's classification. This classification, which originated from the era of open surgery, is intended to help the surgeons to choose the appropriate technique for the repair, and it has a good correlation with the final outcome after surgical repair. However, the Bismuth's classification does not encompass the whole spectrum of injuries that are possible. Bile duct injury during laparoscopic cholecystectomy tends to be more severe than those with open cholecystectomy. Strasberg's classification made Bismuth's classification much more comprehensive by including various other types of extrahepatic bile duct injuries. Our group, Bergman et al, Neuhaus et al, Csendes et al, and Stewart et al have also proposed other classification systems to complement the Bismuth's classification.
Conclusions: None of the classification systems is universally accepted, as each has its own limitations. Hopefully, a universally accepted comprehensive classification system will be published in the near future.
abstract_id: PUBMED:12074081
Iatrogenic bile duct injury: the scourge of laparoscopic cholecystectomy. Background: Laparoscopic cholecystectomy (LC) has become the first-line surgical treatment of calculous gall-bladder disease and the benefits over open cholecystectomy are well known. In the early years of LC, the higher rate of bile duct injuries compared with open cholecystectomy was believed to be due to the 'learning curve' and would dissipate with increased experience. The purpose of the present paper was to review a tertiary referral unit's experience of bile duct injuries induced by LC.
Methods: A retrospective analysis was performed on all patients referred for management of an iatrogenic bile duct injury from 1981 to 2000. For injuries sustained at LC, details of time between LC and recognition of the injury, time from injury to definitive repair, type of injury, use of intraoperative cholangiography (IOC), definitive repair and postoperative outcome were recorded. The type of injury sustained at open cholecystectomy was similarly classified to allow the severity of injury to be compared.
Results: There were 131 patients referred for management of an iatrogenic bile duct injury that occurred at open cholecystectomy (n = 62), liver resection (n = 5) and at LC (n = 64). Only 39% of bile duct injuries were recognized at the time of LC. Following conversion to open operation, half the subsequent procedures were considered inappropriate. When the injury was not recognized during LC, 70% of patients developed bile leak/peritonitis, almost half of whom were referred, whereas the rest underwent a variety of operative procedures by the referring surgeon. The remainder developed jaundice or abnormal liver function tests and cholangitis. An IOC was performed in 43% of cases, but failed to identify an injury in two-thirds of patients. The bile duct injuries that occurred at LC were of greater severity than with open cholecystectomy. Following definitive repair, there was one death (1.6%). Ninety-two per cent of patients had an uncomplicated recovery and there was one late stricture requiring surgical revision.
Conclusions: The early prediction that the rate of injury during LC would decline substantially with increased experience has not been fulfilled. Bile duct injury that occurs at LC is of greater severity than with open cholecystectomy. Bile duct injury is recognized during LC in less than half the cases. Evidence is accruing that the use of cholangiography reduces the risk and severity of injury and, when correctly interpreted, increases the chance of recognition of bile duct injury during the procedure. Prevention is the key but, should an injury occur, referral to a specialist in biliary reconstructive surgery is indicated.
abstract_id: PUBMED:29883919
Treatment of late identified iatrogenic injuries of the right and left hepatic duct after laparoscopic cholecystectomy without transhepatic stent and Witzel drainage: Case report. Introduction: Most case reports about high-type iatrogenic hepatic duct injuries describe how to perform a Roux-en-Y hepaticojejunostomy below the hepatic confluence immediately after the injury is recognised during the procedure in which it occurred. Herein we present a case in which we performed a Roux-en-Y hepaticojejunostomy without a transhepatic biliary stent and without Witzel drainage one month after the iatrogenic injury.
Presentation Of Case: A 21-year-old woman suffered an iatrogenic high transection of both hepatic ducts during laparoscopic cholecystectomy in a local hospital. The iatrogenic injury was not immediately recognized. Ten days later, due to the patient's complaints and a large amount of bile in the abdominal drain sac, a second surgery was performed to evacuate the biloma. Symptoms reappeared, together with bile in the abdominal sac, and the patient was then sent to our Clinical Center. After additional diagnostics, a high-type (Class E) iatrogenic hepatic duct injury was diagnosed. A revision surgical procedure was performed. During the exploration we found a high transection of the right and left hepatic ducts, and we decided to perform a Roux-en-Y hepaticojejunostomy. We created part of the anastomosis between the jejunum and the liver capsule with polydioxanone suture (PDS) 4-0 because of the poor quality of the remaining parts of the hepatic ducts. We made two separate hepaticojejunal anastomoses (left and right) that we partly connected to the liver capsule, where there was a defect of the hepatic ducts, without Witzel enterostomy and without a transhepatic biliary stent. There were no significant postoperative complications. Magnetic resonance cholangiopancreatography (MRCP) performed one year after the surgical procedure showed intrahepatic bile ducts of proper width, with no signs of stenosis of the anastomoses.
Discussion: In most cases, treatment of iatrogenic BDI is based on primary repair of the duct, ductal repair with a stent, or creation of a duct-enteric anastomosis, often combined with Witzel drainage (Witzel enterostomy). Reconstructive hepaticojejunostomy is recommended for major BDIs during cholecystectomy. Biliary reconstruction with Roux-en-Y hepaticojejunostomy is usually performed with a transhepatic biliary stent or Witzel enterostomy; what is interesting about this case is that neither type of drainage was used. We managed to avoid such drainage and showed that, without it, duplex hepaticojejunal anastomoses can be performed successfully and remain free of complications.
Conclusion: Our case indicates that this approach can be successfully used for surgical repair of iatrogenic lesion of both hepatic ducts.
abstract_id: PUBMED:2083927
The therapy of iatrogenic lesions of the bile duct. Forty-three patients were operated on for iatrogenic lesions of the bile duct. Only one patient had a biliary lesion which occurred in the course of distal gastric resection. All other lesions were observed during cholecystectomy. Injury of the bile duct was detected intraoperatively in sixteen cases. In 10 patients, lesions were observed in the postoperative period and in 17 patients, the post-operative diagnosis was made on the basis of symptoms of stenosis of the bile duct. Satisfactory results can be obtained by suturing the common bile duct and splinting with a T-tube where the lesion is partial and detected in the course of surgery. In the case of patients with strictures, an anastomosis (choledochojejunostomy Roux-en-Y loop) should be performed. Strictures involving hepatic bifurcation and the right hepatic duct have a higher incidence of restenosis, and transhepatic splinting of the anastomosis can therefore produce better results. Long-term transhepatic drainage has the advantage that replacement of the drain is relatively straightforward and complete dislocation impossible. Four of our patients died postoperatively, three of multiple septic organ failure due to preoperative biliary peritonitis or cholangitis, and one of a pulmonary embolism. Satisfactory long-term results after correction of an iatrogenic lesion of the bile duct can be obtained if the corrective procedure is undertaken immediately, prior to the onset of biliary cirrhosis.
abstract_id: PUBMED:32008215
Retrograde tracing along "cystic duct" method to prevent biliary misidentification injury in laparoscopic cholecystectomy. Bile duct injury remains the most serious complication of laparoscopic cholecystectomy (LC); its main cause is misidentification of the cystic duct (CD). The aim of this study was to evaluate the effectiveness and safety of the retrograde tracing along "cystic duct" (RTACD) method for the prevention of biliary misidentification injury in LC. The concept of the RTACD method was first described and then illustrated by simulation dissection with extrahepatic biliary structure charts. A total of 840 patients undergoing LC were selected. After the "CD" was separated during the operation, its authenticity was verified by the RTACD method according to its course and origin. The "CD" was clipped/divided only when it was identified as the true CD. Among the 840 patients, the initially separated "CD" was identified as the actual CD in 831 cases, the common hepatic (bile) duct in six cases, an accessory right posterior sectoral duct in two cases, and the right hepatic duct in one case. LC was successfully completed in 837 patients and converted to open cholecystectomy in three cases. The average operation time was 64.23 min (range 25-225 min), and the average blood loss was 8.07 ml (range 2-200 ml). No biliary misidentification injury occurred. All patients recovered smoothly. No jaundice or abdominal pain was noted during 1-19 months of follow-up. The RTACD method is a safe and effective new technique for preventing biliary misidentification injury.
abstract_id: PUBMED:34778361
A Novel End-to-End Biliary-to-Biliary Anastomosis Technique for Iatrogenic Bile Duct Injury of Strasberg-Bismuth E1-4 Treatment: A Retrospective Study and in vivo Assessment. Background: An iatrogenic bile duct injury (IBDI) is a severe complication that has a great impact on patients' physical and mental quality of life, especially for patients with postoperative benign biliary stricture. Effective intraoperative measures for end-to-end biliary-to-biliary anastomosis are essential to prevent postoperative bile duct stricture, but remain a challenge even for the most skilled biliary tract surgeon. Objective: Postoperative benign biliary stricture is an extremely intractable complication that occurs following IBDI. This study aimed to introduce a novel end-to-end biliary-to-biliary anastomosis technique named fish-mouth-shaped (FMS) end-to-end biliary-to-biliary reconstruction and to determine its safety and effectiveness for preventing postoperative benign biliary stricture in both rats and humans. Methods: In this study, 18 patients with biliary injury who underwent an FMS reconstruction procedure were retrospectively analyzed. Their general information, disease at first hospitalization, operation method, and classification of bile duct injury (BDI) were collected. Postoperative complications were evaluated perioperatively, and long-term complications were followed up for at least 5 years. An IBDI animal model using 18 male rats was developed for animal-based evaluations. A bile duct diathermy injury model was used to mimic BDI. The FMS group underwent an FMS reconstruction procedure while the control group underwent common end-to-end biliary-to-biliary anastomosis; a sham operation group was also established. Blood samples, liver, spleen, and common bile duct tissues were harvested for further assessments. Results: In the retrospective study, there was no postoperative mortality, and no patient developed cholangitis during the 5-year postoperative follow-up. In the animal study, compared with the control group, the FMS reconstruction procedure reduced the occurrence of benign biliary stenosis, liver function damage, and jaundice. Blood tests as well as morphological and pathological observations revealed that rats in the FMS reconstruction group had a better recovery than those in the control group. Conclusions: The FMS reconstruction procedure is a safe and effective BDI treatment method.
abstract_id: PUBMED:30881079
Iatrogenic bile duct injury: impact and management challenges. Iatrogenic bile duct injuries (BDIs) after laparoscopic cholecystectomy, being one of the most common performed surgical procedures, remain a substantial problem in gastrointestinal surgery with a significant impact on patient's quality of life. The primary aim of this review was to discuss the classification of BDIs, the proposed methods to prevent biliary lesions, the associated risk factors, and the management challenges depending on the timing of recognition of the injury, its extension, the patient's clinical condition, and the availability of experienced hepatobiliary surgeons. Early recognition of BDI is of paramount importance and limiting the diagnosis delay is crucial for an optimal postoperative outcome. The therapeutic management depends on the type and gravity of the biliary lesion, and includes endoscopic, radiologic, and surgical approaches.
abstract_id: PUBMED:17548261
Iatrogenic bile duct injuries from biliary tract surgery. Background: Cholecystectomy is the most commonly performed procedure in general surgery. Bile duct injury, although rare, is still one of its most common serious complications. These injuries can present in variable ways after the primary surgery. Timely detection and appropriate management decrease the morbidity and mortality of the operation.
Methods: Five cases of iatrogenic bile duct injury (IBDI) were managed at the Department of Surgery, First Affiliated Hospital, Xi'an Jiaotong University. All the cases who underwent both open and laparoscopic cholecystectomy had persistent injury to the biliary tract and were treated accordingly.
Results: Recovery of the patients was uneventful. All patients were followed-up at the surgical outpatient department for six months to three years. So far the patients have shown good recovery.
Conclusions: In cases of IBDI it is necessary to perform the operation under the supervision of an experienced surgeon who is specialized in the repair of bile duct injuries, and it is also necessary to detect and treat the injury as soon as possible to obtain a satisfactory outcome.
Answer: The provided abstracts do not specifically mention the use of a naso-biliary drain as a preventive measure for iatrogenic bile duct injuries (BDIs) in difficult laparoscopic cholecystectomies. Instead, they discuss various aspects of BDIs, including their classification, management, and prevention strategies.
For instance, the abstract from PUBMED:32008215 introduces the retrograde tracing along "cystic duct" (RTACD) method as a technique to prevent biliary misidentification injury during laparoscopic cholecystectomy (LC). This method involves verifying the authenticity of the "cystic duct" by tracing its course and origin before clipping or dividing it, which helps to avoid misidentification and subsequent injury.
Another abstract from PUBMED:34778361 describes a novel end-to-end biliary-to-biliary anastomosis technique named fish-mouth-shaped (FMS) end-to-end biliary-to-biliary reconstruction, which is used to prevent postoperative benign biliary stricture following IBDI. This technique was evaluated in both human patients and animal models, showing promising results in reducing the occurrence of benign biliary stenosis and improving recovery.
The abstract from PUBMED:30881079 emphasizes the importance of early recognition of BDIs and discusses the management challenges depending on various factors such as the timing of recognition, the extent of the injury, the patient's clinical condition, and the availability of experienced hepatobiliary surgeons. It suggests that the therapeutic management of BDIs may include endoscopic, radiologic, and surgical approaches.
In summary, while the abstracts provide valuable information on the classification, management, and some preventive measures for BDIs, they do not specifically address the use of naso-biliary drains as a preventive strategy in difficult laparoscopic cholecystectomies. Therefore, based on the provided abstracts, it cannot be concluded that the naso-biliary drain is the answer to preventing iatrogenic bile duct injuries in such cases.
Instruction: Ewing's sarcoma and primitive neuroectodermal tumor in adults: are they different from Ewing's sarcoma and primitive neuroectodermal tumor in children?
Abstracts:
abstract_id: PUBMED:31655501
Twenty children with non-Wilms renal tumors from a reference center in Central Anatolia, Turkey. Background/aim: Non-Wilms renal tumors (NWRTs) are rarely encountered in children. The aim of this study is to determine the treatment strategies, prognosis, outcomes, and survival of children with NWRTs at Erciyes University in Kayseri, Turkey.
Materials And Methods: Medical records of all patients (n = 20) treated for NWRTs over a 23-year period (1995–2018) were reviewed retrospectively.
Results: There was male predominance (female/male: 7/13); the median age at diagnosis was 3.2 years old (0.1–13.5 years old). The major histological groups included mesoblastic nephroma (MBN), (n: 5, 25%), malignant rhabdoid tumor (MRT), (n: 5, 25%), renal cell carcinoma, (n: 3, 15%), inflammatory myofibroblastic tumor (n: 2, 10%), multilocular cystic renal tumors (n: 2, 10%), metanephric adenoma (n: 1, 5%), renal neuroblastoma (n: 1, 5%), and bilateral renal Ewing sarcoma/primitive neuroectodermal tumor (ES/PNET) (n: 1, 5%). All of the patients with NWRTs had radical nephrectomy except the child with bilateral renal ES/PNET. Six children died because of progressive disease; the mortality rate was 30% (n: 6).
Conclusion: We have made the first report of bilateral renal involvement of ES/PNET in the English medical literature. Physicians dealing with pediatric renal masses should be alert to the high mortality rate in children with MRT, MBN, and ES/PNET and they should design substantial management plans for NWRTs.
abstract_id: PUBMED:27662616
Prognostic factors of overall survival in children and adolescents enrolled in dose-finding trials in Europe: An Innovative Therapies for Children with Cancer study. Objectives: Dose-finding trials are fundamental to develop novel drugs for children and adolescents with advanced cancer. It is crucial to maximise individual benefit, whilst ensuring adequate assessment of key study end-points. We assessed prognostic factors of survival in paediatric phase I trials, including two predictive scores validated in adult oncology: the Royal Marsden Hospital (RMH) and the MD Anderson Cancer Center (MDACC) scores.
Methods: Data of patients with solid tumours aged <18 years at enrolment in their first dose-finding trial between 2000 and 2014 at eight centres of the Innovative Therapies for Children with Cancer European consortium were collected. Survival distributions were compared using log-rank test and Cox regression analyses.
Results: Overall, 248 patients were evaluated: median age, 11.2 years (range 1.0-17.9); 46% had central nervous system (CNS) tumours and 54% extra-CNS tumours. Complete responses were observed in 2.1%, partial responses in 7.2% and stable disease in 25.9%. Median overall survival (OS) was 6.3 months (95% confidence interval, 5.2-7.4). Lansky/Karnofsky ≤80%, no school/work attendance, elevated creatinine and RMH score ≥1 correlated with worse OS in the multivariate analysis. The RMH and MDACC scores correlated with OS in adolescents (12-17 years), p = 0.002, but not in children (2-11 years).
Conclusions: Performance status of 90-100% and school/work attendance at enrolment are strong indicators of longer OS in paediatric phase I trials. Adult predictive scores correlate with survival in adolescents. These findings provide a useful orientation about potential prognosis and could lead in the future to more paediatric-adapted eligibility criteria in early-phase trials.
abstract_id: PUBMED:36684168
A case of giant Ewing's sarcoma (EES)/primitive neuroectodermal tumor (PNET) of the cervicothoracic junction in children with incomplete paralysis of both lower limbs: Case report and literature review. Background: Extraosseous Ewing's sarcoma/primitive neuroectodermal tumor (EES/PNET) is a rare, malignant, small round blue cell tumor, which usually involves the larynx, kidneys, and esophagus. The most common metastatic sites are the lung and bone. The incidence of epidural EES/PNET was 0.9%, and a detailed search of the PubMed literature found only 7 case reports of epidural EES/PNET at the cervicothoracic junction in children.
Case Description: We report a case of epidural EES/PNET at the cervicothoracic junction in a child whose first symptom was chest and back pain; the pain worsened after half a year, and she developed incomplete paralysis of both lower extremities and urinary incontinence. She underwent emergency surgery, chemotherapy and radiotherapy, and died of lung metastases 8 months after surgery.
Conclusion: Primary epidural tumors are mostly benign, such as spinal meningiomas and neuromas. Contrary to what has previously been thought, we report a case of malignant epidural EES/PNET at the cervicothoracic junction without bone destruction. The rarity of epidural EES/PNET at the cervicothoracic junction in children has led to a lack of data, particularly on prognostic factors and recurrence patterns. Due to the difficulty of early diagnosis and the high mortality, spine surgeons must explore and increase their awareness of this disease.
abstract_id: PUBMED:35307693
PRINCIPLES OF DIAGNOSIS AND TREATMENT OF ASKIN'S TUMOR IN CHILDREN: CASE REPORT. The aim of the study was to show the principles of diagnosis and treatment of Askin's tumor in children. Diagnostic procedures include physical examination, chest X-ray, CT scan and PET CT, morphological, histological and immunohistochemical examinations, and cytogenetic study. Primitive neuroectodermal tumors belong to the group of poorly differentiated, highly aggressive neoplasms originating from cells of the parasympathetic autonomic nervous system. Patient F., 9 years old, was first seen by a pediatric oncologist in 2014 with a progressively enlarging mass in the right side of the chest. Diagnosis: PNET (primitive neuroectodermal tumor) of the soft tissues of the chest on the right side in the 4th intercostal space along the midclavicular line, T2aN0M0, stage 2a, standard risk group. We present the results of the diagnostic process and treatment in our patient. Patients who have received combination therapy, including chemotherapy, surgical removal of the tumor and radiation therapy, have better prognostic results. However, relapses often occur that require more aggressive treatment with high-dose chemotherapy, monoclonal antibodies, and bone marrow transplantation.
abstract_id: PUBMED:27984122
Diagnostic utility of cyclin D1 in the diagnosis of small round blue cell tumors in children and adolescents. Small round blue cell tumors (SRBCTs) of children and adolescents are often diagnostically challenging lesions. With the increasing diagnostic reliance on small biopsies, there is a need for specific immunomarkers that can help in the differential diagnosis among the different tumor histotypes, to assure the patient a correct diagnosis for proper treatment. Based on our recent studies showing cyclin D1 overexpression in both Ewing sarcoma/primitive peripheral neuroectodermal tumor (EWS/pPNET) and peripheral neuroblastic tumors (neuroblastoma and ganglioneuroblastoma), we immunohistochemically assessed cyclin D1 immunoreactivity in 128 cases of SRBCTs in children and adolescents to establish its potential utility in the differential diagnosis. All cases of EWS/pPNET and the undifferentiated/poorly differentiated neuroblastomatous component of all peripheral neuroblastic tumors exhibited strong and diffuse nuclear staining (>50% of neoplastic cells) for cyclin D1. In contrast, this marker was absent from rhabdomyosarcoma (regardless of subtype) and lymphoblastic lymphoma (either B- or T-cell precursors), whereas it was only focally detected (<5% of neoplastic cells) in some cases of Wilms tumor (blastemal component) and desmoplastic small round cell tumor. Our findings suggest that cyclin D1 can be exploited as a diagnostic adjunct to conventional markers in confirming the diagnosis of EWS/pPNET or neuroblastoma/ganglioneuroblastoma. Its use in routine practice may also be helpful for those cases of SRBCT with undifferentiated morphology that are difficult to diagnose after application of the conventional markers.
abstract_id: PUBMED:29207714
Paediatric Peripheral Primitive Neuroectodermal Tumour - A Clinico-Pathological Study from Southern India. Introduction: Primitive Neuroectodermal Tumour (PNET)/Ewing Sarcomas (ES) are aggressive childhood malignancies with neuroectodermal differentiation.
Aim: To study the clinical presentation, morphology, immunohistochemistry (IHC), management and outcome of all the cases of paediatric pPNET/ES reported in our tertiary care centre over a period of six years.
Materials And Methods: This was a retrospective study conducted at Sri Ramachandra Medical College and Research Institute, Chennai, India. All biopsy proven cases of peripheral PNET/ES, in patients less than 18 years of age for a period of six years were included in this study. The corresponding clinical details regarding initial presentation, treatment and follow up were retrieved from the case files and analysed. Survival rate was calculated and Kaplan-Meier survival curve was plotted.
Results: We describe eleven cases of paediatric peripheral PNET/ES. The mean age at presentation was 94.08 (±58.27) months with a male/female ratio of 1.2:1. About 27.3% of cases, all male with a mean age of 140 months at presentation, had distant metastasis at initial diagnosis. Biopsy showed small round blue cell morphology on light microscopy. IHC revealed strong membranous staining for CD99 in all cases. All children were treated with neo-adjuvant chemotherapy and then surgery, followed by radiotherapy if indicated. The cases were followed up for a mean duration of 20.82 months (ranging from one to 66 months). Nine children are doing well on follow-up (81.8% survival rate). Two cases with metastasis at initial presentation died. Patients with metastatic disease exhibited a mean duration of survival of 9.66 (±7.24) months and those with localized disease exhibited a mean duration of survival of 25 (±22.88) months.
Conclusion: Metastasis at diagnosis is the single most important factor affecting prognosis. This was reflected in the present study, where cases with metastasis exhibited a short mean duration of survival compared to those with localized disease. It is likely that many cases of PNET/ES were not accurately identified in the past, as IHC plays a vital role in the diagnosis of these small round blue cell tumours. IHC in adjunct with molecular studies has improved diagnostic accuracy. Multidisciplinary management and good supportive care when the lesion is localized have led to improved survival.
abstract_id: PUBMED:11340583
Quantification of angiogenesis stimulators in children with solid malignancies. Humoral angiogenesis stimulators including vascular endothelial growth factor (VEGF) and basic fibroblast growth factor (bFGF) have been implicated in the pathogenesis of solid malignancies. However, it has remained unclear whether both stimulators contribute to the development and progression of solid malignancies of children. The aim of the present study was to determine whether VEGF and bFGF are elevated in body fluids of children with solid malignancies and, if so, whether these elevated levels correlate with clinical parameters. Using enzyme-linked immunosorbent assays (ELISAs), we quantified VEGF and bFGF in serum (n = 107) and urine (n = 57) of healthy children and of children with solid malignancies (serum: n(VEGF) = 69, n(bFGF) = 60; urine: n(VEGF) or n(bFGF) = 13). Finally, we compared patients' pre-therapeutic and post-therapeutic levels. Serum VEGF was elevated in children with several solid tumors (Ewing's sarcoma, primitive neuroectodermal tumours, malignant lymphoma, Langerhans cell histiocytosis and medulloblastoma). In contrast, serum bFGF, urinary bFGF or urinary VEGF were not significantly elevated. Upon successful therapy, elevated pre-therapeutic serum VEGF levels declined to levels present in healthy children. VEGF could contribute to the progression of pediatric solid malignancies, and serum VEGF could be used to monitor therapeutic response. Furthermore, the determination of angiogenesis stimulators could identify patients eligible for anti-angiogenic therapy.
abstract_id: PUBMED:7850367
Chest wall tumors in infants and children. Chest wall tumors are infrequent in infants and children, but a high proportion of these tumors are malignant. They present most frequently as a palpable mass, and less frequently with pain or respiratory distress. Radiographic evaluation should include chest radiographs followed by computed tomographic (CT) scan. In most cases an initial incisional biopsy is performed because of the significant risk of malignancy. The most frequent tumors are the malignant small round cell tumors (Ewing's sarcoma/primitive neuroectodermal tumor [PNET] family) followed by rhabdomyosarcoma, osteosarcoma, chondrosarcoma, and a spectrum of other sarcomas. Initial treatment with chemotherapy, particularly for the malignant small round cell tumors and osteosarcoma, may facilitate resection by decreasing the size of the tumor as well as its vascularity and friability. Cure requires successful local control and adjuvant chemotherapy and is particularly difficult to achieve in children presenting with metastases.
abstract_id: PUBMED:35354472
Peripheral primitive neuroectodermal tumor: a case report. Background: Primitive neuroectodermal tumors are extremely rare and highly aggressive malignant small round cell tumors that arise from the primitive nerve cells of the nervous system or outside it. These tumors share similar histology, immunohistologic characteristics, and cytogenetics with Ewing's sarcoma. Peripheral primitive neuroectodermal tumors of the chest wall are rare malignant tumors seen in children and young adults.
Case Presentation: We report a rare case of peripheral primitive neuroectodermal tumor in a 4-year-old Albanian girl with a mediastinal tumor and an unusual clinical presentation. She was initially treated for acute polyradiculoneuritis (Guillain-Barré syndrome) owing to pain, weakness in the lower limbs, and walking difficulty, as well as severe irritability. During the second week of treatment, the child began to experience dry cough, chest discomfort, and worsening dyspnea. Chest radiography, chest computed tomography, and contrast-enhanced computed tomography demonstrated a large mass in the right hemithorax that was derived from the posterior mediastinum with expansive growth in all directions and that shifted the mediastinal structures in the anterolateral left direction. Consequently, histopathology and immunohistochemical examination of the markers S-100, CD99, and Ki-67 showed that the tumor cells stained positively for S-100 and CD99. The proliferative index measured by Ki-67 was approximately 20%, which suggested primitive neuroectodermal tumor.
Conclusions: Even though other diseases, including leukemia, lymphoma, and neuroblastoma, may be accompanied by musculoskeletal manifestations in children, other solid tumors, such as peripheral primitive neuroectodermal tumors, should be considered in the differential diagnosis in any child presenting with musculoskeletal symptoms.
abstract_id: PUBMED:10755411
Ethnic group and survival from childhood cancer: report from the UK Children's Cancer Study Group. Survival following cancer was analysed in relation to ethnic group among children diagnosed in Britain during 1981-1996 and treated at paediatric oncology centres by members of the UK Children's Cancer Study Group. Survival was analysed for 11 diagnostic groups: acute lymphoblastic leukaemia (ALL), acute non-lymphocytic leukaemia, Hodgkin's disease, non-Hodgkin's lymphoma, astrocytoma, primitive neuroectodermal tumour, neuroblastoma, Wilms' tumour, osteosarcoma, Ewing's sarcoma and rhabdomyosarcoma. There were no significant differences in survival between White and non-White children over the study period as a whole. Among children with ALL, however, the relative risk of death allowing for period of diagnosis, age and white blood count was 1.25 for those of South Asian ethnic origin compared with Whites (P = 0.057).
Answer: Ewing's sarcoma (ES) and primitive neuroectodermal tumors (PNET) in adults may present differently compared to their occurrence in children, although they share similar histological and immunohistochemical characteristics. In children, ES/PNET are aggressive childhood malignancies with neuroectodermal differentiation (PUBMED:29207714). They are part of a group of tumors known as small round blue cell tumors (SRBCTs), which can be challenging to diagnose (PUBMED:27984122). These tumors are often treated with a combination of chemotherapy, surgery, and radiation therapy, and the presence of metastasis at diagnosis is a significant factor affecting prognosis (PUBMED:29207714).
In adults, ES/PNET may have a different clinical course and response to treatment. However, the abstracts provided do not offer specific information on the differences between ES/PNET in adults versus children. The abstracts focus on pediatric cases, discussing the clinical presentation, treatment, and outcomes in children (PUBMED:31655501, PUBMED:36684168, PUBMED:35307693, PUBMED:27984122, PUBMED:29207714, PUBMED:11340583, PUBMED:7850367, PUBMED:35354472, PUBMED:10755411).
For instance, one abstract mentions that ES/PNET are part of the malignant small round cell tumors found in children and that achieving a cure requires successful local control and adjuvant chemotherapy, which is particularly challenging in children presenting with metastases (PUBMED:7850367). Another abstract discusses the importance of angiogenesis stimulators like VEGF in the progression of pediatric solid malignancies, including ES/PNET, and suggests that serum VEGF could be used to monitor therapeutic response (PUBMED:11340583).
In summary, while the abstracts provided do not directly compare ES/PNET in adults to those in children, they do highlight the aggressive nature of these tumors in the pediatric population and the importance of early diagnosis, multidisciplinary management, and the impact of metastasis on prognosis. Differences in the clinical course and treatment response between adults and children with ES/PNET would require further investigation and comparison with adult-specific studies. |
Instruction: Do allegations of emotional maltreatment predict developmental outcomes beyond that of other forms of maltreatment?
Abstracts:
abstract_id: PUBMED:36896409
Differences in developmental problems between victims of different types of child maltreatment. This study examined differences in developmental problems between children who were victims of two child maltreatment dimensions: abuse versus neglect, and physical versus emotional maltreatment. Family demographics and developmental problems were examined in a clinical sample of 146 Dutch children from families involved in a Multisystemic Therapy - Child Abuse and Neglect treatment trajectory. No differences were found in child behavior problems within the dimension abuse versus neglect. However, more externalizing behavior problems (e.g., aggressive problems) were found in children who experienced physical maltreatment compared to children who experienced emotional maltreatment. Further, more behavior problems (e.g., social problems, attention problems, and trauma symptoms) were found in victims of multitype maltreatment compared to victims of any single-type maltreatment. The results of this study increase the understanding of the impact of child maltreatment poly-victimization, and highlight the value of classifying child maltreatment into physical and emotional maltreatment.
abstract_id: PUBMED:29245140
School readiness of maltreated children: Associations of timing, type, and chronicity of maltreatment. Children who have been maltreated during early childhood may experience a difficult transition into fulltime schooling, due to maladaptive development of the skills and abilities that are important for positive school adaptation. An understanding of how different dimensions of maltreatment relate to children's school readiness is important for informing appropriate supports for maltreated children. In this study, the Australian Early Development Census scores of 19,203 children were linked to information on child maltreatment allegations (substantiated and unsubstantiated), including the type of alleged maltreatment, the timing of the allegation (infancy-toddlerhood or preschool), and the total number of allegations (chronicity). Children with a maltreatment allegation had increased odds of poor school readiness in cognitive and non-cognitive domains. Substantiated maltreatment was associated with poor social and emotional development in children, regardless of maltreatment type, timing, or chronicity. For children with unsubstantiated maltreatment allegations, developmental outcomes according to the type of alleged maltreatment were more heterogeneous; however, these children were also at risk of poor school readiness irrespective of the timing and/or chronicity of the alleged maltreatment. The findings suggest that all children with maltreatment allegations are at risk for poor school readiness; hence, these children may need additional support to increase the chance of a successful school transition. Interventions should commence prior to the start of school to mitigate early developmental difficulties that children with a history of maltreatment allegations may be experiencing, with the aim of reducing the incidence of continuing difficulties in the first year of school and beyond.
abstract_id: PUBMED:15970323
Do allegations of emotional maltreatment predict developmental outcomes beyond that of other forms of maltreatment? Objectives: To understand the features of child abuse/neglect (CA/N) allegations in cases with emotional maltreatment (EMT) allegations, as well as the features of the EMT allegations themselves, and to describe any associations of EMT with distinct impairments of children's behavior, emotion and functioning.
Method: The sample consisted of 806 high-risk children, 545 with one or more maltreatment reports to CPS. The Maltreatment Classification System was used to record the number and severity levels of maltreatment allegations, which compared cases with and without EMT. Multiple regression analyses were conducted using 10 outcome scales from the Child Behavior Checklist, Vineland Screener, and Trauma Symptom Checklist. Successive blocks of predictor variables included demographics, maltreatment classification variables, maternal and family characteristics, and study site.
Results: When there were allegations of EMT as well as CA/N in a CPS case-record (by age 8), the CA/N allegations tended to be either more frequent or less severe than those kinds of allegations in cases without EMT. When neglect was alleged to occur with EMT, neglect allegations outnumbered allegations of EMT. However, when sexual abuse allegations were accompanied by EMT allegations, there were more EMT allegations than sexual abuse allegations in the cases. Higher severity ratings for EMT allegations than for physical abuse occurred when cases included any abuse. Distinctive effects of EMT subtypes were found between problems of safety/restriction and self-reported anger symptoms, and between problems of self-esteem/autonomy and posttraumatic stress.
Conclusion: Differences exist between the CA/N allegations in cases with and without EMT. Having few cases containing only EMT allegations made it difficult to assess distinctive harm associated with EMT. Certain types of EMT allegations were associated with increases in children's anger and posttraumatic stress.
abstract_id: PUBMED:25242708
More than words: the emotional maltreatment of children. Emotional maltreatment may be the most complex, prevalent, and damaging form of child maltreatment and can occur simultaneously with other forms of abuse. Children in the first few years of life seem to be at the greatest risk of suffering the most negative outcomes. Medical professionals can help identify and protect victims of emotional maltreatment by carefully observing caregiver-child interactions, paying attention to a family's social history, making referrals to community or counseling programs when necessary, and reporting any suspicions of maltreatment to Child Protective Services. A well-coordinated, multidisciplinary response must be enacted whenever emotional maltreatment is suspected or reported.
abstract_id: PUBMED:27865157
Allegations of maltreatment in custody. Background: Maltreatment in custody overlaps with torture. Concerned governments avoid informing. These governments withhold information and try to impose definitions. Therefore, reports often cannot be verified, with the consequence being classified as "allegation". The misery of a victim influences the recording. Engaged parties modify their reporting according to their intention. The difficulty to verify reports and the position of governments affects the perception and in consequence the presentation.
Methods: The corporeal effects of maltreatment in custody are described. They rely on personal observations, on cases treated in rehabilitation centres for victims of torture, and on personal collections of colleagues. The material is therefore selective.
Results: One can differentiate between non-life-threatening maltreatment (with or without mutilation), life-threatening maltreatment, and maltreatment meant to kill. Examples are described. The possibilities of diagnostic imaging are mentioned. The limits of the given overview are pointed out.
Conclusion: Knowing the possible forms is the basis for recognizing allegations. Diagnostic imaging can prove maltreatment only in rare cases. Reports and observations of maltreatment in custody create emotions. Governments and their organisations react by withholding information and imposing definitions. On the other hand, engaged parties insist that the misery of the victim has priority over objective description. These positions influence and modify the perception and the use of allegations of maltreatment in custody.
abstract_id: PUBMED:36426806
From Maltreatment to Psychiatric Disorders in Childhood and Adolescence: The Relevance of Emotional Maltreatment. Different forms of maltreatment are thought to incur a cumulative and non-specific toll on mental health. However, few large-scale studies draw on psychiatric diagnoses manifesting in early childhood and adolescence to identify sequelae of differential maltreatment exposures, and emotional maltreatment, in particular. Fine-grained multi-source dimensional maltreatment assessments and validated age-appropriate clinical interviews were conducted in a sample of N = 778 3 to 16-year-olds. We aimed to (a) substantiate known patterns of clinical outcomes following maltreatment and (b) analyse relative effects of emotional maltreatment, abuse (physical and sexual), and neglect (physical, supervisory, and moral-legal/educational) using structural equation modeling. Besides confirming known relationships between maltreatment exposures and psychiatric disorders, emotional maltreatment exerted particularly strong effects on internalizing disorders in older youth and externalizing disorders in younger children, accounting for variance over and above abuse and neglect exposures. Our data highlight the toxicity of pathogenic relational experiences from early childhood onwards, urging researchers and practitioners alike to prioritize future work on emotional maltreatment.
abstract_id: PUBMED:37515917
The evaluation of emotional maltreatment's effect on family dynamics and suicidal behaviors. Background: Emotional maltreatment and poor family functioning are known risks for youth suicide, but few studies have examined these issues as prospective predictors of future attempts.
Objectives: Examine family functioning and suicide risk associated with emotional maltreatment in youth with a lifetime history of major depressive disorder (MDD) and the prospective association of emotional maltreatment and family functioning with future suicide attempts.
Participants And Setting: Participants included 321 youth aged 12-15 years (251 with emotional maltreatment; 70 with no emotional maltreatment) recruited from a metropolitan children's hospital from 2011 to 2018. Prospective analyses included 280 youths (221 with emotional maltreatment; 59 without emotional maltreatment).
Methods: Semi-structured interviews and self-reports assessed family functioning and suicidal thoughts and behaviors in youth with and without emotional maltreatment at baseline, 6-month, 1-year, and 2-year follow-up. Multivariate analyses examined whether emotional maltreatment predicted future suicide attempts, beyond the effect of prior suicide attempts.
Results: Emotionally maltreated youth reported significantly lower scores for family adaptability, cohesion, and family alliance, and higher rates of suicidal ideation and suicide attempts, compared to youth without emotional maltreatment. Youth experiencing multiple forms of abuse were significantly more likely to attempt suicide at future timepoints, however this association was attenuated after controlling for prior suicide attempts.
Conclusion: Youth who experienced emotional maltreatment had a significantly higher percentage of past suicidal thoughts and behaviors and significantly less favorable scores for family functioning associated with an increased suicide risk. Findings support family-focused suicide prevention strategies as a promising approach to reduce youth suicide.
abstract_id: PUBMED:27490515
Childhood emotional maltreatment and mental disorders: Results from a nationally representative adult sample from the United States. Child maltreatment is a public health concern with well-established sequelae. However, compared to research on physical and sexual abuse, far less is known about the long-term impact of emotional maltreatment on mental health. The overall purpose of this study was to examine the association of emotional abuse, emotional neglect, and both emotional abuse and neglect with other types of child maltreatment, a family history of dysfunction, and lifetime diagnoses of several Axis I and Axis II mental disorders. Data were from the National Epidemiological Survey on Alcohol and Related Conditions collected in 2004 and 2005 (n=34,653). The most prevalent form of emotional maltreatment was emotional neglect only (6.2%), followed by emotional abuse only (4.8%), and then both emotional abuse and neglect (3.1%). All categories of emotional maltreatment were strongly related to other forms of child maltreatment (odds ratios [ORs] ranged from 2.1 to 68.0) and a history of family dysfunction (ORs ranged from 2.2 to 8.3). In models adjusting for sociodemographic characteristics, all categories of emotional maltreatment were associated with increased odds of almost every mental disorder assessed in this study (adjusted ORs ranged from 1.2 to 7.4). Many relationships remained significant independent of experiencing other forms of child maltreatment and a family history of dysfunction (adjusted ORs ranged from 1.2 to 3.0). The effects appeared to be greater for active (i.e., emotional abuse) relative to passive (i.e., emotional neglect) forms of emotional maltreatment. Childhood emotional maltreatment, particularly emotionally abusive acts, is associated with increased odds of lifetime diagnoses of several Axis I and Axis II mental disorders.
abstract_id: PUBMED:33731084
The invisible scars of emotional abuse: a common and highly harmful form of childhood maltreatment. Background: Childhood maltreatment (CM) is unfortunately widespread globally and has been linked with an increased risk of a variety of psychiatric disorders in adults, including posttraumatic stress disorder (PTSD). These associations are well established in the literature for some maltreatment forms, such as sexual and physical abuse. However, the effects of emotional maltreatment are much less explored, even though this type figures among the most common forms of childhood maltreatment. Thus, the present study aims to investigate the impact of each type of childhood maltreatment, both individually and conjointly, on revictimization and PTSD symptom severity using a nonclinical college student sample.
Methods: Five hundred and two graduate and undergraduate students participated in the study by completing questionnaires assessing lifetime traumatic experiences in general, maltreatment during childhood and PTSD symptoms. Bivariate and multivariate negative binomial regressions were applied to examine the associations among childhood maltreatment, revictimization, and PTSD symptom severity.
Results: Our results showed that using bivariate models, all types of CM were significantly associated with revictimization and PTSD symptom severity. Multivariate models showed that emotional abuse was the type of maltreatment associated with the highest incidence rates of revictimization and PTSD symptom severity.
Conclusions: These data provide additional evidence of the harmful effects of childhood maltreatment and its long-term consequences for individuals' mental health. Notably, the findings highlight the importance of studying the impacts of emotional abuse, which seems to be a highly prevalent, understudied, and chronic form of maltreatment that is as toxic as other maltreatment forms.
abstract_id: PUBMED:35310289
Peer Victimization Influences Attention Processing Beyond the Effects of Childhood Maltreatment by Caregivers. Background: Different types of maltreatment (emotional, physical, and sexual) lead to distortions in emotion and attention processing. The present study investigated whether the experience of peer victimization in childhood and adolescence has an additional influence on attention processing in adulthood.
Methods: Two non-clinical samples consisting of individuals with different levels of experiences of maltreatment were recruited. In an evaluative conditioning task, images of faces with neutral emotional expression were either associated with short videos of intense negative statements, or associated with neutral videos. Subsequently, these faces were used as stimuli in an emotional Stroop task as well as a dot-probe task.
Results: In both tasks, hierarchical regression analyses revealed that retrospective reports of relational peer victimization made an incremental contribution to the prediction of attentional biases beyond child maltreatment. In the emotional Stroop task, emotional abuse was the strongest predictor for an attentional bias showing delayed responses to negatively associated faces, while peer victimization was associated with faster responses to negatively associated faces. In the dot-probe task, relational peer victimization was the strongest predictor for an attentional bias. When the attentional bias was examined in more detail, though, peer victimization did not show incremental contributions although emotional abuse remained the strongest predictor for facilitated attention toward negatively associated neutral faces.
Conclusion: Experiences of peer victimization leave additional cognitive scars beyond effects of childhood maltreatment by caregivers. It is likely that attentional biases in the aftermath of victimization put individuals at risk for the development of psychopathology.
Answer: Allegations of emotional maltreatment do predict developmental outcomes over and above other forms of maltreatment. Studies have shown that emotional maltreatment can have distinct impacts on children's behavior, emotion, and functioning. For instance, emotional maltreatment has been associated with increases in children's anger and posttraumatic stress, suggesting that it can have unique and significant effects on developmental outcomes (PUBMED:15970323). Emotional maltreatment has also been linked to particularly strong effects on internalizing disorders in older youth and externalizing disorders in younger children, accounting for variance over and above abuse and neglect exposures (PUBMED:36426806). Additionally, emotional maltreatment has been associated with increased odds of lifetime diagnoses of several Axis I and Axis II mental disorders, with the effects appearing to be greater for active forms of emotional maltreatment, such as emotional abuse, compared to passive forms, like emotional neglect (PUBMED:27490515). Furthermore, emotional abuse has been identified as the type of maltreatment associated with the highest incidence rates of revictimization and PTSD symptom severity, highlighting its significant long-term consequences for mental health (PUBMED:33731084).
These findings underscore the importance of recognizing and addressing emotional maltreatment as a serious and harmful form of child maltreatment that can have profound and lasting effects on developmental outcomes, distinct from those associated with other forms of maltreatment. |
Instruction: The Midfoot Fusion Bolt: a new perspective?
Abstracts:
abstract_id: PUBMED:25527299
The Midfoot Fusion Bolt: a new perspective? Background: There is no current guideline or consensus regarding the optimal surgical treatment of midfoot Charcot. Due to the vast diversity of locations, it is difficult to make a general statement. Various types of screws and plates are currently being used since they have been tested and declared to be the most stable. The Midfoot Fusion Bolt is a new device which needs approval since long-term results are lacking. A short summary of currently published papers and results from our own institution are provided.
Objectives: The aim of this study was to investigate short-term results including complications and review published surveys.
Methods: The Midfoot Fusion Bolt is a solid, intramedullary screw. An antegrade as well as a retrograde technique are postulated for insertion. A total of 16 patients/17 feet in two specialized foot and ankle centers were included. BMI, HbA1c, satisfaction rates, complication rates, and expert opinions were recorded.
Results: The bolts were used an average of 21.17 months (range 3-55 months) in 16 patients/17 feet. Between 2009 and 2014, six bolts had to be removed. We encountered 4 cases of postoperative ulceration: 2 cases healed postoperatively, while the other 2 cases led to amputation. The average fusion rate was 92.35 %.
Conclusion: The Midfoot Fusion Bolt is no longer advised for stand-alone use, since there have been issues with insufficient stability. However, stable conditions could be achieved with additional screws or plates. Prospective studies and biomechanical testing for general conclusions are still required to make a meaningful assessment.
abstract_id: PUBMED:36722707
Low Preoperative Albumin Associated With Increased Risk of Superficial Surgical Site Infection Following Midfoot, Hindfoot, and Ankle Fusion. Background: This study investigates the effect of malnutrition, defined by hypoalbuminemia, on rates of complication, readmission, reoperation, and mortality following midfoot, hindfoot, or ankle fusion.
Methods: The National Surgical Quality Improvement Program (NSQIP) database was queried from 2005 to 2019 to identify 500 patients who underwent midfoot (n = 233), hindfoot (n = 261), or ankle (n = 117) fusion. Patients were stratified into normal (n = 452) or low (n = 48) albumin group, which was defined by preoperative serum albumin level <3.5 g/dL. Demographics, medical comorbidities, hospital length of stay (LOS), and 30-day complication, readmission, and reoperation rates were compared between groups. The mean age of the cohort was 58.7 (range, 21-89) years.
Results: Hypoalbuminemia patients were significantly more likely to have diabetes (P < .001), be on dialysis (P < .001), and be functionally dependent (P < .001). The LOS was significantly greater among the low albumin group (P < .001). The hypoalbuminemia cohort also exhibited a significantly increased likelihood of superficial infection (P = .048). Readmission (P = .389) and reoperation (P = .611) rates did not differ between the groups.
Conclusion: This study shows that malnourished patients have an increased risk of superficial infection following foot and ankle fusions but are not at an increased risk of readmission or reoperation, suggesting that low albumin confers an elevated risk of surgical site infection.
Levels Of Evidence: Level III, Retrospective cohort study.
abstract_id: PUBMED:32405198
Midfoot arthritis - current concepts review. Midfoot arthritis causes chronic foot pain and significant impairment of daily activities. Although post-traumatic arthritis and primary osteoarthritis are the most common pathologies encountered, surgeons need to rule out inflammatory causes and neuropathic aetiology before starting treatment. Steroid injections are invaluable in conservative management and have diagnostic value in guiding surgical treatment. For the definitive surgical option of fusion, there are a variety of fixation devices available. A successful union is linked to a satisfactory outcome, which most authors report to be in the range of 90% following the key principles of careful patient selection, pre-operative planning, adequate joint preparation and a stable fixation.
abstract_id: PUBMED:26033061
The medial column Synthes Midfoot Fusion Bolt is associated with unacceptable rates of failure in corrective fusion for Charcot deformity: Results from a consecutive case series. Charcot neuro-osteoarthropathy (CN) of the midfoot presents a major reconstructive challenge for the foot and ankle surgeon. The Synthes 6 mm Midfoot Fusion Bolt is both designed and recommended for patients who have a deformity of the medial column of the foot due to CN. We present the results from the first nine patients (ten feet) on which we attempted to perform fusion of the medial column using this bolt. Six feet had concurrent hindfoot fusion using a retrograde nail. Satisfactory correction of deformity of the medial column was achieved in all patients. The mean correction of calcaneal pitch was from 6° (-15° to +18°) pre-operatively to 16° (7° to 23°) post-operatively; the mean Meary angle from 26° (3° to 46°) to 1° (1° to 2°); and the mean talometatarsal angle on dorsoplantar radiographs from 27° (1° to 48°) to 1° (1° to 3°). However, in all but two feet, at least one joint failed to fuse. The bolt migrated in six feet, all of which showed progressive radiographic osteolysis, which was considered to indicate loosening. Four of these feet have undergone a revision procedure, with good radiological evidence of fusion. The medial column bolt provided satisfactory correction of the deformity but failed to provide adequate fixation for fusion in CN deformities in the foot. In its present form, we cannot recommend the routine use of this bolt.
abstract_id: PUBMED:30196386
Management of Midfoot Fractures and Dislocations. Purpose Of Review: To outline the classic and recent literature of midfoot fractures and dislocations.
Recent Findings: There has been an evolution of implant technology to include mini-fragment fixation, suture fixation, and staples. Their efficacy is still being elucidated in the literature. Also, there has been a recent push for primary fusion, which we will discuss. Open reduction internal fixation of the midfoot remains the gold standard treatment, to which all other treatments are compared. It remains to be seen if adjunct fixation techniques are efficacious enough to provide a good result. Further study is needed to determine which patients are likely to progress to debilitating arthrosis and require fusion.
abstract_id: PUBMED:36965749
Progression to Hindfoot Charcot Neuroarthropathy After Midfoot Charcot Correction in Patients With and Without Subtalar Joint Arthrodesis. Charcot neuroarthropathy (CNA) is a disabling and progressive disease that affects the bones and joints of the foot. Successful Charcot reconstruction focuses on restoring anatomic alignment, obtaining multiple joint arthrodesis, selecting stable fixation, preserving foot length, and creating a foot suitable for community ambulation in supportive shoegear. Intramedullary fixation arthrodesis of the medial and lateral columns has been previously reported to produce improvement in midfoot Charcot reconstruction. More recently, a growing trend of stabilization of the subtalar joint (STJ) has been incorporated alongside the medial and lateral column fusion. Our objectives were to retrospectively review patients who underwent midfoot Charcot reconstructive surgery, whether with or without accompanying STJ arthrodesis, and establish which patients progressed to ankle CNA. Of the 72 patients who underwent midfoot Charcot reconstruction, 28 (38.9%) underwent STJ arthrodesis, and 22 converted to ankle CNA (30.6%). Fourteen (63.6%) of 22 ankle CNA cases had not undergone STJ arthrodesis; 8 patients (36.4%) had it. A Fisher exact test was performed to identify the relationship between those without STJ arthrodesis and those progressing to ankle CNA; it revealed statistical significance (p = .001). Performing an STJ arthrodesis with midfoot Charcot reconstructive surgery may be beneficial to aiding in hindfoot stability, establishing a plantigrade foot, and providing further insight into the management of midfoot Charcot.
abstract_id: PUBMED:24262671
Intramedullary medial column support with the Midfoot Fusion Bolt (MFB) is not sufficient for osseous healing of arthrodesis in neuroosteoarthropathic feet. Introduction: To address midfoot instability of Charcot disease a promising intramedullary implant has recently been developed to allow for an arthrodesis of the bones of the medial foot column in an anatomic position. We report on a group of patients with Charcot arthropathy and instability at the midfoot where the Midfoot Fusion Bolt had been employed as an implant for the reconstruction of the collapsed medial foot column.
Material And Methods: A total of 7 patients (median age 56.3 years, range 47-68) were enrolled with severe Charcot deformation at Eichenholtz stages I-II (Sanders and Frykberg types II and III). The medial column was stabilised primarily with an intramedullary rod (Midfoot Fusion Bolt) in stand-alone technique in order to reconstruct the osseous foot geometry. The bolt was inserted in a retrograde mode via the head of MTI and forwarded into the talus. Follow-up time averaged 27 months (range 9-30).
Results: Intraoperative plantigrade reconstruction and restoration of the anatomic foot axes of the medial column was achieved in all cases with the need for revision surgery in 6 out of 7 patients due to soft tissue problems (2 impaired wound healing, 1 postoperative haematoma, 3 early infection). Implant-associated problems were seen in one case intra-operatively with fracture of the first metatarsal shaft and two cases with implant loosening of the MFB and need for implant removal during long time follow-up. Two patients underwent lower leg amputation due to a progressive deep soft tissue infection. One patient healed uneventfully without need for revision surgery. Except for one case recurrent ulcerations were not observed, so far.
Conclusion: Medial column support in midfoot instability of Charcot arthropathy with a single intramedullary rod does not provide enough stability to achieve osseous fusion. MFB loosening was associated with deep infection in a majority of our cases. To prevent early loosening of the intramedullary rod and to increase rotational stability, additional implants as angular stable plates are needed at the medial column and eventually an additional stabilisation of the lateral foot column where manifest instability exists at the time of primary surgical intervention.
abstract_id: PUBMED:29073772
Radiographic Results of Nitinol Compression Staples for Hindfoot and Midfoot Arthrodeses. Background: The purpose of this study was to determine the radiographic union rate after midfoot and hindfoot arthrodeses using a new generation of nitinol staples, and to compare outcomes between a nitinol staple construct and a nitinol staple and threaded compression screw construct.
Methods: A retrospective chart review was performed to identify patients who underwent hindfoot or midfoot arthrodesis using a new generation of nitinol compression staples with or without a partially threaded cannulated screw with minimum 3-month radiographic follow-up. The primary outcome variable was radiographic evidence of arthrodesis on radiographs and, when available, computed tomographic scan in patients who underwent midfoot or hindfoot arthrodesis using nitinol staples. Ninety-six patients and 149 joints were eligible for analysis. Median radiographic follow-up was 5.7 months.
Results: Radiographic union was seen in 93.8% (60/64) of patients and 95.1% (98/103) of joints using the nitinol staple construct. Radiographic union was seen in 90.6% (29/32) of patients and 95.7% (44/46) of joints using the nitinol combined staple and screw construct. There was no significant difference in radiographic union rate or revision surgery between the 2 groups. Seven patients developed nonunion, 4 in the nitinol staple construct group and 3 in the staple and screw group.
Conclusions: New-generation nitinol staples were safe and effective for hindfoot and midfoot arthrodeses, with a high radiographic union rate. The use of a partially threaded screw for additional fixation was not found to either significantly increase or decrease radiographic fusion with nitinol staple fixation.
Level Of Evidence: Level III, comparative cohort study.
abstract_id: PUBMED:29619844
Comparison of Screws to Plate-and-Screw Constructs for Midfoot Arthrodesis. Background: We performed a prospective comparison of screws versus plate-and-screws for midfoot arthrodesis.
Materials: Between 2010 and 2015, a total of 50 patients with midfoot arthritis received screws or plate-and-screws for their midfoot arthrodesis. Function and pain were graded with the Foot and Ankle Ability Measures (FAAM) and visual analog scale (VAS), respectively. Data regarding arthrodesis healing and complications were recorded.
Results: Twenty-five patients received screws for fusion, where 21 achieved full arthrodesis healing by 6 months from surgery. Mean FAAM increased from 46.4 to 82.7 of 100 between initial and final visit. Mean pain decreased from 8.3 to 2.1 of 10 between initial and latest encounter. Twenty-five patients received plate-and-screws for their fusion, where 23 achieved full arthrodesis healing by 6 months from surgery. Mean FAAM increased from 48.2 to 86.3 of 100 between initial and final visit. Mean pain decreased from 8.0 to 1.8 of 10 between initial and latest encounter. These postoperative scores were not significantly different from patients with screws (P > .05). Three and 6 patients with screws versus plate-and-screws, respectively, developed wound complications (P = .03). Four and 2 patients with screws versus plate-and-screws, respectively, developed delayed or nonunion of their arthrodesis (P = .15).
Discussion: A comparison of outcomes of midfoot arthrodesis with screws or plate-and-screws has not been previously reported. Both constructs provide improved function and pain. Using screws might generate less wound complications, but using plate-and-screws might produce higher rates of bony healing.
Level Of Evidence: Level I, randomized, prospective, and comparative cohort study.
abstract_id: PUBMED:33327765
Nonunion Rates in Hind- and Midfoot Arthrodesis in Current, Ex-, and Nonsmokers. Background: Nonunion rates in hind or midfoot arthrodesis have been reported as high as 41%. The most notable and readily modifiable risk factor that has been identified is smoking. In 2018, 14.4% of the UK population were active smokers. We examined the effect of smoking status on union rates for a large cohort of patients undergoing hind- or midfoot arthrodesis.
Methods: In total, 381 consecutive primary joint arthrodeses were identified from a single surgeon's logbook (analysis performed on a per joint basis, with a triple fusion reported as 3 separate joints). Patients were divided based on self-reported smoking status. Primary outcome was clinical union. Delayed union, infection, and the need for ultrasound bone stimulation were secondary outcomes.
Results: Smoking prevalence was 14.0%, and 32.2% were ex-smokers. Groups were comparable for sex, diabetes, and body mass index. Smokers were younger and had fewer comorbidities. Nonunion rates were higher in smokers (relative risk, 5.81; 95% CI, 2.54-13.29; P < .001) with no statistically significant difference between ex-smokers and nonsmokers. Smokers had higher rates of infection (P = .05) and bone stimulator use (P < .001). Among smokers, there was a trend toward slower union with heavier smoking (P = .004).
Conclusion: This large retrospective cohort study confirmed previous evidence that smoking has a considerable negative effect on union in arthrodesis. The 5.81 relative risk in a modifiable risk factor is extremely high. Arthrodesis surgery should be undertaken with extreme caution in smokers. Our study shows that after cessation of smoking, the risk returns to normal, but we were unable to quantify the time frame.
Level Of Evidence: Level III, retrospective cohort study.
Answer: The Midfoot Fusion Bolt (MFB) was introduced as a new device for the surgical treatment of midfoot Charcot, a condition that can cause significant instability and deformity in the foot. The MFB is a solid, intramedullary screw that can be inserted using an antegrade or retrograde technique (PUBMED:25527299). However, the long-term results and approval of this device are still pending, as there is a lack of consensus on the optimal surgical treatment for midfoot Charcot due to the diversity of locations affected (PUBMED:25527299).
Initial short-term results showed an average fusion rate of 92.35%, but there were complications, including postoperative ulceration and cases that led to amputation. Due to issues with insufficient stability, the MFB is no longer advised for single-device use. Instead, stable conditions could be achieved with additional screws or plates, and further prospective studies and biomechanical testing are required for a meaningful assessment (PUBMED:25527299).
Other studies have reported unacceptable rates of failure with the MFB in corrective fusion for Charcot deformity, with many joints failing to fuse and the bolt migrating in several cases, leading to progressive radiographic osteolysis and revisions (PUBMED:26033061). Similarly, another study found that intramedullary medial column support with the MFB was not sufficient for osseous healing in neuroosteoarthropathic feet, with a high rate of revision surgery due to soft tissue problems and implant-associated issues such as loosening (PUBMED:24262671).
In contrast, other fixation methods, such as nitinol compression staples and plate-and-screw constructs, have been studied with varying results. Nitinol staples showed a high radiographic union rate for hindfoot and midfoot arthrodeses (PUBMED:29073772), while a comparison of screws to plate-and-screw constructs for midfoot arthrodesis found that both methods improved function and pain, with screws potentially generating fewer wound complications and plate-and-screws possibly producing higher rates of bony healing (PUBMED:29619844).
In conclusion, while the MFB offered a new perspective for midfoot fusion, its use has been challenged by issues of stability and high rates of complications, leading to a preference for additional or alternative fixation methods to achieve successful outcomes in the surgical management of midfoot Charcot and other midfoot conditions. The progression to hindfoot Charcot neuroarthropathy after midfoot Charcot correction was also studied, with findings suggesting that subtalar joint arthrodesis performed alongside medial and lateral column fusion may be beneficial in aiding hindfoot stability and preventing progression to ankle Charcot neuroarthropathy (PUBMED:36965749).
Furthermore, the impact of patient factors on surgical outcomes has been investigated. For instance, low preoperative albumin levels, indicative of malnutrition, were associated with an increased risk of superficial surgical site infection following midfoot, hindfoot, or ankle fusion, although they did not increase the risk of readmission or reoperation (PUBMED:36722707). Additionally, smoking status has been identified as a significant modifiable risk factor affecting union rates in hind- or midfoot arthrodesis, with smokers having a considerably higher risk of nonunion compared to non-smokers and ex-smokers (PUBMED:33327765).
Overall, while the MFB introduced a novel approach to midfoot fusion, its application has been met with challenges, and the current evidence suggests that a combination of fixation devices and consideration of patient-specific factors, such as nutritional status and smoking, are important for optimizing surgical outcomes in midfoot reconstruction procedures. Further research and long-term studies are needed to establish the most effective and reliable surgical techniques for these complex foot conditions. |
Instruction: Do protective devices prevent needlestick injuries among health care workers?
Abstracts:
abstract_id: PUBMED:8821109
Do protective devices prevent needlestick injuries among health care workers? Objectives: To determine the effectiveness and direct cost of two protective devices, a shielded 3 ml safety syringe (Safety-Lok; Becton Dickinson and Co., Becton Dickinson Division, Franklin Lakes, N.J.) and the components of a needleless IV system (InterLink; Baxter Healthcare Corp., Deerfield, Ill.), in preventing needlestick injuries to health care workers.
Design: Twelve-month prospective, controlled, before-and-after trial with a standardized questionnaire to monitor needlestick injury rates.
Setting: Six hospital inpatient units, consisting of three medical units, two surgical units (all of which were similar in patient census, acuity, and frequency of needlesticks), and a surgical-trauma intensive care unit, at a 900-bed urban university medical center.
Participants: All nursing personnel, including registered nurses, licensed practical nurses, nursing aides, and students, as well as medical teams consisting of an attending physician, resident physician, interns, and medical students on the study units.
Intervention: After a 6-month prospective surveillance period, the protective devices were randomly introduced to four of the chosen study units and to the surgical-trauma intensive care unit.
Results: Forty-seven needlesticks were reported throughout the entire study period, 33 in the 6 months before and 14 in the 6 months after the introduction of the protective devices. Nursing staff members who were using hollow-bore needles and manipulating intravenous lines accounted for the greatest number of needlestick injuries in the pre-intervention period. The overall rate of needlestick injury was reduced by 61%, from 0.785 to 0.303 needlestick injuries per 1000 health care worker-days after the introduction of the protective devices (relative risk = 1.958; 95% confidence interval, 1.012 to 3.790; p = 0.046). Needlestick injury rates associated with intravenous line manipulation, procedures with 3 ml syringes, and sharps disposal were reduced by 50%; however, reductions in these subcategories were not statistically significant. No seroconversions to HIV-1 or hepatitis B virus seropositivity occurred among those with needlestick injuries. The direct cost for each needlestick prevented was $789.
Conclusions: Despite an overall reduction in needlestick injury rates, no statistically significant reductions could be directly attributed to the protective devices. These devices are associated with a significant increase in cost compared with conventional devices. Further studies must be concurrently controlled to establish the effectiveness of these devices.
abstract_id: PUBMED:22317004
Interventions to prevent needle stick injuries among health care workers. Needle stick injuries (NSIs) are frequently reported as occupational injuries among health care workers. The health effects of an NSI can be significant when blood-to-blood contact occurs from patient to health care worker. The objective of this study was to evaluate whether the number of NSIs decreased among health care workers at risk in one Dutch academic hospital after the introduction of injection needles with safety devices in combination with an interactive workshop. In a cluster three-armed randomized controlled trial, 23 hospital divisions (n=796 health care workers) were randomly assigned to a group that was subjected to the use of a 'safety device plus workshop', to a group that was subjected to a 'workshop only' or to a control group with no intervention. The combined intervention of the introduction of needle safety devices and an interactive workshop led to the highest reduction in the number of self-reported NSIs compared to a workshop alone or no intervention. For practice, the use of relatively simple protective needle safety devices and interactive communication are effective measures for reducing NSIs.
abstract_id: PUBMED:19248745
Sharp object injuries among health care workers in a Chinese province. Health care workers in nine hospitals in Fujian were surveyed between December 2005 and February 2006 regarding the occurrence of sharp object injuries (SOIs). Survey results indicated that 71.3% of the health care workers had sustained SOIs during the past year. The rates of SOIs among surgeons, nurses, anesthesiologists, and clinical laboratory workers were 68.7%, 76.9%, 88.1%, and 40.2%, respectively. Approximately 50% of the SOIs occurred while devices were being used. Disposable syringes caused most of the injuries. A lack of protective and safe devices, heavy workloads, and carelessness contributed to SOIs. SOIs can be reduced among health care workers by decreasing unnecessary manipulation, using safety devices, disposing of used objects properly, and reasonably allocating workloads.
abstract_id: PUBMED:15573049
Impact of safety devices for preventing percutaneous injuries related to phlebotomy procedures in health care workers. Background: Use of protective devices has become a common intervention to decrease sharps injuries in the hospitals; however few studies have examined the results of implementation of the different protective devices available.
Objective: To determine the effectiveness of 2 protective devices in preventing needlestick injuries to health care workers.
Methods: Sharps injury data were collected over a 7-year period (1993-1999) in a 3600-bed tertiary care university hospital in France. Pre- and postinterventional rates were compared after the implementation of 2 safety devices for preventing percutaneous injuries (PIs) related to phlebotomy procedures.
Results: From 1993 to 1999, an overall decrease in needlestick-related injuries was noted. Since 1996, the incidence of phlebotomy-related PIs has significantly decreased. Phlebotomy procedures accounted for 19.4% of all percutaneous injuries in the preintervention period and 12% in the postintervention period (RR, 0.62; 95% CI, 0.51-0.72; P < .001). The needlestick-related injury incidence rate decreased significantly after the implementation of the 2 safety devices, representing a 48% decline in incidence rate overall.
Conclusions: The implementation of these safety devices apparently contributed to a significant decrease in percutaneous injuries related to phlebotomy procedures, but they constitute only part of a strategy that includes education of health care workers and collection of appropriate data that allow analysis of residual percutaneous injuries.
abstract_id: PUBMED:14997076
A review of needle-protective devices to prevent sharps injuries. The risk of occupational transmission of blood-borne pathogens via sharp devices remains a significant hazard to both healthcare and ancillary workers. Previously, education, training, universal precautions and hepatitis B vaccination have been implemented in an attempt to reduce the risk. However, the most recent preventive strategy is needle-protective devices. These have been developed from conventional products but incorporate a safety mechanism that, when activated, covers the needletip and thus assists in the prevention of needlestick injuries and potential seroconversion to blood-borne pathogens. To date, a number of studies have been undertaken to evaluate these products, the majority of which show these devices to be safe and reliable in addition to potentially reducing associated needlestick injuries. However, to encourage the introduction of these devices in the UK, further studies are needed to either support or refute initial findings and to encourage the evaluation and subsequent implementation of needle-protective devices.
abstract_id: PUBMED:25663385
Sharp truth: health care workers remain at risk of bloodborne infection. Background: In 2013, new regulations for the prevention of sharps injuries were introduced in the UK. All health care employers are required to provide the safest possible working environment by preventing or controlling the risk of sharps injuries.
Aims: To analyse data on significant occupational sharps injuries among health care workers in England, Wales and Northern Ireland before the introduction of the 2013 regulations and to assess bloodborne virus seroconversions among health care workers sustaining a blood or body fluid exposure.
Methods: Analysis of 10 years of information on percutaneous and mucocutaneous exposures to blood or other body fluids from source patients infected with a bloodborne virus, collected in England, Wales and Northern Ireland through routine surveillance of health care workers reported for the period 2002-11.
Results: A total of 2947 sharps injuries involving a source patient infected with a bloodborne virus were reported by health care workers. Significant sharps injuries were 67% higher in 2011 compared with 2002. Sharps injuries involving an HIV-, hepatitis B virus- or hepatitis C virus (HCV)-infected source patient increased by 107, 69 and 60%, respectively, between 2002 and 2011. During the study period, 14 health care workers acquired HCV following a sharps injury.
Conclusions: Our data show that during a 10-year period prior to the introduction of new regulations in 2013, health care workers were at risk of occupationally acquired bloodborne virus infection. To prevent sharps injuries, health care service employers should adopt safety-engineered devices, institute safe systems of work and promote adherence to standard infection control procedures.
abstract_id: PUBMED:24932385
Adverse health problems among municipality workers in alexandria (egypt). Background: Solid waste management has emerged as an important human and environmental health issue. Municipal solid waste workers (MSWWs) are potentially exposed to a variety of occupational biohazards and safety risks. The aim of this study was to describe health practices and safety measures adopted by workers in the main municipal company in Alexandria (Egypt) as well as the pattern of the encountered work related ill health.
Methods: A cross-sectional study was conducted between January and April 2013. We interviewed and evaluated 346 workers serving in about 15 different solid waste management activities regarding personal hygiene, the practice of security and health care measures and the impact of solid waste management.
Results: Poor personal hygiene and self-care, and inadequate protective and safety measures for potentially hazardous exposure were described. The impact of solid waste management on the health of MSWWs entailed a high prevalence of gastrointestinal, respiratory, skin and musculoskeletal morbidities. The occurrence of accidents and needle stick injuries amounted to 46.5% and 32.7%, respectively. The risk of work-related health disorders was notably higher among workers directly exposed to solid waste when compared with a group with low exposure potential, particularly for diarrhea (odds ratio [OR] = 2.2, 95% confidence interval [CI] = 1.2-3.8), vomiting (OR = 2.7, 95% CI = 1.1-6.6), abdominal colic (OR = 1.9, 95% CI = 1.1-3.2), dysentery (OR = 3.6, 95% CI = 1.3-10), dyspepsia (OR = 1.8, 95% CI = 1.1-3), low back/sciatic pain (OR = 3.5, 95% CI = 1.8-7), tinnitus (OR = 6.2, 95% CI = 0.3-122) and needle stick injury (OR = 3.4, 95% CI = 2.1-5.5).
Conclusions: Workers exposed to solid waste exhibit a significant increase in the risk of ill health. The physician's role and health education could be key to ensuring the health and safety of MSWWs.
abstract_id: PUBMED:9185365
Occupational exposure to the risk of HIV infection among health care workers in Mwanza Region, United Republic of Tanzania. During 1993, we collected data on knowledge of human immunodeficiency virus (HIV) transmission, availability of equipment, protective practices and the occurrence of prick and splash incidents in nine hospitals in the Mwanza Region in the north-west of the United Republic of Tanzania. Such incidents were common, with the average health worker being pricked five times and being splashed nine times per year. The annual occupational risk of HIV transmission was estimated at 0.27% for health workers. Among surgeons, the risk was 0.7% (i.e. more than twice as high) if no special protective measures were taken. Health workers' knowledge and personal protective practices must therefore be improved and the supply of protective equipment supported. Reduction of occupational risk of HIV infection among health workers should be an integral part of acquired immunodeficiency syndrome (AIDS) control strategies.
abstract_id: PUBMED:20416972
Provision and use of safety-engineered medical devices among home care and hospice nurses in North Carolina. Background: Nurses who provide care in the home are at risk of blood exposure from needlesticks. Using safety-engineered medical devices reduces the risk of needlestick. The objectives of this study were to assess provision of safety devices by home care and hospice agencies as well as the use of these devices by home care and hospice nurses in North Carolina, and to examine the association between provision and use.
Methods: A mail survey was conducted among North Carolina home care and hospice nurses in 2006.
Results: The adjusted response rate was 69% (n = 833). The percentage of nurses who were always provided with safety devices ranged from 51% (blood tube holders) to 83% (winged steel needles). Ninety-five percent of nurses who were always provided with safety devices, but only 15%-50% of nurses who were not always provided with safety devices, used a safety device the last time they used that general type of device. Among nurses who did not use a safety device on that occasion, 60%-80% did not use it because it was not provided by the agency.
Conclusion: This study suggests that limited access is the primary reason for home care/hospice nurses' failure to use safety devices. The policy goal of providing safety devices to health care workers in all situations in which such devices could reduce their risk of needlestick is not being achieved for home care nurses in North Carolina.
abstract_id: PUBMED:31093121
Needlestick injury prevention training among health care workers in the Caribbean. The rate of human immunodeficiency virus (HIV), hepatitis B virus (HBV), and hepatitis C virus (HCV) infections among health care workers caused by sharps injuries is higher in the Caribbean and Latin America than in other regions of the world. To respond to and reduce occupational exposures to bloodborne pathogens while also strengthening capacities in the Caribbean, needlestick injury prevention training programs for health care workers were implemented, beginning in 2011. The programs included lectures, workshops, policy reviews, evaluations of safety devices, and workplace assessment. During the training, baseline data from health care workers on their personal history of needlestick injuries and bloodborne pathogen exposure was collected. That baseline data showed that 40% of the participants had sustained sharps injuries during their professional career. In this capacity-building effort, 210 health care workers from five countries have been trained, six health care centers in the Caribbean have been evaluated, and occupational safety and health committees have been established in various countries to monitor and improve safety policies and practices.
Answer: Yes, protective devices have been shown to prevent needlestick injuries among health care workers. A study conducted over a 12-month period at a 900-bed urban university medical center found that the introduction of protective devices, including a shielded safety syringe and a needleless IV system, reduced the overall rate of needlestick injury by 61% among health care workers (PUBMED:8821109). Similarly, a cluster three-armed randomized controlled trial in a Dutch academic hospital demonstrated that the combined intervention of introducing needle safety devices and an interactive workshop led to the highest reduction in the number of self-reported needlestick injuries compared to a workshop alone or no intervention (PUBMED:22317004).
In a Chinese province, it was found that the use of safety devices, proper disposal of used objects, and reasonable allocation of workloads could reduce sharp object injuries (SOIs) among health care workers (PUBMED:19248745). A 7-year study in a French university hospital also reported a significant decrease in needlestick-related injuries after the implementation of two safety devices for preventing percutaneous injuries related to phlebotomy procedures (PUBMED:15573049).
A review of needle-protective devices indicated that the majority of studies show these devices to be safe, reliable, and potentially effective in reducing needlestick injuries (PUBMED:14997076). Furthermore, data analysis before the introduction of new regulations in the UK in 2013 showed that health care workers were at risk of occupationally acquired bloodborne virus infection, and the adoption of safety-engineered devices was recommended to prevent sharps injuries (PUBMED:25663385).
In North Carolina, a survey among home care and hospice nurses revealed that the primary reason for not using safety devices was limited access, suggesting that when provided, nurses were likely to use these devices to prevent needlestick injuries (PUBMED:20416972). Lastly, needlestick injury prevention training programs in the Caribbean, which included evaluations of safety devices, showed that 40% of health care workers had sustained sharps injuries during their professional career, highlighting the need for such preventive measures (PUBMED:31093121). |
Instruction: The timing of surgery for resectable metachronous liver metastases from colorectal cancer: Better sooner than later?
Abstracts:
abstract_id: PUBMED:34512899
Borderline resectable for colorectal liver metastases: Present status and future perspective. Surgical resection for colorectal liver metastases (CRLM) may offer the best opportunity to improve prognosis. However, only about 20% of CRLM cases are indicated for resection at the time of diagnosis (initially resectable), and the remaining cases are treated as unresectable (initially unresectable). Thanks to recent remarkable developments in chemotherapy, interventional radiology, and surgical techniques, the resectability of CRLM is expanding. However, some metastases are technically resectable but oncologically questionable for upfront surgery. In pancreatic cancer, such cases are categorized as "borderline resectable", and their definition and treatment strategies are explicit. However, in CRLM, although various poor prognosis factors have been identified in previous reports, no clear definition or treatment strategy for borderline resectable has yet been established. Since the efficacy of hepatectomy for CRLM was reported in the 1970s, multidisciplinary treatment for unresectable cases has improved resectability and prognosis, and clarifying the definition and treatment strategy of borderline resectable CRLM should yield further improvement in prognosis. This review outlines the present status and the future perspective for borderline resectable CRLM, based on previous studies.
abstract_id: PUBMED:20728416
The timing of surgery for resectable metachronous liver metastases from colorectal cancer: Better sooner than later? A retrospective analysis. Background: The benefit of preoperative chemotherapy in patients with initially resectable liver metastases from colorectal cancer is still a matter of debate.
Aims: We aim to evaluate the role of neoadjuvant chemotherapy on the outcome of patients with colorectal cancer metachronous liver metastases undergoing potentially curative liver resection.
Methods: One-hundred four patients were available for analysis. Tested variables included age, sex, primary tumour TNM stage, location and grading, the number of liver metastases, monolobar or bilobar location, interval time between liver metastases diagnosis and liver resection, Fong Clinical Risk Score (CRS). Neoadjuvant chemotherapy was administered according to the FOLFOX4 regimen.
Results: Forty-four patients underwent liver resection without receiving neoadjuvant chemotherapy (group A); 60 patients received neoadjuvant chemotherapy (group B). At univariate analysis, only the time of liver resection seemed to affect overall survival: patients in group A showed a median survival time significantly superior to that of patients in group B (48 vs. 31 months; p=0.0358).
Conclusions: Our findings suggest that, when feasible, resection of liver metastases should be considered as an initial approach in this setting. Further studies are needed to better delineate innovative therapeutic strategies that may lead to an improved outcome for colorectal cancer patients with surgically resectable liver metastases.
abstract_id: PUBMED:34645173
Advances in research on neoadjuvant therapy for resectable colorectal liver metastasis Surgery is recognized as the core treatment for colorectal liver metastasis (CRLM), but the recurrence rate remains relatively high, even for resectable CRLM. This suggests that treatment efficacy depends not only on surgical technique but also on the biological behavior of the tumor. For resectable CRLM, neoadjuvant therapy helps eliminate micrometastases, reduce the postoperative recurrence rate, assess tumor biological behavior, and improve prognosis. However, questions about which CRLM patients are suitable for neoadjuvant therapy and which regimen should be used remain debatable. This paper reviews the stratified management of resectable CRLM and the choice of neoadjuvant regimen, especially the value of targeted therapy, based on the latest guidelines and studies.
abstract_id: PUBMED:28234125
What is the Optimal Timing for Liver Surgery of Resectable Synchronous Liver Metastases from Colorectal Cancer? The optimal timing of the surgical strategy for colorectal cancer (CRC) presenting with resectable synchronous liver metastases remains unclear and controversial. The aim of this study was to compare simultaneous with staged resection, with respect to morbidity, mortality, and prognosis, including recurrence. A total of 107 patients who underwent initial hepatic resection for resectable synchronous liver metastasis from colorectal cancer were retrospectively analyzed. The 5-year disease-free survival rates were 16.4 per cent in the simultaneous group, and 24.0 per cent in the staged group (P = 0.5486). The 5-year overall survival rates were 70.7 per cent in the simultaneous group and 67.9 per cent in the staged group (P = 0.8254). Perioperative chemotherapy did not have a significant effect. Tumor depth of CRC (≥pT4) was the only key factor influencing prognosis. Postoperative intestinal anastomotic leakage occurred in nine patients (8.4%). On multivariate analysis, simultaneous surgery was shown to be the only independent risk factor for the occurrence of postoperative intestinal anastomotic leakage (P = 0.0163). In conclusion, neither timing of hepatic resection (simultaneous or staged) nor perioperative chemotherapy represented significant prognostic factors. The simultaneous surgery was the only independent risk factor for intestinal anastomotic leakage. Therefore, we recommend staged hepatic surgery for synchronous CRC and liver metastasis from colorectal cancer.
abstract_id: PUBMED:27275088
Surgical dilemmas in the management of colorectal liver metastases: The role of timing. Colorectal cancer (CRC) is an emerging health problem in the Western World, both for its rising incidence and for its metastatic potential. Almost half of the patients with CRC will develop liver metastases during the course of their disease. The liver surgeon dealing with colorectal liver metastases faces several surgical dilemmas, especially regarding the timing of operation. Should synchronous resectable metastases be treated before or after induction chemotherapy? Furthermore, in the case of synchronous colorectal liver metastases, which organ should be addressed first, the liver or the colon? These questions are raised in the editorial, and impetus for further investigation is provided, focusing on a multidisciplinary approach and individualization of treatment modalities.
abstract_id: PUBMED:34535965
BRAF V600E potentially determines "Oncological Resectability" for "Technically Resectable" colorectal liver metastases. Although reports of poor survival outcomes after hepatectomy for colorectal liver metastases (CRLM) with BRAF V600E mutation (mBRAF) exist, the role of mBRAF testing for technically resectable cases remains unclear. A single-center retrospective study was performed to investigate the survival outcomes of patients who underwent upfront hepatectomy for solitary resectable CRLM with mBRAF between January 2005 and December 2017 and to compare them with those of unresectable cases with mBRAF. Of 172 patients who underwent initial hepatectomy for solitary resectable CRLM, mBRAF, RAS mutations (mRAS), and wild-type RAS/BRAF (wtRAS/BRAF) were observed in 5 (2.9%), 73 (42.4%), and 93 (54.7%) patients, respectively. With a median follow-up period of 72.8 months, mBRAF was associated with a significantly shorter OS (median, 14.4 months) than wtRAS/BRAF (median, not reached [NR]) (hazard ratio [HR], 27.6; p < 0.001) and mRAS (median, NR) (HR, 9.9; p < 0.001), and mBRAF had the highest HR among all the indicators in the multivariable analysis (HR, 17.0; p < 0.001). The median OS after upfront hepatectomy for CRLM with mBRAF was identical to that of 28 unresectable CRLM with mBRAF that were treated with systemic chemotherapy (median, 17.2 months) (HR, 0.78; p = 0.65). When technically resectable CRLM is complicated by mBRAF, the survival outcome becomes as poor as that of unresectable cases; therefore, such cases should be considered oncologically unresectable. Patients with CRLM should undergo pre-treatment mBRAF testing regardless of technical resectability. Clinical trial registration number: UMIN000034557.
abstract_id: PUBMED:30255329
Is neoadjuvant chemotherapy appropriate for patients with resectable liver metastases from colorectal cancer? Purpose: Neoadjuvant chemotherapy (NAC) for resectable liver metastasis from colorectal cancer (CRLM) is used widely, but its efficacy lacks clear evidence. This study aimed to clarify its worth and develop appropriate treatment strategies for CRLM.
Methods: We analyzed, retrospectively, the clinicopathological factors and outcomes of 137 patients treated for resectable CRLM between 2006 and 2015, with upfront surgery (NAC- group; n = 117) or initial NAC treatment (NAC+ group; n = 20).
Results: The time to surgical failure (TSF) and overall survival (OS) after initial treatment were significantly worse in the NAC+ group than in the NAC- group (P = 0.002 and P = 0.032, respectively). At hepatectomy, the NAC+ group had a lower median prognostic nutrition index (PNI), higher rates of a positive Glasgow Prognostic Score (P = 0.002) and more perioperative blood transfusions (P = 0.027) than the NAC- group. Moreover, the serum albumin (P = 0.006), PNI (P ≤ 0.001) and lymphocyte-to-monocyte ratio (P ≤ 0.001) were significantly decreased and the GPS positive rate was increased from 15 to 35% in the NAC+ group. The OS rates did not differ significantly according to the NAC response (5-year OS rates: CR/PR 67%, SD 60%, PD 38%).
Conclusions: Patients with resectable CRLM should undergo upfront hepatectomy because NAC did not improve OS after initial treatment in these patients.
abstract_id: PUBMED:27778357
Patient selection for the surgical treatment of resectable colorectal liver metastases. Advances in surgery and chemotherapy regimens have increased the long-term survival of patients with colorectal liver metastases (CRLM). Although liver resection remains an essential part of any curative strategy for resectable CRLM, chemotherapy regimens have also improved the long-term outcomes. However, the optimal timing for chemotherapy regimens remains unclear. Thus, this review addressed key points to aid the decision-making process regarding the timing of chemotherapy and surgery for patients with resectable CRLM. J. Surg. Oncol. 2017;115:213-220. © 2016 Wiley Periodicals, Inc.
abstract_id: PUBMED:37232798
The Role of Preoperative Chemotherapy in the Management of Synchronous Resectable Colorectal Liver Metastases: A Meta-Analysis. Background: The indications of preoperative chemotherapy, for initially resectable synchronous colorectal liver metastases, remain controversial. This meta-analysis aimed to assess the efficacy and safety of preoperative chemotherapy in such patients.
Methods: Six retrospective studies were included in the meta-analysis with 1036 patients. Some 554 patients were allocated to the preoperative group, and 482 others were allocated to the surgery group.
Results: Major hepatectomy was more common in the preoperative group than in the surgery group (43.1% vs. 28.8%, p < 0.001). Furthermore, the percentage of patients with more than three liver metastases was higher in the preoperative group compared to the surgery group (12.6% vs. 5.4%, p < 0.002). Preoperative chemotherapy showed no statistically significant impact on overall survival. Combined disease-free/relapse survival analysis of patients with high disease burden (liver metastases > 3, maximum diameter > 5 cm, clinical risk score ≥ 3) demonstrated that there is a 12% lower risk of recurrence in favor of preoperative chemotherapy. Combined analysis showed a statistically significant increase (77% higher probability) in postoperative morbidity in patients who received preoperative chemotherapy (p = 0.002).
Conclusions: Preoperative chemotherapy should be suggested in patients with high disease burden. The number of cycles of preoperative chemotherapy should be low (3-4) to avoid increased postoperative morbidity. However more prospective studies are needed to clarify the exact role of preoperative chemotherapy in patients with synchronous resectable colorectal liver metastases.
abstract_id: PUBMED:35196945
Chemotherapy in resectable or potentially resectable colon cancer with liver metastases. Introduction: The treatment of colorectal cancer liver metastases has seen significant improvement in recent years and, for certain patients, the long-term survival and even cure are possible. Despite this improvement, many more questions are yet to be answered: the optimal combination, duration, sequence of therapies, role of biologics and the timing of surgical resection are debated in the literature, with conflicting trial results.
Areas Covered: In this review, the authors highlight the current trial evidence for systemic chemotherapy and biologic therapy for colorectal cancer liver metastases in both the pre and post-resection setting.
Expert Opinion: The treatment of colorectal liver metastases requires a multidisciplinary approach. The role of adjuvant chemotherapy with 5-fluorouracil and oxaliplatin in stage 3 colon cancer is well established. However, the options for patients with resectable or borderline liver metastases, either in the neoadjuvant or adjuvant settings, require further study. For patients with borderline resectable metastases, the combination of triplet chemotherapy with 5-fluorouracil, oxaliplatin and irinotecan (FOLFOXIRI) offers the best conversion rate. The role of biologic agents such as bevacizumab and EGFR inhibitors in these settings is less clear based on current evidence.
Answer: The timing of surgery for resectable metachronous liver metastases from colorectal cancer is a subject of debate. Some studies suggest that when feasible, resection of liver metastases should be considered as an initial approach, as delaying surgery for neoadjuvant chemotherapy may not improve overall survival and could potentially lead to worse outcomes. For instance, one retrospective analysis indicated that patients who underwent liver resection without receiving neoadjuvant chemotherapy had a median survival time significantly superior to that of those who received neoadjuvant chemotherapy before surgery (48 vs. 31 months; p=0.0358) (PUBMED:20728416). This suggests that better outcomes might be achieved by proceeding with surgery sooner rather than later.
However, neoadjuvant chemotherapy is still widely used and its role remains debated, with some studies showing no significant improvement in overall survival for patients with resectable colorectal liver metastases who received neoadjuvant chemotherapy compared to those who underwent upfront surgery (PUBMED:30255329). Furthermore, a meta-analysis assessing the efficacy and safety of preoperative chemotherapy for initially resectable synchronous colorectal liver metastases found no statistically significant impact on overall survival, although it did suggest that preoperative chemotherapy might be beneficial for patients with a high disease burden (PUBMED:37232798).
In conclusion, while some evidence points towards the benefit of immediate surgery for resectable metachronous liver metastases from colorectal cancer, the decision should be individualized based on patient factors, disease burden, and multidisciplinary team discussions. Further studies are needed to clarify the optimal timing and therapeutic strategies for these patients (PUBMED:20728416, PUBMED:30255329, PUBMED:37232798).
Instruction: Is preoperative vaginal cleansing necessary for control of infection after first trimester vacuum curettage?
Abstracts:
abstract_id: PUBMED:15954874
Is preoperative vaginal cleansing necessary for control of infection after first trimester vacuum curettage? Background: Traditionally, the vagina is cleansed before a curettage is performed. A previous study, comparing cleansing with chlorhexidine solution and cleansing with saline solution before vacuum aspiration in the first trimester, did not show any difference in the frequency of postoperative pelvic inflammatory disease. We wanted to investigate whether this was also true for vaginal cleansing with chlorhexidine compared to no vaginal cleansing at all.
Methods: Consecutive women having surgical first trimester legal abortions were randomized to vulvar and vaginal cleansing with chlorhexidine or vulvar cleansing only. The frequency of postabortion pelvic inflammatory disease was evaluated with patient questionnaires and study of medical records.
Results: Of the 486 patients included in the study, vaginal cleansing was performed on 246 and no vaginal cleansing on 240. The frequency of probable pelvic inflammatory disease was 2.4% with cleansing and 2.1% without cleansing (no significant difference).
Conclusions: Under certain conditions, preoperative vaginal cleansing can be safely omitted.
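The comparison above is essentially a test of two proportions. A minimal Python sketch is shown below; the case counts are back-calculated from the reported percentages (2.4% of 246 is about 6 events, 2.1% of 240 about 5) and are therefore approximations, not figures taken from the paper.

from scipy.stats import fisher_exact

# Approximate 2x2 table reconstructed from the reported percentages
#              PID   no PID
cleansing    = [6, 246 - 6]   # 6/246 = 2.4%
no_cleansing = [5, 240 - 5]   # 5/240 = 2.1%

odds_ratio, p_value = fisher_exact([cleansing, no_cleansing])
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.2f}")  # p is far above 0.05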
abstract_id: PUBMED:7270090
Vaginal application of a chemotherapeutic agent before legal abortion. A way of reducing infectious complications? In an attempt to reduce the incidence of infectious complications after first trimester legal abortion, 199 healthy, early pregnant women were treated with chloroquinaldol (Sterosan® vaginal jelly) for six days before their vacuum aspiration. A group of 291 women served as controls. In the treatment group, 18 women (9%) had a postoperative gynecological infection, while this condition was found in 37 women (12.8%) in the control group. This difference is not statistically significant. The authors conclude that preoperative prophylactic treatment with Sterosan vaginal jelly does not seem to reduce postoperative infectious complications after first trimester legal abortions.
abstract_id: PUBMED:6644256
Suction curettage. In the decade after the legalization of elective abortion by a Supreme Court decision in January 1973, suction curettage has been widely used and accepted as a safe outpatient procedure for first-trimester abortion. Evaluation of the patient for feasibility of the procedure requires careful assessment of gestational age and the determination of the absence of an ectopic pregnancy and conditions that might contraindicate local anesthesia. Counseling clarifies the patient's options and ensures her understanding of the implications of abortion so that she can give informed consent. Suction curettage is performed under local anesthesia using a sterile plastic cannula or curette inserted through a progressively dilated cervix with aspiration of the uterine contents by an electric pump. The procedure is completed by the physician's examination of the aspirate for the presence of placental villi. Postoperative instructions include contraception and monitoring for hemorrhage and infection prior to a return visit in 10 to 14 days. Complications can be reduced by careful selection of patients with appropriate duration of pregnancy, the use of gentle operative technique, antibiotics for prophylaxis of infection, and continued maintenance of experience and procedural skill by the physician.
abstract_id: PUBMED:4758593
Report on 5641 outpatient abortions by vacuum suction curettage. A technique of ambulatory abortion for first trimester pregnancies by vacuum suction curettage under local anesthesia with intracervical block is described. The apparatus and relevant problems are discussed. A shortened speculum devised by the author and considered an improvement for this procedure, and a simplified sterile field are described. The complication rate of 0.48% based on 5641 reported cases is very low: there were no deaths, two cases of uterine perforation, 14 of incomplete abortion, 20 of infection, 1 of depression, no cervical lacerations; 27 patients were hospitalized. The advantages of this method are safety, simplicity, minimal blood loss and immediate recovery. It is preferable to the usual dilatation and curettage, does not require general anesthesia and can be used in small clinics or in hospitals on an ambulatory basis.
abstract_id: PUBMED:22381604
Immediate versus delayed medical treatment for first-trimester miscarriage: a randomized trial. Objective: To compare immediate vs delayed medical treatment for first-trimester miscarriage.
Study Design: Randomized open-label trial in a university hospital gynecologic emergency department. Between April 2003 and April 2006, 182 women diagnosed with spontaneous abortion before 14 weeks' gestation were assigned to immediate medical treatment (oral mifepristone, followed 48 hours later by vaginal misoprostol, n = 91) or sequential management (1 week of watchful waiting followed, if necessary, by the above-described medical treatment, n = 91). Vacuum aspiration was performed in case of treatment failure, hemorrhage, pain, infection, or patient request.
Results: Compared with immediate medical treatment, sequential management resulted in twice as many vacuum aspirations overall (43.5% vs 19.1%; P < .001), 4 times as many emergent vacuum aspirations (20% vs 4.5%; P = .001), and twice as many unplanned visits to the emergency department (34.1% vs 16.9%; P = .009).
Conclusion: Delaying medical treatment of first-trimester miscarriage increases the rate of unplanned surgical uterine evacuation.
abstract_id: PUBMED:19407521
First trimester surgical abortion. First trimester surgical abortion is a very common, effective, and safe procedure. When a woman presents requesting pregnancy termination, counseling regarding pregnancy options and procedural risks, as well as a careful preoperative assessment, is vital to a successful outcome. If a patient decides to undergo a surgical abortion, either an electric or manual vacuum aspiration may be performed, based upon provider preference. Complications of first trimester surgical abortion occur in only 0.5% of all cases and include failed abortion, incomplete abortion, hematometra, hemorrhage, infection, and uterine perforation.
abstract_id: PUBMED:754991
Vacuum aspiration at therapeutic abortion: influence of two different negative pressures on blood loss during and after operation. In 167 women, blood loss was measured during and after vacuum aspiration for therapeutic abortion in the first trimester of pregnancy. Two different negative pressures, -49.1 kPa (-0.5 kp/cm2) and -78.5 kPa (-0.8 kp/cm2), were used in a randomized series. It was found that for the whole first trimester (8-13 weeks), blood loss during and after operation was relatively small and not influenced by the pressure used. In no case did blood loss exceed 500 ml at operation or 60 ml during the first postoperative week. Postoperative infections were diagnosed in 6 women.
abstract_id: PUBMED:26534897
Dilatation and curettage increases the risk of subsequent preterm birth: a systematic review and meta-analysis. Study Question: Could dilatation and curettage (D&C), used in the treatment of miscarriage and termination of pregnancy, increase the risk of subsequent preterm birth?
Summary Answer: A history of curettage in women is associated with an increased risk of preterm birth in a subsequent pregnancy compared with women without such history.
What Is Known Already: D&C is one of the most frequently performed procedures in obstetrics and gynaecology. Apart from the acknowledged but relatively rare adverse effects, such as cervical tears, bleeding, infection, perforation of the uterus, bowel or bladder, or Asherman syndrome, D&C has been suggested to also lead to an increased risk of preterm birth in the subsequent pregnancy.
Study Design, Size, Duration: In the absence of randomized data, we conducted a systematic review and meta-analysis of cohort and case-control studies.
Participants/materials, Setting, Methods: We searched OVID MEDLINE and OVID EMBASE from inception until 21 May 2014. We selected cohort and case-control studies comparing subsequent preterm birth in women who had a D&C for first trimester miscarriage or termination of pregnancy and a control group of women without a history of D&C.
Main Results And The Role Of Chance: We included 21 studies reporting on 1 853 017 women. In women with a history of D&C compared with those with no such history, the odds ratio (OR) for preterm birth <37 weeks was 1.29 (95% CI 1.17; 1.42), while for very preterm birth the ORs were 1.69 (95% CI 1.20; 2.38) for <32 weeks and 1.68 (95% CI 1.47; 1.92) for <28 weeks. The risk remained increased when the control group was limited to women with a medically managed miscarriage or induced abortion (OR 1.19, 95% CI 1.10; 1.28). For women with a history of multiple D&Cs compared with those with no D&C, the OR for preterm birth (<37 weeks) was 1.74 (95% CI 1.10; 2.76). For spontaneous preterm birth, the OR was 1.44 (95% CI 1.22; 1.69) for a history of D&C compared with no such history.
Limitations, Reasons For Caution: There were no randomized controlled trials comparing women with and without a history of D&C and subsequent preterm birth. As a consequence, confounding may be present since the included studies were either cohort or case-control studies, not all of which corrected the results for possible confounding factors.
Wider Implications Of The Findings: This meta-analysis shows that D&C is associated with an increased risk of subsequent preterm birth. The increased risk in association with multiple D&Cs indicates a causal relationship. Despite the fact that confounding cannot be excluded, these data warrant caution in the use of D&C for miscarriage and termination of pregnancy, the more so since less invasive options are available.
Study Funding/competing Interests: This study was funded by ZonMw, a Dutch organization for Health Research and Development, project number 80-82310-97-12066.
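The pooled odds ratios in this review come from an inverse-variance meta-analysis. The sketch below shows the general mechanics of a DerSimonian-Laird random-effects pool in Python; the three study-level odds ratios and confidence intervals are invented placeholders, not data extracted from the review.

import numpy as np

# Hypothetical study-level odds ratios with 95% CIs (placeholders only)
studies = [(1.20, 1.05, 1.37), (1.45, 1.10, 1.91), (1.15, 0.90, 1.47)]

log_or = np.array([np.log(or_) for or_, lo, hi in studies])
se     = np.array([(np.log(hi) - np.log(lo)) / (2 * 1.96) for or_, lo, hi in studies])
w      = 1 / se**2                                  # fixed-effect (inverse-variance) weights

fixed = np.sum(w * log_or) / np.sum(w)
q     = np.sum(w * (log_or - fixed)**2)             # Cochran's Q
tau2  = max(0.0, (q - (len(studies) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_re   = 1 / (se**2 + tau2)                         # random-effects weights
pooled = np.sum(w_re * log_or) / np.sum(w_re)
se_re  = np.sqrt(1 / np.sum(w_re))

print(f"pooled OR = {np.exp(pooled):.2f} "
      f"(95% CI {np.exp(pooled - 1.96 * se_re):.2f} to {np.exp(pooled + 1.96 * se_re):.2f})")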
abstract_id: PUBMED:16625583
Expectant care versus surgical treatment for miscarriage. Background: Miscarriage is a common complication of early pregnancy that can have both medical and psychological consequences like depression and anxiety. The need for routine surgical evacuation with miscarriage has been questioned because of potential complications such as cervical trauma, uterine perforation, hemorrhage, or infection.
Objectives: To compare the safety and effectiveness of expectant management versus surgical treatment for early pregnancy loss.
Search Strategy: We searched the Cochrane Pregnancy and Childbirth Group Trials Register (December 2005), the Cochrane Central Register of Controlled Trials (The Cochrane Library 2004, Issue 3), PubMed (1966 to March 2005), POPLINE (inception to March 2005), and LILACS (1982 to March 2005) and reference lists of reviews.
Selection Criteria: Randomized trials comparing expectant care and surgical treatment (vacuum aspiration or dilation and curettage (D & C)) for miscarriage were eligible for inclusion.
Data Collection And Analysis: Two authors independently assessed trial quality and extracted data. We contacted study authors for additional information.
Main Results: Five trials were included in this review with 689 total participants. The expectant-care group was more likely to have an incomplete miscarriage (RR 5.37; 95% CI 2.57 to 11.22). However, the time frames for declaring the process incomplete varied across the studies. The need for unplanned surgical treatment (such as vacuum aspiration or D&C) was greater for the expectant-care group (RR 4.78; 95% CI 1.99 to 11.48). The expectant-care group had more days of bleeding (WMD 1.59; 95% CI 0.74 to 2.45) and a greater amount of bleeding (WMD 1.00; 95% CI 0.60 to 1.40). Post-procedure diagnosis of infection was lower in the expectant-care group (RR 0.29; 95% CI 0.09 to 0.87). Information on psychological outcomes and pregnancy was too limited to draw conclusions.
Authors' Conclusions: Expectant management led to a higher risk of incomplete miscarriage, need for surgical emptying of the uterus, and bleeding. None of these were serious. In contrast, surgical evacuation was associated with a significantly higher risk of infection. Given the lack of clear superiority of either approach, the woman's preference should play a dominant role in decision making. Medical management has added choices for women and their clinicians, but these were not reviewed here.
abstract_id: PUBMED:17141703
Rates of complication in first-trimester manual vacuum aspiration abortion done by doctors and mid-level providers in South Africa and Vietnam: a randomised controlled equivalence trial. Background: We assessed whether the safety of first-trimester manual vacuum aspiration abortion done by health-care providers who are not doctors (mid-level providers) is equivalent to that of procedures done by doctors in South Africa and Vietnam, where mid-level providers are government trained and accredited to do first-trimester abortions.
Methods: We did a randomised, two-sided controlled equivalence trial to compare rates of complication in abortions done by the two groups of providers. An a-priori margin of equivalence of 4.5% with 80% power and 95% CI (alpha=0.05) was used. 1160 women participated in South Africa and 1734 in Vietnam. Women presenting for an induced abortion at up to 12 weeks' gestation were randomly assigned to a doctor or a mid-level provider for manual vacuum aspiration and followed up 10-14 days later. The primary outcome was complication of abortion. Complications were recorded during the abortion procedure, before discharge from the clinic, and at follow-up. Per-protocol and intention-to-treat analyses were done. This trial is registered.
Findings: In both countries, rates of complication satisfied the predetermined statistical criteria for equivalence: rates per 100 patients in South Africa were 1.4 (eight of 576) for mid-level providers and 0 for doctors (difference 1.4, 95% CI 0.4 to 2.7); in Vietnam, rates were 1.2 (ten of 824) for mid-level providers and 1.2 (ten of 812) for doctors (difference 0.0, 95% CI -1.2 to 1.1). There was one immediate complication related to analgesics. Delayed complications were caused by retained products and infection.
Interpretation: With appropriate government training, mid-level health-care providers can provide first trimester manual vacuum aspiration abortions as safely as doctors can.
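Equivalence in this trial means that the confidence interval for the difference in complication rates falls entirely within the pre-specified 4.5 percentage-point margin. The Python sketch below recomputes the Vietnam comparison with a simple Wald interval; the trial itself likely used a different interval method, so the bounds differ slightly from the reported -1.2 to 1.1.

import math

events_mid, n_mid = 10, 824      # mid-level providers, Vietnam arm
events_doc, n_doc = 10, 812      # doctors, Vietnam arm
margin = 4.5                     # pre-specified equivalence margin, per 100 patients

p1, p2 = events_mid / n_mid, events_doc / n_doc
diff = (p1 - p2) * 100
se = math.sqrt(p1 * (1 - p1) / n_mid + p2 * (1 - p2) / n_doc) * 100
lo, hi = diff - 1.96 * se, diff + 1.96 * se

print(f"difference = {diff:.2f} per 100 (95% CI {lo:.2f} to {hi:.2f})")
print("equivalent within margin:", -margin < lo and hi < margin)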
Answer: Based on the available abstracts, preoperative vaginal cleansing does not appear to be necessary for controlling infection after first trimester vacuum curettage. A study that compared vulvar and vaginal cleansing with chlorhexidine to vulvar cleansing only found no significant difference in the frequency of postabortion pelvic inflammatory disease between the two groups (2.4% with cleansing vs. 2.1% without cleansing), suggesting that under certain conditions, preoperative vaginal cleansing can be safely omitted (PUBMED:15954874). Another study that treated women with chloroquinaldol (SterosanR vaginal jelly) before vacuum aspiration did not find a statistically significant difference in postoperative gynecological infection rates compared to a control group, leading to the conclusion that preoperative prophylactic treatment with Sterosan vaginal jelly does not seem to reduce postoperative infectious complications after first trimester legal abortions (PUBMED:7270090).
Furthermore, the abstracts indicate that first trimester surgical abortion, which includes vacuum curettage, is generally a safe procedure with a low complication rate when performed under appropriate conditions and with proper patient selection (PUBMED:6644256, PUBMED:4758593, PUBMED:19407521). However, it is important to note that while preoperative vaginal cleansing may not be necessary for infection control, the use of antibiotics for prophylaxis of infection is mentioned as a measure to reduce complications (PUBMED:6644256). Additionally, the abstracts do not provide specific details on the "certain conditions" under which preoperative vaginal cleansing can be omitted, so it is important to consider individual clinical settings and practices when making decisions about preoperative procedures.
Instruction: Is there a role for cholecystectomy in gallbladder carcinoma discovered to be unresectable for cure at laparotomy?
Abstracts:
abstract_id: PUBMED:18836852
Is there a role for cholecystectomy in gallbladder carcinoma discovered to be unresectable for cure at laparotomy? Background: Palliative operative resection in patients with locally advanced cancer of the gallbladder (GBC) found not to be amenable to radical resection for cure at exploration has received little attention. This article evaluates the benefits, if any, of cholecystectomy with biliary drainage in such patients.
Methods: Available records of locally advanced but nonmetastatic GBC patients treated in the Department of Surgical Oncology, B.H.U., Varanasi, India, during the last 8 years were retrospectively reviewed. Of these, 30 patients (group I) with GBC (T3-4, N0-1, M0) treated with cholecystectomy +/- biliary bypass were selected and compared with an equal number of controls matched for age (+/-5 years), sex, histopathology, stage, residence, and postoperative chemotherapy who underwent biopsy +/- biliary bypass only (group II) followed by chemotherapy during the same period. Survival rates were calculated using Kaplan-Meier curves. Follow-up ranged from 1-15 months.
Results: The median survival was 7 and 2 months for groups I and II (P < 0.0001), respectively. The 30-day postoperative mortality and morbidity were 3% vs. 12% and 13% vs. 16% in groups I and II, respectively.
Conclusions: Results suggest that a better median survival can be achieved after cholecystectomy in locally advanced unresectable GBC compared with only bypass and biopsy procedures. These findings may justify a palliative cholecystectomy in selected patients with locally advanced GBC.
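The median survival figures above were derived from Kaplan-Meier curves for the two groups. A minimal sketch of such an analysis with the lifelines package is given below; the follow-up times are invented for illustration, and the log-rank comparison is added here even though the abstract does not name the test used.

import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical follow-up times in months and death indicators (1 = died)
df = pd.DataFrame({
    "months": [2, 4, 6, 7, 9, 12, 1, 1, 2, 2, 3, 5],
    "died":   [1, 1, 1, 0, 1, 0,  1, 1, 1, 1, 1, 0],
    "group":  ["I"] * 6 + ["II"] * 6,   # I = cholecystectomy, II = biopsy/bypass only
})

for name, grp in df.groupby("group"):
    kmf = KaplanMeierFitter()
    kmf.fit(grp["months"], event_observed=grp["died"], label=f"group {name}")
    print(f"group {name}: median survival = {kmf.median_survival_time_} months")

g1, g2 = df[df.group == "I"], df[df.group == "II"]
result = logrank_test(g1["months"], g2["months"],
                      event_observed_A=g1["died"], event_observed_B=g2["died"])
print(f"log-rank p = {result.p_value:.4f}")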
abstract_id: PUBMED:18850247
Is There a Role for Cholecystectomy in Gallbladder Carcinoma Discovered to be Unresectable for Cure at Laparotomy? N/A
abstract_id: PUBMED:7735536
Two cases of early gallbladder cancer incidentally discovered by laparoscopic cholecystectomy. In case 1, a 68-year-old woman with asymptomatic gallstones underwent laparoscopic cholecystectomy after radical mastectomy for breast cancer. Histological examination revealed gallbladder carcinoma with involvement confined to the mucosa and positive margin at the cystic duct. We resected the remnant cystic duct with open laparotomy. In case 2, a 63-year-old woman with a diagnosis of early rectal cancer underwent laparoscopic cholecystectomy for silent gallstones following transanal resection of the rectal tumor. Pathologic analysis illustrated a gallbladder carcinoma with wide mucosal spread and minimal invasion into the subserosal layer. No additional treatment was warranted. As laparoscopic cholecystectomy has become widely used, an increase in the number of resected cases of early gallbladder cancer can be expected, especially among asymptomatic gallstone patients. Additional treatment should be determined through meticulous microscopic investigation of the specimen, with special attention to the depth of invasion and the range of mucosal spread.
abstract_id: PUBMED:7474696
Experience in 735 cases of laparoscopic cholecystectomy. In the period from December 1991 to December 1993, 735 successful laparoscopic cholecystectomy (LC) operations were performed in the III Surgical Clinic of the Semmelweis Medical University, Budapest, Hungary. Among the patients were 154 females and 581 males, 14 to 82 years of age. Conversion to laparotomy was undertaken in 58 (7.9%) patients because of technical difficulties and the development of complications. Intraoperative complications were encountered in 197 (26.8%) and postoperative complications in 38 (5.17%) patients.
abstract_id: PUBMED:11960212
Radical second resection provides survival benefit for patients with T2 gallbladder carcinoma first discovered after laparoscopic cholecystectomy. Port site recurrence or peritoneal seeding is a fatal complication following laparoscopic cholecystectomy for gallbladder carcinoma. The aims of this retrospective analysis were to determine the association of gallbladder perforation during laparoscopic cholecystectomy with port site/peritoneal recurrence and to determine the role of radical second resection in the management of gallbladder carcinoma first diagnosed after laparoscopic cholecystectomy. A total of 28 patients undergoing laparoscopic cholecystectomy for gallbladder carcinoma were analyzed, of whom 10 had a radical second resection. Five patients had recurrences; port site/peritoneum recurrence in 3 and distant metastasis in 2. The incidence of port site/peritoneal recurrence was higher in patients with gallbladder perforation (3/7, 43%) than in those without (0/21, 0%) (p = 0.011). The outcome after laparoscopic cholecystectomy was worse in 7 patients with gallbladder perforation (cumulative 5-year survival of 43%) than in those without (cumulative 5-year survival of 100%) (p <0.001). Among 13 patients with a pT2 tumor, the outcome after radical second resection (cumulative 5-year survival of 100%) was better than that after laparoscopic cholecystectomy alone (cumulative 5-year survival of 50%) (p = 0.039), although there was no survival benefit of radical second resection in the 15 patients with a pT1 tumor (p = 0.65). In conclusion, gallbladder perforation during laparoscopic cholecystectomy is associated with port site/peritoneal recurrence and worse patient survival. Radical second resection may be beneficial for patients with pT2 gallbladder carcinoma first discovered after laparoscopic cholecystectomy.
abstract_id: PUBMED:25908288
Surgical management of incidental gallbladder cancer discovered during or after laparoscopic cholecystectomy Objective: To analyze the surgical management of incidental gallbladder cancer (IGBC) discovered during or after laparoscopic cholecystectomy (LC) and to evaluate the associated factors of survival.
Methods: A retrospective analysis of patients with IGBC between January 2002 and December 2013 was performed. A total of 10 080 consecutive patients underwent LC for presumed benign gallbladder disease in the Chinese People's Liberation Army General Hospital, and among them, 83 patients were histologically diagnosed with IGBC. Data covering clinical characteristics, surgery records, local pathological stage, histological features, and factors for long-term survival were reviewed. Survival analysis was performed using the Kaplan-Meier method, and the results were examined using the log-rank test. For multivariate analyses of prognostic factors, a Cox proportional hazards model was used.
Results: A total of 83 patients with IGBC were included: 68.7% were female (57/83), with a median age of 61 years (range 34-83 years). Forty-seven patients underwent the initial simple LC only, 18 were converted to open extended radical cholecystectomy, 16 underwent a radical second resection, and 2 underwent re-laparotomy; the 5-year survival rates for each group were 89.4%, 38.9%, 87.5%, and 0%, respectively. The 5-year survival rates in T1a, T1b, T2, and T3 stage patients were 95.7% (22/23), 90.0% (18/20), 75.0% (15/20), and 40.0% (8/20), respectively. Univariate analysis of prognostic factors associated with cancer-specific death showed that depth of invasion, lymph-node status, vascular or neural invasion, tumor differentiation, extent of resection, bile spillage during the prior LC, and type of surgery were statistically significant. In multivariate analysis, depth of invasion, extent of resection, and bile spillage were the most important prognostic factors related to both cancer-specific mortality and disease relapse (P < 0.05).
Conclusions: Simple LC is appropriate for T1a patients with a clear margin and an unbroken gallbladder. An extended radical resection is highly recommended in patients with T1b or more advanced tumors, and should be provided as a potentially curative R0 resection where necessary.
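The multivariate prognostic analysis above relied on a Cox proportional hazards model. The following sketch shows that workflow with the lifelines package on a simulated cohort; the variable names and effect sizes are assumptions made for illustration, not estimates from the study.

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 200

# Simulated cohort (all values invented): three candidate prognostic factors
df = pd.DataFrame({
    "t_stage_ge_t2":     rng.integers(0, 2, n),   # depth of invasion T2 or deeper
    "bile_spillage":     rng.integers(0, 2, n),   # bile spillage during the prior LC
    "radical_resection": rng.integers(0, 2, n),   # extended/radical (re)resection
})
# Hazard increases with deeper invasion and spillage, decreases with radical resection
rate = 0.02 * np.exp(0.9 * df["t_stage_ge_t2"] + 0.6 * df["bile_spillage"]
                     - 0.7 * df["radical_resection"])
time = rng.exponential(1 / rate)
censor = rng.uniform(12, 60, n)                   # administrative censoring in months
df["months"] = np.minimum(time, censor)
df["died"] = (time <= censor).astype(int)

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="died")
cph.print_summary()   # hazard ratios (exp(coef)) with CIs and p-values per factor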
abstract_id: PUBMED:23059504
The role of staging laparoscopy in primary gall bladder cancer--an analysis of 409 patients: a prospective study to evaluate the role of staging laparoscopy in the management of gallbladder cancer. Objective: To evaluate the role of staging laparoscopy (SL) in the management of gallbladder cancer (GBC).
Methods: A prospective study of primary GBC patients between May 2006 and December 2011. The SL was performed using an umbilical port with a 30-degree telescope. Early GBC included clinical stage T1/T2. A detectable lesion (DL) was defined as one that could be detected on SL alone, without doing any dissection or using laparoscopic ultrasound (surface liver metastasis and peritoneal deposits). Other metastatic and locally advanced unresectable disease qualified as undetectable lesions (UDL).
Results: Of the 409 primary GBC patients who underwent SL, 95 had disseminated disease [surface liver metastasis (n = 29) and peritoneal deposits (n = 66)]. The overall yield of SL was 23.2% (95/409). Of the 314 patients who underwent laparotomy, an additional 75 had unresectable disease due to surface liver metastasis (n = 5), deep parenchymal liver metastasis (n = 4), peritoneal deposits (n = 1), nonlocoregional lymph nodes (n = 47), and locally advanced unresectable disease (n = 18), that is, 6 DL and 69 UDL. The accuracy of SL for detecting unresectable disease and DL was 55.9% (95/170) and 94.1% (95/101), respectively. Compared with early GBC, the yield was significantly higher in locally advanced tumors (n = 353) [25.2% (89/353) vs 10.7% (6/56), P = 0.02]. However, the accuracy in detecting unresectable disease and a DL in locally advanced tumors was similar to early GBC [56.0% (89/159) and 94.1% (89/95) vs 54.6% (6/11) and 100% (6/6), P = 1.00].
Conclusions: In the present series with an overall resectability rate of 58.4%, SL identified 94.1% of the DLs and thereby obviated a nontherapeutic laparotomy in 55.9% of patients with unresectable disease and 23.2% of overall GBC patients. It had a higher yield in locally advanced tumors than in early-stage tumors; however, the accuracy in detecting unresectable disease and a DL were similar.
abstract_id: PUBMED:8366605
Traditional open cholecystectomy. During the past 5 years, traditional open cholecystectomy was performed on 344 patients with gallbladder stones at our hospital. Using the data obtained, we studied the indications and results of traditional open cholecystectomy. In cases of gallbladder stones with a history of cholecystitis, cholecystectomy was as a rule performed laparoscopically. However, traditional open cholecystectomy was chosen in cases complicated by perforation, pericholecystic abscess, internal biliary fistula, cirrhosis, or suspected carcinoma of the gallbladder. Investigation of the timing of operation revealed that early operation tended to be easier to perform. However, cases with gallbladder stones were often complicated by carcinoma of the alimentary tract; therefore, the alimentary tract must be examined before operation. Cases in which the gallbladder could not be visualized on ERCP, and those with a pericholecystic abscess on ultrasound, were difficult to treat by laparoscopic cholecystectomy, so many such cases required laparotomy. Postoperative complications of open cholecystectomy were rare, and it was concluded that traditional cholecystectomy remains one of the most valuable procedures for the treatment of gallbladder stones.
abstract_id: PUBMED:9690533
Gallbladder carcinoma discovered during laparoscopic cholecystectomy: aggressive reresection is beneficial. Background: This study attempted to determine whether aggressive surgical therapy is warranted after gallbladder carcinoma is discovered during or after laparoscopic cholecystectomy.
Methods: The clinical course and outcome of 42 consecutive patients with laparoscopically discovered gallbladder carcinoma seen over a 5-year period at a tertiary referral center were examined. Nine of the patients had TNM classified T2 tumors and 32 had full thickness penetration of the gallbladder including 16 T3 and 16 T4 tumors.
Results: At reexploration, 22 of the patients were found to have lymph node, peritoneal, or bilateral liver disease precluding reresection. Nineteen patients underwent liver resection (13 trisegmentectomies and 6 bisegmentectomies), portal lymphadenectomy, and hepaticojejunostomy as the definitive surgical procedure for the carcinoma. There was 1 perioperative death, and the median hospital stay was 11 days (range, 6-28 days). At last follow-up 17 of the 22 nonresected patients had died, with a median survival of 5 months. With a follow-up of 16 months, only 3 of the resected patients had died.
Conclusions: The authors conclude that aggressive resection of gallbladder carcinoma discovered laparoscopically is safe and effective.
abstract_id: PUBMED:15055850
Management of unsuspected gallbladder carcinoma discovered during or following laparoscopic cholecystectomy. Laparoscopic cholecystectomy (LC) is widely used in the treatment of symptomatic cholelithiasis. Gallbladder carcinoma (GBC) discovered during or after LC presents a management problem because of the difficulty of intraoperative staging. We conducted a retrospective, 8-year review of 10 patients with GBC discovered during or after LC. Of 3050 patients undergoing LC for cholelithiasis, 10 (0.3%) had GBC. The histological diagnoses of the 10 GBC patients included well-differentiated adenocarcinomas (n = 5), papillary adenocarcinomas (n = 2), moderately differentiated adenocarcinomas (n = 2), and poorly differentiated adenocarcinoma (n = 1). Of these patients, four had TNM-classified T1 tumors, three had T2 tumors, one had a T2N0M0 tumor, one had a T2N1M0 tumor, and one had a T3N0M0 tumor. In three of them (T2N0M0, T2N1M0, and T3N0M0), the procedure was converted to open cholecystectomy, wedge resection of liver bed tissue in segment 5, and lymph node dissection after frozen-section biopsy of the laparoscopically removed gallbladder revealed GBC. Patients were closely followed at regular intervals until death or May 2000. The median follow-up period was 24.5 months. One patient died of carcinomatosis 6 months after undergoing the open procedure. The remaining nine patients did not have any recurrence during the follow-up period. No patient had a port site recurrence of GBC. Based on our limited experience, early GBC (T1a or Tis) can successfully be managed by simple cholecystectomy, either by LC or the open method.
Answer: Yes, there is a role for cholecystectomy in gallbladder carcinoma (GBC) discovered to be unresectable for cure at laparotomy. A study evaluated the benefits of cholecystectomy with biliary drainage in patients with locally advanced GBC that was not amenable to radical resection for cure at exploration. The study compared 30 patients who underwent cholecystectomy with or without biliary bypass (group I) with a matched control group of patients who underwent biopsy with or without biliary bypass only (group II), followed by chemotherapy. The results showed that the median survival was significantly better in the group that underwent cholecystectomy (7 months) compared to the group that did not (2 months). Additionally, the 30-day postoperative mortality and morbidity were lower in the cholecystectomy group (3% vs. 12% and 13% vs. 16%, respectively). These findings suggest that palliative cholecystectomy can lead to better median survival in patients with locally advanced unresectable GBC compared to only bypass and biopsy procedures, which may justify the use of palliative cholecystectomy in selected patients (PUBMED:18836852).
Instruction: Does quality of care for cardiovascular disease and diabetes differ by gender for enrollees in managed care plans?
Abstracts:
abstract_id: PUBMED:17434752
Does quality of care for cardiovascular disease and diabetes differ by gender for enrollees in managed care plans? Purpose: To assess gender differences in the quality of care for cardiovascular disease and diabetes for enrollees in managed care plans.
Methods: We obtained data from 10 commercial and 9 Medicare plans and calculated performance on 6 Health Employer Data and Information Set (HEDIS) measures of quality of care (beta-blocker use after myocardial infarction [MI], low-density lipoprotein cholesterol [LDL-C] check after a cardiac event, and in diabetics, whether glycosylated hemoglobin [HgbA1c], LDL cholesterol, nephropathy, and eyes were checked) and a 7th HEDIS-like measure (angiotensin-converting enzyme [ACE] inhibitor use for congestive heart failure). A smaller number of plans provided HEDIS scores on 4 additional measures that require medical chart abstraction (control of LDL-C after cardiac event, blood pressure control in hypertensive patients, and HgbA1c and LDL-C control in diabetics). We used logistic regression models to adjust for age, race/ethnicity, socioeconomic status, and plan.
Main Findings: Adjusting for covariates, we found significant gender differences on 5 of 11 measures among Medicare enrollees, with 4 favoring men. Similarly, among commercial enrollees, we found significant gender differences for 8 of 11 measures, with 6 favoring men. The largest disparity was for control of LDL-C among diabetics, where women were 19% less likely to achieve control among Medicare enrollees (relative risk [RR] = 0.81; 95% confidence interval [CI] = 0.64-0.99) and 16% less likely among commercial enrollees (RR = 0.84; 95%CI = 0.73-0.95).
Conclusion: Gender differences in the quality of cardiovascular and diabetic care were common and sometimes substantial among enrollees in Medicare and commercial health plans. Routine monitoring of such differences is both warranted and feasible.
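The adjusted comparisons above come from regression models of each quality indicator on gender plus covariates. A minimal sketch of such a model on simulated member-level data is shown below; all variable names, values, and effects are invented, and the sketch uses plain logistic regression (yielding odds ratios), whereas the abstract reports relative risks.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000

# Simulated member-level data (all values invented for illustration)
df = pd.DataFrame({
    "female":  rng.integers(0, 2, n),
    "age":     rng.integers(45, 85, n),
    "low_ses": rng.integers(0, 2, n),
    "plan":    rng.integers(0, 10, n),
})
# Outcome: whether LDL-C was controlled; women slightly less likely in this simulation
logit_p = -0.2 - 0.2 * df["female"] - 0.01 * (df["age"] - 65) - 0.3 * df["low_ses"]
df["ldl_controlled"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

model = smf.logit("ldl_controlled ~ female + age + low_ses + C(plan)", data=df).fit(disp=False)
print(np.exp(model.params["female"]))           # adjusted odds ratio for women vs men
print(np.exp(model.conf_int().loc["female"]))   # 95% CI on the odds-ratio scale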
abstract_id: PUBMED:17448685
Gender disparities in the quality of cardiovascular disease care in private managed care plans. Background: Studies have shown that women with cardiovascular disease (CVD) are screened and treated less aggressively than men and are less likely to undergo cardiac procedures. Research in this area has primarily focused on the acute setting, and there are limited data on the ambulatory care setting, particularly among the commercially insured. To that end, the objective of this study is to determine if gender disparities in the quality of CVD care exist in commercial managed care populations.
Methods: Using a national sample of commercial health plans, we analyzed member-level data for 7 CVD quality indicators from the Healthcare Effectiveness Data and Information Set (HEDIS) collected in 2005. We used hierarchical generalized linear models to estimate these HEDIS measures as a function of gender, controlling for race/ethnicity, socioeconomic status, age, and plans' clustering effects.
Results: Results showed that women were less likely than men to have low-density lipoprotein (LDL) cholesterol controlled at <100 mg/dL in those who have diabetes (odds ratio [OR], 0.81; 95% confidence interval [CI], 0.76-0.86) or a history of CVD (OR, 0.72; CI 95%, 0.64-0.82). The difference between men and women in meeting the LDL control measures was 5.74% among those with diabetes (44.3% vs. 38.5%) and 8.53% among those with a history of CVD (55.1% vs. 46.6%). However, women achieved higher performance than men in controlling blood pressure (OR, 1.12; 95% CI, 1.02-1.21), where the rate of women meeting this quality indicator exceeded that of men by 1.94% (70.8% for women vs. 68.9% for men).
Conclusions: Gender disparities in the management and outcomes of CVD exist among patients in commercial managed care plans despite similar access to care. Poor performance in LDL control was seen in both men and women, with a lower rate of control in women suggesting the possibility of less intensive cholesterol treatment in women. The differences in patterns of care demonstrate the need for interventions tailored to address gender disparities.
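As a quick check on how the adjusted odds ratio relates to the raw rates reported above, the unadjusted odds ratio implied by LDL control in 44.3% of men versus 38.5% of women with diabetes can be computed directly; it falls close to the adjusted estimate of 0.81.

men, women = 0.443, 0.385          # reported LDL control rates among members with diabetes

odds_men   = men / (1 - men)
odds_women = women / (1 - women)
print(f"unadjusted OR (women vs men) = {odds_women / odds_men:.2f}")   # about 0.79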
abstract_id: PUBMED:17481918
Gender disparities in cardiovascular disease care among commercial and medicare managed care plans. Background: Gender disparities in cardiovascular care have been documented in studies of patients, but little is known about whether these disparities persist among managed health care plans. This study examined 1) the feasibility of gender-stratified quality of care reporting by commercial and Medicare health plans; 2) possible gender differences in performance on prevention and treatment of cardiovascular disease in US health plans; and 3) factors that may contribute to disparities as well as potential opportunities for closing the disparity gap.
Methods: We evaluated plan-level performance on Healthcare Effectiveness Data and Information Set (HEDIS) measures using a national sample of commercial health plans that voluntarily reported gender-stratified data and for all Medicare plans with valid member-level data that allowed the computation of gender-stratified performance data. Key informant interviews were conducted with a subset of commercial plans. Participating commercial plans in this study tended to be larger and higher performing than other plans who routinely report on HEDIS performance.
Results: Nearly all Medicare and commercial plans had sufficient numbers of eligible members to allow for stable reporting of gender-stratified performance rates for diabetes and hypertension, but fewer commercial plans were able to report gender-stratified data on measures where eligibility was based on recent cardiac events. Over half of participating commercial plans showed a disparity of ≥5% in favor of men for cholesterol control measures among persons with diabetes and persons with a recent cardiovascular procedure or heart attack, whereas no commercial plans showed such disparities in favor of women. These gender differences favoring men were even larger for Medicare plans, and disparities were not linked to health plan performance or region.
Conclusions And Discussion: Eliminating gender disparities in selected cardiovascular disease preventive quality of care measures has the potential to reduce major cardiac events including death by 4,785-10,170 per year among persons enrolled in US health plans. Health plans should be encouraged to collect and monitor quality of care data for cardiovascular disease for men and women separately as a focus for quality improvement.
abstract_id: PUBMED:16107622
Trends in the quality of care and racial disparities in Medicare managed care. Background: Since 1997, all managed-care plans administered by Medicare have reported on quality-of-care measures from the Health Plan Employer Data and Information Set (HEDIS). Studies of early data found that blacks received care that was of lower quality than that received by whites. In this study, we assessed changes over time in the overall quality of care and in the magnitude of racial disparities in nine measures of clinical performance.
Methods: In order to compare the quality of care for elderly white and black beneficiaries enrolled in Medicare managed-care plans who were eligible for at least one of nine HEDIS measures, we analyzed 1.8 million individual-level observations from 183 health plans from 1997 to 2003. For each measure, we assessed whether the magnitude of the racial disparity had changed over time with the use of multivariable models that adjusted for the age, sex, health plan, Medicaid eligibility, and socioeconomic position of beneficiaries on the basis of their area of residence.
Results: During the seven-year study period, clinical performance improved on all measures for both white enrollees and black enrollees (P<0.001). The gap between white beneficiaries and black beneficiaries narrowed for seven HEDIS measures (P<0.01). However, racial disparities did not decrease for glucose control among patients with diabetes (increasing from 4 percent to 7 percent, P<0.001) or for cholesterol control among patients with cardiovascular disorders (increasing from 14 percent to 17 percent; change not significant, P=0.72).
Conclusions: The measured quality of care for elderly Medicare beneficiaries in managed-care plans improved substantially from 1997 to 2003. Racial disparities declined for most, but not all, HEDIS measures we studied. Future research should examine factors that contributed to the narrowing of racial disparities on some measures and focus on interventions to eliminate persistent disparities in the quality of care.
abstract_id: PUBMED:26035044
Quality of Care for Chronic Conditions Among Disabled Medicaid Enrollees: An Evaluation of a 1915 (b) and (c) Waiver Program. Importance: Examining the impact of Medicaid-managed care home-based and community-based service (HCBS) alternatives to institutional care is critical given the recent rapid expansion of these models nationally.
Objective: We analyzed the effects of STAR+PLUS, a Texas Medicaid-managed care HCBS waiver program for adults with disabilities on the quality of chronic disease care.
Design, Setting, And Participants: We compared quality before and after a mandatory transition of disabled Medicaid enrollees older than 21 years from fee-for-service (FFS) or primary care case management (PCCM) to STAR+PLUS in 28 counties, relative to enrollees in counties remaining in the FFS or PCCM models.
Measures And Analysis: Person-level claims and encounter data for 2006-2010 were used to compute adherence to 6 quality measures. With county as the independent sampling unit, we employed a longitudinal linear mixed-model analysis accounting for administrative clustering and geographic and individual factors.
Results: Although quality was similar among programs at baseline, STAR+PLUS enrollees experienced large and sustained improvements in use of β-blockers after discharge for heart attack (49% vs. 81% adherence posttransition; P<0.01) and appropriate use of systemic corticosteroids and bronchodilators after a chronic obstructive pulmonary disease event (39% vs. 68% adherence posttransition; P<0.0001) compared with FFS/PCCM enrollees. No statistically significant effects were identified for quality measures for asthma, diabetes, or cardiovascular disease.
Conclusion: In 1 large Medicaid-managed care HCBS program, the quality of chronic disease care linked to acute events improved while that provided during routine encounters appeared unaffected.
abstract_id: PUBMED:29929865
Mapping the Gaps: Gender Differences in Preventive Cardiovascular Care among Managed Care Members in Four Metropolitan Areas. Background: Prior research documents gender gaps in cardiovascular risk management, with women receiving poorer quality routine care on average, even in managed care systems. Although population health management tools and quality improvement efforts have led to better overall care quality and narrowing of racial/ethnic gaps for a variety of measures, we sought to quantify persistent gender gaps in cardiovascular risk management and to assess the performance of routinely used commercial population health management tools in helping systems narrow gender gaps.
Methods: Using 2013 through 2014 claims and enrollment data from more than 1 million members of a large national health insurance plan, we assessed performance on seven evidence-based quality measures for the management of coronary artery disease and diabetes mellitus, a cardiac risk factor, across and within four metropolitan areas. We used logistic regression to adjust for region, demographics, and risk factors commonly tracked in population health management tools.
Findings: Low-density lipoprotein (LDL) cholesterol control (LDL < 100 mg/dL) rates were 5 and 15 percentage points lower for women than for men with diabetes mellitus (p < .0001) and coronary artery disease (p < .0001), respectively. Adjusted analyses showed that women were more likely to have gaps in LDL control, with an odds ratio of 1.31 (95% confidence interval, 1.27-1.38) in diabetes mellitus and 1.88 (95% confidence interval, 1.65-2.10) in coronary artery disease.
Conclusions: Given our findings that gender gaps persist across both clinical and geographic variation, we identified additional steps health plans can take to reduce disparities. For measures where gaps have been consistently identified, we recommend that gender-stratified quality reporting and analysis be used to complement widely used algorithms to identify individuals with unmet needs for referral to population health and wellness behavior support programs.
abstract_id: PUBMED:13678806
Improving women's quality of care for cardiovascular disease and diabetes: the feasibility and desirability of stratified reporting of objective performance measures. Despite growing recognition of significant morbidity and mortality among women from cardiovascular disease, management of primary and secondary cardiac risk factors continues to be suboptimal for many women. Although there is a good deal of room to improve the care for cardiovascular disease and diabetes in men, existing gender differences in performance suggest much can be gained by specifically assessing and monitoring quality of care for these conditions in women. In this paper, we describe recent work showing gender differences in quality of ambulatory care in managed care plans with some plans having substantial gender differences on widely used measures of the quality of primary and secondary prevention of cardiac disease. We then discuss potential benefits of and barriers to routine reporting of objective measures of the quality of care, such as Health Plan Employer Data and Information Set (HEDIS) measures, by health plans.
abstract_id: PUBMED:19320605
Comparing quality of care between a consumer-directed health plan and a traditional plan: an analysis of HEDIS measures related to management of chronic diseases. A cross-sectional, retrospective medical and pharmaceutical claims data analysis was conducted to determine if Healthcare Effectiveness Data and Information Set (HEDIS) measures related to care for chronic conditions differed between enrollees in a traditional comprehensive major medical plan (CMM) and a consumer-directed health plan (CDHP). Eleven HEDIS measures for 2006 were compared for CMM and CDHP enrollees in a health plan. Measures included care for persons with diabetes, asthma, depression, cardiovascular disease, and low back pain, and for persons taking persistent medications for specific conditions. In the CMM population, 1,238,949 members were eligible to be included; 131,763 members in the CDHP population were eligible. Statistical significance testing was performed. As measured by HEDIS, CDHP enrollees received higher quality of care than did CMM enrollees in areas related to low back pain, and eye exams and nephropathy screening for persons with diabetes. No significant differences were found between CDHP enrollees and CMM enrollees for measures describing medication management for persons with depression and asthma, annual monitoring for persons taking persistent medications, cholesterol management for persons with cardiovascular disease, or HbA1c testing and low-density lipoprotein screening for persons with diabetes. Enrollees in CDHPs who have chronic conditions received care at levels of quality equal to or better than CMM enrollees. The potential for increased financial responsibility in the CDHP plan did not appear to deter those enrollees from pursuing necessary care. Future research should control for the demographic factors thought to influence both selection into a plan design and quality of care.
abstract_id: PUBMED:21851137
Classification of health plans based on relative resource use and quality of care. Objective: To examine variation among commercial health plans in resource use and quality of care for patients with diabetes mellitus or cardiovascular disease.
Study Design: Cohort study using Healthcare Effectiveness Data and Information Set data submitted to the National Committee for Quality Assurance in 2008.
Methods: Composite measures were estimated for diabetes and cardiovascular disease resource use and quality of care. A "value" classification approach was defined. Descriptive statistics, Pearson product-moment correlations between resource use and quality of care, and 90% confidence intervals around each health plan's composite measures of resource use and quality of care were obtained. Health plans were classified based on their results.
Results: For patients with diabetes, the correlation between combined medical care services resource use and composite quality of care is negative (-0.201, p = .008); the correlation between ambulatory pharmacy services resource use and composite quality of care is positive (0.162, p = .03). For patients with cardiovascular disease, no significant correlation was found between combined medical care services resource use and composite quality of care (-0.007, p = .94) or ambulatory pharmacy services resource use (0.170, p = .06).
Conclusions: Measures of resource use and quality of care provide important information about the value of a health plan. Although our analysis did not determine causality, the statistically weak or absent correlations between resource use and quality of care suggest that health plans and practices can create higher value by improving quality of care without large increases in resource use or by maintaining the same quality of care with decreased resource use.
abstract_id: PUBMED:27111865
Quality of Care for White and Hispanic Medicare Advantage Enrollees in the United States and Puerto Rico. Importance: Geographic, racial, and ethnic variations in quality of care and outcomes have been well documented among the Medicare population. Few data exist on beneficiaries living in Puerto Rico, three-quarters of whom enroll in Medicare Advantage (MA).
Objective: To determine the quality of care provided to white and Hispanic MA enrollees in the United States and Puerto Rico.
Design, Setting, And Participants: A cross-sectional study of MA enrollees in 2011 was conducted, including white enrollees in the United States (n = 6 289 374), Hispanic enrollees in the United States (n = 795 039), and Hispanic enrollees in Puerto Rico (n = 267 016). The study was conducted from January 1, 2011, to December 31, 2011; data analysis took place from January 19, 2015, to January 2, 2016.
Main Outcomes And Measures: Seventeen performance measures related to diabetes mellitus (including hemoglobin A1c control, retinal eye examination, low-density lipoprotein cholesterol control, nephropathy screening, and blood pressure control), cardiovascular disease (including low-density lipoprotein cholesterol control, blood pressure control, and use of a β-blocker after myocardial infarction), cancer screening (colorectal and breast), and appropriate medications (including systemic corticosteroids and bronchodilators for chronic obstructive pulmonary disease [COPD] and disease-modifying antirheumatic drugs).
Results: Of the 7.35 million MA enrollees in the United States and Puerto Rico in our study, 1.06 million (14.4%) were Hispanic. Approximately 25.1% of all Hispanic MA enrollees resided in Puerto Rico, which was more than those residing in any state. For 15 of the 17 measures assessed, Hispanic MA enrollees in Puerto Rico received worse care compared with Hispanics in the United States, with absolute differences in performance rates ranging from 2.2 percentage points for blood pressure control in diabetes mellitus (P = .03) to 31.3 percentage points for use of disease-modifying antirheumatic drug therapy (P < .01). Adjusted performance differences between Hispanic MA enrollees in Puerto Rico and Hispanic MA enrollees in the United States exceeded 20 percentage points for 3 measures: use of disease-modifying antirheumatic drug therapy (-23.8 percentage points [95% CI, -30.9 to -16.8]), use of systemic corticosteroid in COPD exacerbation (-21.3 percentage points [95% CI, -27.5 to -15.1]), and use of bronchodilator therapy in COPD exacerbation (-22.7 percentage points [95% CI, -27.7 to -17.6]).
Conclusions And Relevance: We found modest differences in care between white and Hispanic MA enrollees in the United States but substantially worse care for enrollees in Puerto Rico compared with their US counterparts. Major efforts are needed to improve care delivery on the island to a level equivalent to the United States.
Answer: Yes, the quality of care for cardiovascular disease and diabetes does differ by gender for enrollees in managed care plans. Studies have found significant gender differences in the quality of care, with men often receiving more favorable care than women. For example, among Medicare enrollees, significant gender differences were found on 5 of 11 measures, with 4 favoring men. Among commercial enrollees, significant gender differences were observed for 8 of 11 measures, with 6 favoring men. The largest disparity was observed for control of LDL-C among diabetics, where women were less likely to achieve control compared to men (PUBMED:17434752).
Another study showed that women were less likely than men to have LDL cholesterol controlled in those who have diabetes or a history of CVD. However, women achieved higher performance than men in controlling blood pressure (PUBMED:17448685). Similarly, nearly all Medicare and commercial plans showed disparities in cholesterol control measures among persons with diabetes and persons with a recent cardiovascular procedure or heart attack, favoring men. These disparities were not linked to health plan performance or region (PUBMED:17481918).
Furthermore, gender disparities in cardiovascular care have been documented in studies of patients, with women receiving poorer quality routine care on average, even in managed care systems. Gender gaps in cardiovascular risk management persist, with women more likely to have gaps in LDL control (PUBMED:29929865).
In summary, there is evidence of gender disparities in the quality of care for cardiovascular disease and diabetes among enrollees in managed care plans, with men often receiving better care than women in several quality measures. These findings highlight the need for interventions tailored to address gender disparities in healthcare.
Instruction: Perceived neighborhood environment and physical activity in 11 countries: do associations differ by country?
Abstracts:
abstract_id: PUBMED:23672435
Perceived neighborhood environment and physical activity in 11 countries: do associations differ by country? Background: Increasing empirical evidence supports associations between neighborhood environments and physical activity. However, since most studies were conducted in a single country, particularly western countries, the generalizability of associations in an international setting is not well understood. The current study examined whether associations between perceived attributes of neighborhood environments and physical activity differed by country.
Methods: Population representative samples from 11 countries on five continents were surveyed using comparable methodologies and measurement instruments. Neighborhood environment × country interactions were tested in logistic regression models with meeting physical activity recommendations as the outcome, adjusted for demographic characteristics. Country-specific associations were reported.
Results: Significant neighborhood environment attribute × country interactions implied some differences across countries in the association of each neighborhood attribute with meeting physical activity recommendations. Across the 11 countries, land-use mix and sidewalks had the most consistent associations with physical activity. Access to public transit, bicycle facilities, and low-cost recreation facilities had some associations with physical activity, but with less consistency across countries. There was little evidence supporting the associations of residential density and crime-related safety with physical activity in most countries.
Conclusion: There is evidence of generalizability for the associations of land-use mix and presence of sidewalks with physical activity. Associations of other neighborhood characteristics with physical activity tended to differ by country. Future studies should include objective measures of neighborhood environments, compare psychometric properties of reports across countries, and use better-specified models to further understand the similarities and differences in associations across countries.
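As a rough illustration of the interaction analysis described in this abstract (not the study's actual code), the following Python sketch shows how a neighborhood-attribute × country interaction can be tested in a logistic regression for a binary "meets physical activity recommendations" outcome. The data are simulated and all variable names (country, sidewalks, meets_pa) are hypothetical.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# Simulated survey data with hypothetical variable names.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "country": rng.choice(["A", "B", "C"], size=n),   # placeholder country labels
    "sidewalks": rng.integers(0, 2, size=n),          # perceived presence of sidewalks (0/1)
    "age": rng.integers(20, 66, size=n),
    "female": rng.integers(0, 2, size=n),
})
# Make the sidewalk-activity association stronger in country A, as a toy example.
lin = -0.5 + 0.6 * df["sidewalks"] * (df["country"] == "A") + 0.1 * df["female"]
df["meets_pa"] = rng.binomial(1, 1 / (1 + np.exp(-lin)))

# Model with environment x country interaction vs. main effects only.
m_int = smf.logit("meets_pa ~ sidewalks * C(country) + age + female", data=df).fit(disp=False)
m_main = smf.logit("meets_pa ~ sidewalks + C(country) + age + female", data=df).fit(disp=False)

# Likelihood-ratio test: does the sidewalk-activity association differ by country?
lr = 2 * (m_int.llf - m_main.llf)
print("LR test p-value:", stats.chi2.sf(lr, m_int.df_model - m_main.df_model))

A significant interaction (here tested with a likelihood-ratio test) is what justifies reporting country-specific associations, as the authors do.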
abstract_id: PUBMED:24929197
Association between perceived urban built environment attributes and leisure-time physical activity among adults in Hangzhou, China. Background: Neighborhood built environment may influence residents' physical activity, which in turn, affects their health. This study aimed to determine the associations between perceived built environment and leisure-time physical activity in Hangzhou, China.
Methods: 1440 participants aged 25-59 were randomly selected from 30 neighborhoods in three types of administrative planning units in Hangzhou. International Physical Activity Questionnaire long form and NEWS-A were used to obtain individual-level data. The China Urban Built Environment Scan Tool was used to objectively assess the neighborhood-level built environment. Multi-level regression was used to explore the relationship between perceived built environment variables and leisure-time physical activities. Data was collected in Hangzhou from June to December in 2012, and was analyzed in May 2013.
Results: Significant between-neighborhood random variation in physical activity was identified (P = 0.0134); neighborhood-level differences accounted for 3.0% of the variability in leisure-time physical activity. Male residents who reported higher scores on perceived access to physical activity destinations reported more involvement in leisure-time physical activity. Higher scores on perceived esthetic quality and lower scores on residential density were associated with more time spent in leisure-time walking among women.
Conclusions: The present study demonstrated that perceived urban built environment attributes significantly correlate with leisure-time physical activity in Hangzhou, China.
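To illustrate the multi-level approach mentioned in the Methods above, the sketch below fits a random-intercept model and computes how much of the variability in leisure-time physical activity sits at the neighborhood level. This is an illustrative example only, with simulated data and hypothetical variable names, not the study's code.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate 30 neighborhoods with 48 respondents each (hypothetical numbers).
rng = np.random.default_rng(1)
n_nbhd, per = 30, 48
nbhd = np.repeat(np.arange(n_nbhd), per)
nbhd_effect = rng.normal(0, 0.5, n_nbhd)[nbhd]
df = pd.DataFrame({
    "neighborhood": nbhd,
    "access_score": rng.normal(3, 1, n_nbhd * per),   # perceived access to PA destinations
})
df["ltpa_min"] = 120 + 15 * df["access_score"] + 60 * nbhd_effect + rng.normal(0, 90, len(df))

# Random-intercept (multi-level) linear model: respondents nested in neighborhoods.
m = smf.mixedlm("ltpa_min ~ access_score", df, groups=df["neighborhood"]).fit()
between = float(m.cov_re.iloc[0, 0])   # neighborhood-level variance
within = m.scale                       # residual (individual-level) variance
print("Share of variance at the neighborhood level (ICC):", between / (between + within))

The intraclass correlation computed this way corresponds to the kind of figure the abstract reports (neighborhood-level differences accounting for about 3% of the variability).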
abstract_id: PUBMED:33168094
Measuring the association of objective and perceived neighborhood environment with physical activity in older adults: challenges and implications from a systematic review. Background: A supportive environment is a key factor in addressing the issue of health among older adults. There is already sufficient evidence that objective and self-reported measures of the neighborhood environment should be taken into account as crucial components of active aging, as they have been shown to influence physical activity; particularly in people aged 60+. Thus, both could inform policies and practices that promote successful aging in place. An increasing number of studies meanwhile consider these exposures in analyzing their impact on physical activity in the elderly. However, there is a wide variety of definitions, measurements and methodological approaches, which complicates the process of obtaining comparable estimates of the effects and pooled results. The aim of this review was to identify and summarize these differences in order to emphasize methodological implications for future reviews and meta analyzes in this field and, thus, to create a sound basis for synthesized evidence.
Methods: A systematic literature search across eight databases was conducted to identify peer-reviewed articles examining the association of objective and perceived measures of the neighborhood environment and objectively measured or self-reported physical activity in adults aged ≥ 60 years. Two authors independently screened the articles according to predefined eligibility criteria, extracted data, and assessed study quality. A qualitative synthesis of the findings is provided.
Results: Of the 2967 records retrieved, 35 studies met the inclusion criteria. Five categories of methodological approaches, numerous measurement instruments to assess the neighborhood environment and physical activity, as well as several clusters of definitions of neighborhood, were identified.
Conclusions: The strength of evidence for the associations of specific categories of environmental attributes with physical activity varies across measurement types of the outcome and exposures, as well as the physical activity domain observed and the operationalization of neighborhood, with the latter being of great importance for the targeted age group. In light of this, future reviews should consider these variations and stratify their summaries according to the different approaches, measures, and definitions. Further, underlying mechanisms should be explored.
abstract_id: PUBMED:31064351
The perceived neighborhood environment is associated with health-enhancing physical activity among adults: a cross-sectional survey of 13 townships in Taiwan. Background: Many environmental factors have been associated with physical activity. The environment is considered a key factor in terms of the rate of engagement in physical activity. This study examined the perceived effect of environmental factors on different levels of health-enhancing physical activity among Taiwanese adults.
Methods: Data were collected from 549 adults aged at least 18 years from the northern, central, southern and eastern regions of Taiwan. Physical activity was measured using the International Physical Activity Questionnaire (IPAQ) showcard version, and participants were divided into three categories: those who performed low-, moderate-, or high-levels of physical activity, as suggested by the IPAQ scoring protocol. The perceived neighborhood environment in relation to physical activity was adapted from the Physical Activity Neighborhood Environment Scale. A multinomial logistic regression was conducted to ascertain associations between individual perceptions of the neighborhood environment and different physical activity levels.
Results: Respondents who perceived their neighborhood environment as having easy access to services and stores and higher traffic safety were more likely to engage in a moderate level of physical activity (odds ratio [OR]: 1.90, 95% confidence interval [CI]: 1.07-3.37; OR: 1.77, 95% CI: 1.12-2.80). The perception of having easy access to services and stores and seeing many physically active people in the neighborhood were both positively associated with a high level of physical activity (OR: 2.25, 95% CI: 1.01-5.01; OR: 2.40, 95% CI: 1.11-5.23).
Conclusions: Different perceived neighborhood environmental factors were associated with moderate and high levels of physical activity, respectively. These findings highlight the importance of an activity-friendly neighborhood environment to stimulate engagement in physical activity among adults in Taiwan. Therefore, policies and programs should focus on improving friendliness and diversity in neighborhoods to facilitate individuals' transitions from inactive to active lifestyles.
abstract_id: PUBMED:35603774
Racial and Ethnic Differences in the Relationship Between Neighborhood Environment and Physical Activity Among Middle-Aged and Older Adults. Objectives: To examine the associations between neighborhood environment (perceived neighborhood social cohesion and perceived neighborhood physical environment) and physical activity (PA) and whether these associations differ by race/ethnicity. Methods: We analyzed data from the Health and Retirement Study, a longitudinal study of US adults aged 50+ from 2006 to 2014 (N = 17,974), using multivariate mixed-effects linear models. PA was repeatedly measured using metabolic equivalent of task estimated values accounting for the vigor and frequency of self-reported PA. Results: In multivariate models, higher levels of PA were positively associated with higher rated neighborhood social cohesion and neighborhood physical environment scores. The effects of social cohesion were stronger among non-Hispanic Whites than among non-Hispanic Black and Hispanic/Latinx participants, while race/ethnicity did not moderate the association between PA and physical environment. Discussion: Intervention strategies that address social and physical barriers of neighborhoods could promote PA in older adults. Key implications for future research are discussed.
abstract_id: PUBMED:35359780
Perceived Neighborhood Environment Impacts on Health Behavior, Multi-Dimensional Health, and Life Satisfaction. The impacts of perceived neighborhood environment on adults' health and life satisfaction have drawn increasing academic attention. However, previous studies usually examine multi-dimensional (physical, mental, and perceived) health and life satisfaction separately, and few studies dealt with them simultaneously. Moreover, limited research revealed the mechanisms behind the effects of perceived neighborhood environment on health and life satisfaction, as well as how such effects are moderated by socio-demographics. Therefore, employing the 2016 China Family Panel Study Dataset and using structural equation modeling, this study delves into the complicated relationships among perceived neighborhood environment, health behavior, health outcomes (i.e., body mass index, self-rated health status, and depression), and life satisfaction. Notably, it considers mediation and moderation simultaneously. It finds: (1) Better perceived neighborhood environment significantly promotes physical activity and reduces sedentary behavior, smoking, and drinking; (2) Health behavior fully mediates the effects of perceived neighborhood environment on health; (3) Perceived neighborhood environment significantly affects life satisfaction both directly and indirectly (through health behavior and health outcomes); (4) Socio-demographics moderate the above relationships. This study disentangles the complicated impacts of perceived neighborhood environment on adults' multi-dimensional health and life satisfaction, thus providing policy makers and practitioners with nuanced knowledge for intervention.
abstract_id: PUBMED:28544991
Associations of neighborhood social environment attributes and physical activity among 9-11 year old children from 12 countries. We investigated whether associations of neighborhood social environment attributes and physical activity differed among 12 countries and levels of economic development using World Bank classification (low/lower-middle-, upper-middle- and high-income countries) among 9-11 year old children (N=6161) from the International Study of Childhood Obesity, Lifestyle, and the Environment (ISCOLE). Collective efficacy and perceived crime were obtained via parental/guardian report. Moderate-to-vigorous physical activity (MVPA) was assessed with waist-worn Actigraph accelerometers. Neighborhood environment by country interactions were tested using multi-level statistical models, adjusted for covariates. Effect estimates were reported by country and pooled estimates calculated across World Bank classifications for economic development using meta-analyses and forest plots. Associations between social environment attributes and MVPA varied among countries and levels of economic development. Associations were more consistent and in the hypothesized directions among countries with higher levels of economic development, but less so among countries with lower levels of economic development.
abstract_id: PUBMED:26582209
Neighborhood and family perceived environments associated with children's physical activity and body mass index. Background: A growing body of research has been examining neighborhood environment related to children's physical activity and obesity. However, there is still not enough evidence from Latin America.
Objective: To investigate the association of neighborhood and family perceived environments, use of and distance to public open spaces with leisure-time physical activity (LTPA) and body mass index (BMI) in Argentinean school-aged children.
Methods: School-based, cross-sectional study with 1777 children (9 to 11 years) and their parents, in Cordoba city during 2011. Children were asked about LTPA and family perceived environment. Parents were asked about neighborhood perceived environment, children's use of public open spaces and distance. Weight and height were measured for BMI. We modeled children's LTPA and BMI z-score with structural equation models with latent variables for built, social and safety neighborhood environments.
Results: Parents' perceived neighborhood environment was not related with children's LTPA and BMI. Children's perceived autonomy and family environment were positively associated with LTPA. Use of unstructured open spaces and, indirectly, the distance to these, was associated with LTPA among girls. Greater distance to parks reduced their use by children.
Conclusions: Policies to increase children's LTPA should include access to better public open spaces, increasing options for activity. A family approach should be incorporated, reinforcing its role for healthy development.
abstract_id: PUBMED:25840351
Changes in the perceived neighborhood environment in relation to changes in physical activity: A longitudinal study from childhood into adolescence. The aim was to investigate how physical activity and the perceived neighborhood environment in children change when they enter adolescence. The relation between changes in the perceived environment and changes in children's physical activity was also investigated. In total, 321 children and one of their parents filled out a physical activity questionnaire and the NEWS-Y at two time points (last grade of elementary school and 2 years later). Children also wore an activity monitor. Changes in children's physical activity were dependent on the physical activity domain. Fewer than half of children's perceived neighborhood factors changed, and about half of the parental perceived neighborhood factors changed. Most of these factors changed towards higher activity friendliness. Changes in the perceived environment were related only to a limited extent to changes in children's physical activity.
abstract_id: PUBMED:31815626
The neighborhood social environment and physical activity: a systematic scoping review. Background: Investigating the association of the neighborhood social environment on physical activity is complex. A systematic scoping review was performed to (1) provide an inventory of studies assessing the influence of the neighborhood social environment on physical activity since 2006; (2) describe methodologies employed; and (3) formulate recommendations for the field.
Methods: Two databases were searched using terms related to 'physical activity,' 'neighborhood,' and 'social environment' in January 2017. Eligibility criteria included: 1) physical activity as an outcome; 2) neighborhood social environment as a predictor; 3) healthy population (without diagnosed clinical condition or special population); 4) observational or experimental design. Of 1352 studies identified, 181 were included. Textual data relevant to the social environment measurement and analysis were extracted from each article into qualitative software (MAXQDA) and coded to identify social environmental constructs, measurement methods, level of measurement (individual vs. aggregated to neighborhood), and whether authors explicitly recognized the construct as the social environment. The following measures were generated for each construct: number of unique measurements; % of times measured at an aggregate level; % of times authors referred to the construct as the social environment. Social environmental constructs were then grouped into larger descriptive dimensions.
Results/findings: Fifty-nine social environmental constructs were identified and grouped into 9 dimensions: Crime & Safety (n = 133 studies; included in 73% of studies); Economic & Social Disadvantage (n = 55, 33%); Social Cohesion & Capital (n = 47, 26%); Social Relationships (n = 22, 12%); Social Environment (n = 16, 9%); Disorder & Incivilities (n = 15, 8%); Sense of Place/Belonging (n = 8, 4%); Discrimination/Segregation (n = 3, 2%); Civic Participation & Engagement (n = 2, 1%). Across all articles, the social environment was measured using 176 different methods, was measured at an aggregate-level 38% of the time, and referred to as the social environment 23% of the time.
Conclusions: Inconsistent terminology, definitions, and measurement of the social environment and the lack of explicit language identifying constructs as the social environment make it challenging to compare results across studies and draw conclusions. Improvements are needed to increase our understanding of social environmental correlates and/or determinants of physical activity and facilitate cross-disciplinary conversations necessary to effectively intervene to promote physical activity.
Trial Registration: PROSPERO CRD42017059580.
Answer: Yes, associations between perceived neighborhood environment and physical activity do differ by country. A study examining this relationship across 11 countries found that significant neighborhood environment attribute × country interactions suggested differences in how each neighborhood attribute was associated with meeting physical activity recommendations (PUBMED:23672435). For instance, land-use mix and sidewalks had the most consistent associations with physical activity across the 11 countries, while access to public transit, bicycle facilities, and low-cost recreation facilities had some associations with physical activity but with less consistency across countries. There was little evidence supporting the associations of residential density and crime-related safety with physical activity in most countries (PUBMED:23672435).
These findings indicate that while some neighborhood features may have a universal impact on physical activity, such as land-use mix and sidewalks, other features may have varying levels of influence depending on the country. This suggests that cultural, social, and infrastructural differences between countries can affect how the neighborhood environment influences physical activity. Therefore, it is important for future studies to consider these differences and to use objective measures of neighborhood environments, compare psychometric properties of reports across countries, and use better-specified models to further understand the similarities and differences in associations across countries (PUBMED:23672435). |
Instruction: Silver nanoparticles and polymeric medical devices: a new approach to prevention of infection?
Abstracts:
abstract_id: PUBMED:25492173
In situ formation of antimicrobial silver nanoparticles and the impregnation of hydrophobic polycaprolactone matrix for antimicrobial medical device applications. Bacterial infection associated with medical devices remains a challenge to modern medicine as more patients are being implanted with medical devices that provide surfaces and an environment for bacterial colonization. In particular, bacteria are commonly found to adhere preferentially to hydrophobic materials, many of which are used to make medical devices. Bacteria are also becoming increasingly resistant to common antibiotic treatments as a result of misuse and abuse of antibiotics. There is an urgent need to find alternatives to antibiotics in the prevention and treatment of device-associated infections worldwide. Silver nanoparticles have emerged as a promising non-drug antimicrobial agent which has shown effectiveness against a wide range of both Gram-negative and Gram-positive pathogens. However, for silver nanoparticles to be clinically useful, they must be properly incorporated into medical device materials whose wetting properties could be detrimental not only to the incorporation of the hydrophilic Ag nanoparticles but also to the release of active Ag ions. This study aimed at impregnating the hydrophobic polycaprolactone (PCL) polymer, an FDA-approved polymeric medical device material, with hydrophilic silver nanoparticles. Furthermore, a novel approach was employed to uniformly incorporate silver nanoparticles into the PCL matrix in situ and to improve the release of Ag ions from the matrix so as to enhance antimicrobial efficacy.
abstract_id: PUBMED:28481077
Silver-Decorated Polymeric Micelles Combined with Curcumin for Enhanced Antibacterial Activity. Because of the mounting prevalence of complicated infections induced by multidrug-resistant bacteria, it is imperative to develop innovative and efficient antibacterial agents. In this work, we design a novel polymeric micelle for the simultaneous decoration of silver nanoparticles and encapsulation of curcumin as a combination strategy to improve the antibacterial efficiency. In the constructed combination system, silver nanoparticles were decorated in the micellar shell because of the in situ reduction of silver ions, which were absorbed by the poly(aspartic acid) (PAsp) chains in the shell. Meanwhile, natural curcumin was encapsulated into the poly(ε-caprolactone) (PCL) core of the micelle through hydrophobic interaction. This strategy could prevent aggregation of silver nanoparticles and improve the water solubility of curcumin at the same time, and the system showed enhanced antibacterial activity toward Gram-negative P. aeruginosa and Gram-positive S. aureus compared with the silver-decorated micelle and curcumin-loaded micelle alone, due to the cooperative antibacterial effects of the silver nanoparticles and curcumin. Furthermore, the achieved combinational micelles had good biocompatibility and low hemolytic activity. Thus, our study provides a new pathway in the rational design of combination strategies for efficiently preventing ubiquitous bacterial infections.
abstract_id: PUBMED:32172435
Simvastatin decreases the silver resistance of E. faecalis through compromising the entrapping function of extracellular polymeric substances against silver. Enterococcus faecalis (E. faecalis) is a Gram-positive bacterium closely related to many refractory human infections and shows resistance against the antibacterial effects of silver. Simvastatin is a semisynthetic compound derived from lovastatin and a hydroxymethyl glutaryl coenzyme A (HMG-CoA) reductase inhibitor showing certain inhibitory effects on bacteria. The main purpose of this study was to establish and characterize the Ag+/silver nanoparticles (AgNPs)-resistant E. faecalis, and further evaluate the function of extracellular polymeric substances (EPS) in the silver resistance and the effect of simvastatin on the silver resistance of E. faecalis. The results showed that the established silver-resistant E. faecalis had strong resistance against both Ag+ and AgNPs, and that simvastatin could decrease the silver resistance of both original and Ag+/AgNPs-resistant E. faecalis. Transmission electron microscopy (TEM), high-angle annular dark-field (HAADF), and mapping images showed that the silver ions or particles aggregated and were confined in the EPS on the surface areas of the cell membrane when the silver-resistant E. faecalis were incubated with Ag+ or AgNPs. When simvastatin was added, the silver element was not confined in the EPS and entered the bacteria. These findings may indicate that the silver resistance of E. faecalis was derived from the entrapping function of EPS, but that simvastatin could compromise the function of EPS and thereby decrease the silver resistance of E. faecalis.
abstract_id: PUBMED:32515970
New Silver(I) Coordination Compound Loaded into Polymeric Nanoparticles as a Strategy to Improve In Vitro Anti-Helicobacter pylori Activity. Helicobacter pylori inhabits the gastric epithelium and can promote the development of gastric disorders, such as peptic ulcers, acute and chronic gastritis, mucosal lymphoid tissue (MALT), and gastric adenocarcinomas. To use nanotechnology as a tool to increase the antibacterial activity of silver I [Ag(I)] compounds, this study suggests a new strategy for H. pylori infections, which have hitherto been difficult to control. [Ag (PhTSC·HCl)2] (NO3)·H2O (compound 1) was synthesized, characterized, and loaded into polymeric nanoparticles (PN1). PN1 had been developed by nanoprecipitation with poly(ε-caprolactone) polymer and poloxamer 407 surfactant. System characterization assays showed that the PNs had adequate particle sizes and ζ-potentials. Transmission electron microscopy confirmed the formation of polymeric nanoparticles (PNs). Compound 1 had a minimum inhibitory concentration for H. pylori of 3.90 μg/mL, which was potentiated to 0.781 μg/mL after loading. The minimum bactericidal concentration of 7.81 μg/mL was potentiated 5-fold to 1.56 μg/mL in PN. Compound 1 loaded in PN1 displayed better activity for H. pylori biofilm formation and mature biofilm. PN1 reduced the toxicity of compound 1 to MRC-5 cells. Loading compound 1 into PN1 inhibited the mutagenicity of the free compound. In vivo, the system allowed survival of Galleria mellonella larvae at a concentration of 200 μg/mL. This is the first demonstration of the antibacterial activity of a silver complex enclosed in polymeric nanoparticles against H. pylori.
abstract_id: PUBMED:31344881
Synthesis and Application of Silver Nanoparticles (Ag NPs) for the Prevention of Infection in Healthcare Workers. Silver is easily available and is known to have microbicidal effect; moreover, it does not impose any adverse effects on the human body. The microbicidal effect is mainly due to silver ions, which have a wide antibacterial spectrum. Furthermore, the development of multidrug-resistant bacteria, as in the case of antibiotics, is less likely. Silver ions bind to halide ions, such as chloride, and precipitate; therefore, when used directly, their microbicidal activity is shortened. To overcome this issue, silver nanoparticles (Ag NPs) have been recently synthesized and frequently used as microbicidal agents that release silver ions from particle surface. Depending on the specific surface area of the nanoparticles, silver ions are released with high efficiency. In addition to their bactericidal activity, small Ag NPs (<10 nm in diameter) affect viruses although the microbicidal effect of silver mass is weak. Because of their characteristics, Ag NPs are useful countermeasures against infectious diseases, which constitute a major issue in the medical field. Thus, medical tools coated with Ag NPs are being developed. This review outlines the synthesis and utilization of Ag NPs in the medical field, focusing on environment-friendly synthesis and the suppression of infections in healthcare workers (HCWs).
abstract_id: PUBMED:36484825
Tryptone-stabilized silver nanoparticles' potential to mitigate planktonic and biofilm growth forms of Serratia marcescens. Several microbial pathogens are capable of forming biofilms. These microbial communities pose a serious challenge to the healthcare sector as they are quite difficult to combat. Given the challenges associated with the antibiotic-based management of biofilms, the research focus has now been shifted towards finding alternate treatment strategies that can replace or complement the antibacterial properties of antibiotics. The field of nanotechnology offers several novel and revolutionary approaches to eradicate biofilm-forming microbes. In this study, we evaluated the antibacterial and antibiofilm efficacy of in-house synthesized, tryptone-stabilized silver nanoparticles (Ts-AgNPs) against the superbug Serratia marcescens. The nanoparticles were of spherical morphology with an average hydrodynamic diameter of 170 nm and considerable colloidal stability with a Zeta potential of - 24 ± 6.15 mV. Ts-AgNPs showed strong antibacterial activities with a minimum inhibitory concentration (MIC50) of 2.5 µg/mL and minimum bactericidal concentration (MBC) of 12.5 µg/mL against S. marcescens. The nanoparticles altered the cell surface hydrophobicity and inhibited biofilm formation. The Ts-AgNPs were also effective in distorting pre-existing biofilms by degrading the extracellular DNA (eDNA) component of the extracellular polymeric substance (EPS) layer. Furthermore, reduction in quorum-sensing (QS)-induced virulence factors produced by S. marcescens indicated that Ts-AgNPs attenuated the QS pathway. Together, these findings suggest that Ts-AgNPs are an important anti-planktonic and antibiofilm agent that can be explored for both the prevention and treatment of infections caused by S. marcescens.
abstract_id: PUBMED:15537697
Silver nanoparticles and polymeric medical devices: a new approach to prevention of infection? Objectives: Implantable devices are major risk factors for hospital-acquired infection. Biomaterials coated with silver oxide or silver alloy have all been used in attempts to reduce infection, in most cases with controversial or disappointing clinical results. We have developed a completely new approach using supercritical carbon dioxide to impregnate silicone with nanoparticulate silver metal. This study aimed to evaluate the impregnated polymer for antimicrobial activity.
Methods: After impregnation the nature of the impregnation was determined by transmission electron microscopy. Two series of polymer discs were then tested, one washed in deionized water and the other unwashed. In each series, half of the discs were coated with a plasma protein conditioning film. The serial plate transfer test was used as a screen for persisting activity. Bacterial adherence to the polymers and the rate of kill, and effect on planktonic bacteria were measured by chemiluminescence and viable counts. Release rates of silver ions from the polymers in the presence and absence of plasma was measured using inductively coupled plasma mass spectrometry (ICP-MS).
Results: Tests for antimicrobial activity under various conditions showed mixed results, explained by the modes and rates of release of silver ions. While washing removed much of the initial activity there was continued release of silver ions. Unexpectedly, this was not blocked by conditioning film.
Conclusions: The methodology allows for the first time silver impregnation (as opposed to coating) of medical polymers and promises to lead to an antimicrobial biomaterial whose activity is not restricted by increasing antibiotic resistance.
abstract_id: PUBMED:34281254
Silver Nanoparticles and Their Antibacterial Applications. Silver nanoparticles (AgNPs) have been established as an excellent antimicrobial agent, able to combat infection-causing bacteria in vitro and in vivo. The antibacterial capacity of AgNPs covers Gram-negative and Gram-positive bacteria, including multidrug-resistant strains. AgNPs exhibit multiple and simultaneous mechanisms of action, and in combination with antibacterial agents such as organic compounds or antibiotics they have shown a synergistic effect against pathogenic bacteria such as Escherichia coli and Staphylococcus aureus. The characteristics of silver nanoparticles make them suitable for application in medical and healthcare products, where they may treat or prevent infections efficiently. With the urgent need for new, efficient antibacterial agents, this review aims to establish the factors affecting the antibacterial and cytotoxic effects of silver nanoparticles, as well as to expose the advantages of using AgNPs as new antibacterial agents in combination with antibiotics, which will reduce the dosage needed and prevent the secondary effects associated with both.
abstract_id: PUBMED:22894535
Effect of biologically synthesised silver nanoparticles on Staphylococcus aureus biofilm quenching and prevention of biofilm formation. The development of green experimental processes for the synthesis of nanoparticles is a need in the field of nanotechnology. In the present study, the authors reported rapid synthesis of silver nanoparticles using fresh leaves extract of Cymbopogan citratus (lemongrass) with increased stability. The synthesised silver nanoparticles were found to be stable for several months. UV-visible spectrophotometric analysis was carried out to assess the synthesis of silver nanoparticles. The synthesised silver nanoparticles were further characterised by using nanoparticle tracking analyser (NTA), transmission electron microscope (TEM) and energy-dispersive x-ray spectra (EDX). The NTA results showed that the mean size was found to be 32 nm. Silver nanoparticles with controlled size and shape were observed under TEM micrograph. The EDX of the nanoparticles confirmed the presence of elemental silver. These silver nanoparticles showed enhanced quorum quenching activity against Staphylococcus aureus biofilm and prevention of biofilm formation which can be seen under inverted microscope (40X). In the near future, silver nanoparticles synthesised using green methods may be used in the treatment of infections caused by a highly antibiotic resistant biofilm.
abstract_id: PUBMED:30096228
Superior Bactericidal Efficacy of Fucose-Functionalized Silver Nanoparticles against Pseudomonas aeruginosa PAO1 and Prevention of Its Colonization on Urinary Catheters. Pseudomonas aeruginosa, a Gram-negative rod-shaped bacterium is a notorious pathogen causing chronic infections. Its ability to form antibiotic-resistant biofilm has raised the need for the development of alternative treatment approaches. An ideal alternate can be silver nanoparticles known for their strong yet tunable bactericidal activity. However, their use in commercial in vivo medicine could not see the light of the day because of the unwanted toxicity of silver in the host cells at higher concentrations. Thus, strategies which can modulate the bacterial cell-silver nanoparticle interactions thereby reducing the amount of nanoparticles required to kill a typical number of bacterial cells are utmost welcomed. The current work showcases one such strategy by functionalizing the silver nanoparticles with l-fucose to increase their interactions with the LecB lectins present on P. aeruginosa PAO1. The advantage of this approach lies in the higher bactericidal and antibiofilm activity of fucose-functionalized silver nanoparticles (FNPs) as compared to the citrate-capped silver nanoparticles (CNPs) of similar size and concentrations. The superior bactericidal potential of FNPs as demonstrated by fluorescence-assisted cell sorting, confocal laser scanning microscopy, and transmission electron microscopy analyses may be attributed to the higher reactive oxygen species generation and oxidative membrane damage. Additionally, FNPs prevented the formation of biofilms by downregulating the expression of various virulence genes at lower concentrations as compared to CNPs. The practical applicability of the approach was demonstrated by preventing bacterial colonization on artificial silicone rubber surfaces. These results can be extrapolated in the treatment of catheter-associated urinary tract infections caused by P. aeruginosa. In conclusion, the present work strongly advocates the use of antivirulence targets and their corresponding binding residues for the augmentation of the bactericidal effect of silver nanoparticles.
Answer: Silver nanoparticles (AgNPs) have been increasingly studied for their antimicrobial properties and potential application in medical devices to prevent infections. The development of medical devices impregnated with AgNPs is a response to the challenge of bacterial infections associated with implanted medical devices, which provide surfaces for bacterial colonization. Bacteria tend to adhere more to hydrophobic materials, many of which are used in medical devices, and are also becoming increasingly resistant to antibiotics (PUBMED:25492173).
Studies have shown that AgNPs can be effectively incorporated into medical device materials, such as the hydrophobic polycaprolactone (PCL) polymer, which is FDA-approved for medical device use. This incorporation can enhance the antimicrobial efficacy of the devices by improving the release of active Ag ions (PUBMED:25492173). Additionally, silver-decorated polymeric micelles combined with curcumin have demonstrated enhanced antibacterial activity against both Gram-negative and Gram-positive bacteria, suggesting a synergistic effect when AgNPs are used in combination with other antibacterial agents (PUBMED:28481077).
Furthermore, research has indicated that AgNPs can be used to decrease the silver resistance of bacteria such as Enterococcus faecalis by compromising the protective function of extracellular polymeric substances (EPS) against silver (PUBMED:32172435). Silver(I) coordination compounds loaded into polymeric nanoparticles have also shown improved in vitro anti-Helicobacter pylori activity, suggesting that AgNPs can be a strategy for infections that are difficult to control (PUBMED:32515970).
The synthesis and application of AgNPs have been explored for preventing infections in healthcare workers, with AgNPs being recognized for their broad-spectrum microbicidal effect and lower likelihood of resistance development compared to antibiotics (PUBMED:31344881). Tryptone-stabilized AgNPs have demonstrated potential in mitigating planktonic and biofilm growth forms of Serratia marcescens, indicating their role in both prevention and treatment of infections (PUBMED:36484825).
In summary, the use of AgNPs in polymeric medical devices represents a promising new approach to the prevention of infection, offering a potential solution to the growing problem of antibiotic-resistant bacteria and device-associated infections. The incorporation of AgNPs into medical devices could lead to materials with persistent antimicrobial activity, which is not restricted by increasing antibiotic resistance (PUBMED:15537697). Moreover, AgNPs have been shown to exhibit multiple mechanisms of action and can enhance the efficacy of antibiotics when used in combination, potentially reducing the dosage needed and preventing secondary effects associated with high doses of antibiotics (PUBMED:34281254).
Biologically synthesized AgNPs have also been reported to quench biofilms and prevent biofilm formation of Staphylococcus aureus, which is significant as biofilms are notoriously resistant to conventional antibiotic treatments (PUBMED:22894535). Additionally, fucose-functionalized AgNPs have demonstrated superior bactericidal efficacy against Pseudomonas aeruginosa and have been effective in preventing bacterial colonization on urinary catheters, showcasing the potential of functionalized nanoparticles in medical applications (PUBMED:30096228).
Overall, the integration of AgNPs into polymeric medical devices is a promising strategy for infection prevention, offering a multifaceted approach to combating bacterial colonization and biofilm formation on medical devices. This approach leverages the unique properties of AgNPs to enhance the antimicrobial capabilities of medical polymers, potentially leading to safer and more effective medical devices. |
Instruction: Persistent early organ failure: defining the high-risk group of patients with severe acute pancreatitis?
Abstracts:
abstract_id: PUBMED:18362837
Persistent early organ failure: defining the high-risk group of patients with severe acute pancreatitis? Objectives: The aim of our study was to evaluate the clinical significance of prolonged organ failure during the first week of severe acute pancreatitis and the potential correlation with final outcome.
Methods: Of 234 patients with acute pancreatitis admitted to our department between January 2002 and December 2006, 64 patients with predicted severe acute pancreatitis were studied according to the presence and also the duration of organ failure early in the course of the disease.
Results: Transient (<48 h duration) or persistent (>48 h duration) early organ failure (EOF) was present in 33 of 64 patients (51.5%). All 9 deaths (9/55 patients; 16.5% mortality) were recorded among patients who developed pancreatic necrosis, and the combination of EOF and necrosis was present in most (8/9) patients with fatal outcome (P = 0.009). Persistent EOF was significantly associated with development of infected necrosis (P = 0.037) and worse outcome (P = 0.028) as well. Multivariate analysis with backward elimination identified the duration of EOF as an independent factor affecting outcome.
Conclusions: Persistent organ failure early in the course of acute pancreatitis is a major determinant of outcome. In combination with pancreatic necrosis, survival rate is strongly compromised.
abstract_id: PUBMED:36072851
Clinical utility of the pancreatitis activity scoring system in severe acute pancreatitis. Objective: To analyze the clinical utility of the pancreatitis activity scoring system (PASS) in prediction of persistent organ failure, poor prognosis, and in-hospital mortality in patients with moderately severe acute pancreatitis (MSAP) or severe acute pancreatitis (SAP) admitted to the intensive care unit (ICU).
Methods: The study included a total of 140 patients with MSAP and SAP admitted to the ICU of Shandong Provincial Hospital from 2015 to 2021. The general information, biochemical indexes, and PASS scores of patients at ICU admission were collected. Independent risk factors of persistent organ failure, poor prognosis, and in-hospital mortality were analyzed by binary logistic regression. Through receiver operating characteristic (ROC) curves, the predictive ability of lactic acid, procalcitonin, urea nitrogen, PASS, and PASS in combination with urea nitrogen for the three outcomes was compared. The best cut-off value was determined.
Results: Binary logistic regression showed that PASS might be an independent risk factor for persistent organ failure (odds ratio [OR]: 1.027, 95% confidence interval [CI]: 1.014-1.039), poor prognosis (OR: 1.008, 95% CI: 1.001-1.014), and in-hospital mortality (OR: 1.009, 95% CI: 1.000-1.019). PASS also had a good predictive ability for persistent organ failure (area under the curve [AUC] = 0.839, 95% CI: 0.769-0.910) and in-hospital mortality (AUC = 0.780, 95% CI: 0.669-0.891), which was significantly superior to lactic acid, procalcitonin, urea nitrogen, and the Ranson score. PASS (AUC = 0.756, 95% CI: 0.675-0.837) was second only to urea nitrogen (AUC = 0.768, 95% CI: 0.686-0.850) in the prediction of poor prognosis. Furthermore, the predictive power of urea nitrogen in combination with PASS was better than that of each factor alone for persistent organ failure (AUC = 0.849, 95% CI: 0.779-0.920), poor prognosis (AUC = 0.801, 95% CI: 0.726-0.876), and in-hospital mortality (AUC = 0.796, 95% CI: 0.697-0.894).
Conclusion: PASS was closely correlated with the prognosis of patients with MSAP and SAP. This scoring system may be used as a common clinical index to measure the activity of acute pancreatitis and evaluate disease prognosis.
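The ROC comparison described above can be sketched as follows. This is an illustrative example with simulated data, not the study's code; the variable names (pass_score, urea) and the simulated outcome are hypothetical.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

# Simulated admission data for a cohort of ICU patients (hypothetical values).
rng = np.random.default_rng(2)
n = 300
pass_score = rng.normal(140, 60, n)
urea = rng.normal(8, 3, n)
p = 1 / (1 + np.exp(-(-6 + 0.03 * pass_score + 0.2 * urea)))   # toy outcome model
persistent_of = rng.binomial(1, p)

# Single-predictor AUCs.
print("AUC, PASS alone:", roc_auc_score(persistent_of, pass_score))
print("AUC, urea alone:", roc_auc_score(persistent_of, urea))

# "PASS in combination with urea nitrogen": AUC of the predicted probability
# from a logistic model containing both predictors.
X = np.column_stack([pass_score, urea])
combo = LogisticRegression().fit(X, persistent_of).predict_proba(X)[:, 1]
print("AUC, PASS + urea:", roc_auc_score(persistent_of, combo))

# A best cut-off for PASS chosen by the Youden index (sensitivity + specificity - 1).
fpr, tpr, thr = roc_curve(persistent_of, pass_score)
print("PASS cut-off:", thr[np.argmax(tpr - fpr)])

In practice, the AUCs of competing predictors would also be compared formally (for example with a DeLong test) and validated out of sample rather than on the same data used to fit the combination model.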
abstract_id: PUBMED:29183667
Early prediction of persistent organ failure by serum apolipoprotein A-I and high-density lipoprotein cholesterol in patients with acute pancreatitis. Background: Early identification of acute pancreatitis (AP) patients at high-risk of developing persistent organ failure (persistent OF) is a vital clinical goal. This research intends to assess the ability of apolipoprotein A-I (APO A-I) and high-density lipoprotein cholesterol (HDL-C) to predict persistent OF.
Methods: Between January 2011 and September 2016, a total of 102 adult AP patients with organ failure, local complications, or deterioration of pre-existing comorbid disease during hospitalization were included in this study retrospectively. Serum lipids were tested, and their correlations with clinical outcomes and scoring systems were computed. The AUCs to predict persistent OF were also calculated and compared with each other.
Results: Serum APO A-I and HDL-C levels were negatively associated with scoring systems. Meanwhile, serum lipids were negatively correlated with poor clinical outcomes. The AUCs of APO A-I, HDL-C, the combination of APO A-I and BISAP, or the combination of APO A-I and MCTSI to predict persistent OF among Moderately severe acute pancreatitis (MSAP) and Severe acute pancreatitis (SAP) patients were 0.886, 0.811, 0.912, and 0.900 or among those with organ failure were 0.915, 0.859, 0.933, and 0.933, respectively.
Conclusions: The concentrations of APO A-I and HDL-C, and the combinations of APO A-I with scoring systems, have high predictive value for persistent OF.
abstract_id: PUBMED:34305014
Dynamic nomogram for persistent organ failure in acute biliary pancreatitis: Development and validation in a retrospective study. Background: Persistent organ failure (POF) increases the risk of death in patients with acute biliary pancreatitis (ABP). Currently, there is no early risk assessment tool for POF in patients with ABP.
Aims: To establish and validate a dynamic nomogram for predicting the risk of POF in ABP.
Methods: This was a retrospective study of 792 patients with ABP, with 595 cases in the development group and 197 cases in the validation group. Least absolute shrinkage and selection operator regression screened the predictors of POF, and logistic regression established the model (P < 0.05). A dynamic nomogram showed the model. We evaluated the model's discrimination, calibration, and clinical effectiveness; used the bootstrap method for internal validation; and conducted external validation in the validation group.
Results: Neutrophils, haematocrit, serum calcium, and blood urea nitrogen were predictors of POF in ABP. In the development group and validation group, the areas under the receiver operating characteristic curves (AUROCs) were 0.875 and 0.854, respectively, and the Hosmer-Lemeshow test (P > 0.05) and calibration curve showed good consistency between the actual and prediction probability. Decision curve analysis showed that the dynamic nomogram has excellent clinical value.
Conclusion: This dynamic nomogram helps with the early identification and screening of high-risk patients with POF in ABP.
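The two-step workflow described in the Methods above (LASSO screening followed by logistic regression) can be sketched as below. This is a generic illustration with simulated data, not the authors' code; the predictor names mirror those reported in the abstract but the values are made up.

import numpy as np
from sklearn.linear_model import LogisticRegression, LogisticRegressionCV

# Simulated candidate predictors, including two pure-noise variables.
# "bun" stands in for blood urea nitrogen.
rng = np.random.default_rng(3)
names = ["neutrophils", "hematocrit", "calcium", "bun", "noise1", "noise2"]
n = 600
X = rng.normal(size=(n, len(names)))
p = 1 / (1 + np.exp(-(X[:, 0] + 0.8 * X[:, 1] - 0.9 * X[:, 2] + 0.7 * X[:, 3] - 1)))
pof = rng.binomial(1, p)   # persistent organ failure (0/1)

# Step 1: LASSO (L1-penalized) logistic regression with cross-validated penalty
# strength, used to screen which candidate predictors to keep.
lasso = LogisticRegressionCV(penalty="l1", solver="liblinear", Cs=10, cv=5).fit(X, pof)
kept = [i for i, c in enumerate(lasso.coef_[0]) if abs(c) > 1e-6]
print("Retained predictors:", [names[i] for i in kept])

# Step 2: refit an (effectively) unpenalized logistic model on the retained
# predictors; its linear predictor is what a nomogram then visualizes.
final = LogisticRegression(C=1e6).fit(X[:, kept], pof)
print("Coefficients:", dict(zip([names[i] for i in kept], final.coef_[0].round(2))))

Discrimination (AUROC), calibration (for example the Hosmer-Lemeshow test), and external validation, as reported in the abstract, would then be assessed on the fitted model.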
abstract_id: PUBMED:32782957
Hemoconcentration is associated with early faster fluid rate and increased risk of persistent organ failure in acute pancreatitis patients. Background: Controversies existed surrounding the use of hematocrit to guide early fluid therapy in acute pancreatitis (AP). The association between hematocrit, early fluid therapy, and clinical outcomes in ward AP patients needs to be investigated.
Methods: Data from prospectively maintained AP database and retrospectively collected details of fluid therapy were analyzed. Patients were stratified into three groups: Group 1, hematocrit < 44% both at admission and at 24 h thereafter; Group 2: regardless of admission level, hematocrit increased and >44% at 24 h; Group 3: hematocrit >44% on admission and decreased thereafter during first 24 h. "Early" means first 24 h after admission. Baseline characteristics, early fluid rates, and clinical outcomes of the three groups were compared.
Results: Among the 628 patients, Group 3 had a higher hematocrit level, greater baseline predicted severity, faster fluid rate, and more fluid volume in the first 24 h compared with Group 1 or 2. Group 3 had an increased risk for persistent organ failure (POF; odds ratio 2, 95% confidence interval [1.1-3.8], P = 0.03) compared with Group 1 after adjusting for difference in baseline clinical severity scores, there was no difference between Group 2 and Group 3 or Group 1. Multivariate regression analyses revealed that hemoconcentration and early faster fluid rate were risk factors for POF and mortality (both P < 0.05).
Conclusions: Hemoconcentration is associated with faster fluid rate and POF in ward AP patients. Randomized trials comparing standardized early fast and slow fluid management is warranted.
abstract_id: PUBMED:33987740
Occurrence and Risk Factors of Infected Pancreatic Necrosis in Intensive Care Unit-Treated Patients with Necrotizing Severe Acute Pancreatitis. Background: In patients with severe acute pancreatitis (SAP), infected pancreatic necrosis (IPN) is associated with a worsened outcome. We studied risk factors and consequences of IPN in patients with necrotizing SAP.
Methods: The study consisted of a retrospective cohort of 163 consecutive patients treated for necrotizing SAP at a university hospital intensive care unit (ICU) between 2010 and 2018.
Results: All patients had experienced at least one persistent organ failure and approximately 60% had multiple organ failure within the first 24 h from admission to the ICU. Forty-seven (28.8%) patients had IPN within 90 days. Independent risk factors for IPN were more extensive anatomical spread of necrotic collections (unilateral paracolic or retromesenteric (OR 5.7, 95% CI 1.5-21.1) and widespread (OR 21.8, 95% CI 6.1-77.8)) compared to local collections around the pancreas, postinterventional pancreatitis (OR 13.5, 95% CI 2.4-76.5), preceding bacteremia (OR 4.8, 95% CI 1.3-17.6), and preceding open abdomen treatment for abdominal compartment syndrome (OR 3.6, 95% CI 1.4-9.3). Patients with IPN had longer ICU and overall hospital lengths of stay, higher risk for necrosectomy, and higher readmission rate to ICU.
Conclusions: Wide anatomical spread of necrotic collections, postinterventional etiology, preceding bacteremia, and preceding open abdomen treatment were identified as independent risk factors for IPN.
abstract_id: PUBMED:25214968
Prevalence and risk factors of organ failure in patients with severe acute pancreatitis. Background: This study was undertaken to determine the prevalence of organ failure and its risk factors in patients with severe acute pancreatitis (SAP).
Methods: A retrospective analysis was made of 186 patients with SAP who had been hospitalized in the intensive care unit of Jinzhong First People's Hospital between March 2000 and October 2009. The patients met the diagnostic criteria of SAP set by the Surgical Society of the Chinese Medical Association in 2006. The variables collected included age, gender, etiology of SAP, the number of comorbidities, APACHE II score, contrast-enhanced CT (CECT) pancreatic necrosis, CT severity index (CTSI), abdominal compartment syndrome (ACS), the number of organ failures, and the number of deaths. The prevalence and mortality of organ failure were calculated. The variables were analyzed by unconditional multivariate logistic regression to determine the independent risk factors for organ failure in SAP.
Results: Of 186 patients, 96 had organ failure. In the 96 patients, 47 died. There was a significant association between the prevalence of organ failure and age, the number of comorbidities, APACHE II score, CECT pancreatic necrosis, CTSI, and ACS. Increases in age, the number of comorbidities, APACHE II score, and CECT pancreatic necrosis were correlated with an increased number of organ failures. Age, the number of comorbidities, APACHE II score, CECT pancreatic necrosis, CTSI, and ACS were assessed by unconditional multivariate logistic regression.
Conclusions: Organ failure occurred in 51.6% of the 186 patients with SAP. The mortality of SAP with organ failure was 49.0%. Age, the number of comorbidities, APACHE II score, CECT pancreatic necrosis, CTSI, and ACS are independent risk factors for organ failure.
abstract_id: PUBMED:31912265
Early Rapid Fluid Therapy Is Associated with Increased Rate of Noninvasive Positive-Pressure Ventilation in Hemoconcentrated Patients with Severe Acute Pancreatitis. Background/aims: Hematocrit is a widely used biomarker to guide early fluid therapy for patients with acute pancreatitis (AP), but there is controversy over whether early rapid fluid therapy (ERFT) should be used in hemoconcentrated patients. This study investigated the association of hematocrit and ERFT with clinical outcomes of patients with AP.
Methods: Data from prospectively maintained AP database and retrospectively collected fluid management details were stratified according to actual severity defined by revised Atlanta classification. Hemoconcentration and "early" were defined as hematocrit > 44% and the first 6 h of general ward admission, respectively, and "rapid" fluid rate was defined as ≥ 3 ml/kg/h. Patients were allocated into 4 groups for comparisons: group A, hematocrit ≤ 44% and fluid rate < 3 ml/kg/h; group B, hematocrit ≤ 44% and fluid rate ≥ 3 ml/kg/h; group C, hematocrit > 44% and fluid rate < 3 ml/kg/h; and group D, hematocrit > 44% and fluid rate ≥ 3 ml/kg/h. Primary outcome was rate of noninvasive positive-pressure ventilation (NPPV).
Results: A total of 912 consecutive AP patients were analyzed. ERFT has no impact on clinical outcomes of hemoconcentrated, non-severe or all non-hemoconcentrated AP patients. In hemoconcentrated patients with severe AP (SAP), ERFT was accompanied with increased risk of NPPV (odds ratio 5.96, 95% CI 1.57-22.6). Multivariate regression analyses confirmed ERFT and hemoconcentration were significantly and independently associated with persistent organ failure and mortality in patients with SAP.
Conclusions: ERFT is associated with increased rate of NPPV in hemoconcentrated patients with SAP.
abstract_id: PUBMED:34829360
Monitoring Approach of Fatality Risk Factors for Patients with Severe Acute Pancreatitis Admitted to the Intensive Care Unit. A Retrospective, Monocentric Study. Acute pancreatitis is an unpredictable disease affecting the pancreas and is characterized by a wide range of symptoms and altered laboratory tests; thus, there is a continuing struggle to classify this disease and to find risk factors associated with a worse outcome. The main objective of this study was to identify the risk factors associated with a fatal outcome in intensive care unit patients diagnosed with and admitted for severe acute pancreatitis; the secondary objective was to investigate the predictive value for death of different inflammatory markers at the time of hospital admission. This retrospective study included all the patients with a diagnosis of acute pancreatitis admitted to the Intensive Care Unit of the Emergency County Hospital Timisoara between 1 January 2016 and 31 May 2021. The study included 53 patients diagnosed with severe acute pancreatitis, of whom 21 (39.6%) survived and 32 (60.4%) died. For the neutrophils/lymphocytes ratio, a cut-off value of 12.4 was found. When analyzing age, we found that age above 52 years predicted mortality, and for the platelets/lymphocytes ratio, a cut-off value of 127 was found. Combining the three factors yielded a new model for predicting mortality with increased performance (AUROC = 0.95, p < 0.001). Multiple persistent organ failure, age over 50, higher values of C-reactive protein, and surgery were risk factors for death in the patients with severe acute pancreatitis admitted to the intensive care unit. The model built from the neutrophils/lymphocytes ratio, platelets/lymphocytes ratio, and age proved to be the best at predicting mortality in severe acute pancreatitis.
abstract_id: PUBMED:31997054
Superoxide Dismutase Predicts Persistent Circulation Failure and Mortality in the Early Stage of Acute Pancreatitis. Objectives: Oxidative stress is an important event in the pathogenesis of acute pancreatitis. Superoxide dismutase is a major antioxidant enzyme in the body. The aim of this study was to investigate the changes in superoxide dismutase activity early in the onset of acute pancreatitis and its value in predicting the risk of organ failure and mortality.
Methods: Data for 2549 patients hospitalized from 2013 to 2017 were extracted from the prospective database, and we selected 854 adult patients who were admitted within 24 h of disease onset with complete data. Serum superoxide dismutase activities on the first, second, and third days of hospital admission for patients with different severities, organ failure, and mortality were compared. The areas under the curve for the prediction of organ failure, pancreatic necrosis, and mortality were estimated using receiver operating characteristic curves.
Results: Among the 854 adult patients, superoxide dismutase activities were significantly different among patients with mild acute pancreatitis, moderately severe acute pancreatitis, and severe acute pancreatitis (P = 0.005). Superoxide dismutase activity was significantly decreased in patients with persistent renal failure (77.8 ± 37.2), persistent circulatory failure (66.2 ± 14.9), and mortality (64.3 ± 16.0). The accuracy of superoxide dismutase with regard to predicting persistent circulatory failure and mortality was high, and the areas under the receiver operating characteristic curves were 0.83 and 0.84, respectively.
Conclusions: Superoxide dismutase activity was negatively correlated with the severity and clinical outcome of acute pancreatitis (AP). Superoxide dismutase activity is highly accurate at predicting persistent circulation failure and mortality in the early stage of AP.
Answer: Persistent early organ failure (EOF) is a significant determinant of outcome in patients with severe acute pancreatitis (SAP). A study found that transient or persistent EOF was present in over half of the patients with predicted SAP, and all deaths were recorded among patients who developed pancreatic necrosis. The combination of EOF and necrosis was present in most patients with a fatal outcome, indicating that persistent EOF, especially when combined with pancreatic necrosis, strongly compromises survival rates (PUBMED:18362837).
The Pancreatitis Activity Scoring System (PASS) has been shown to be an independent risk factor for persistent organ failure, poor prognosis, and in-hospital mortality in patients with SAP admitted to the ICU. PASS also demonstrated good predictive ability for persistent organ failure and in-hospital mortality, suggesting that it may be used as a clinical index to measure the activity of acute pancreatitis and evaluate disease prognosis (PUBMED:36072851).
Serum apolipoprotein A-I (APO A-I) and high-density lipoprotein cholesterol (HDL-C) levels have been found to be negatively associated with scoring systems and poor clinical outcomes, with high predictive value for persistent organ failure among patients with SAP (PUBMED:29183667).
A dynamic nomogram incorporating neutrophils, hematocrit, serum calcium, and blood urea nitrogen as predictors has been developed to help with the early identification of high-risk patients with persistent organ failure in acute biliary pancreatitis (PUBMED:34305014).
Hemoconcentration has been associated with faster fluid rate and an increased risk for persistent organ failure in patients with SAP. This suggests that hematocrit levels could be a factor in guiding early fluid therapy (PUBMED:32782957).
Risk factors for infected pancreatic necrosis (IPN) in patients with necrotizing SAP include the anatomical spread of necrotic collections, postinterventional pancreatitis, preceding bacteremia, and preceding open abdomen treatment for abdominal compartment syndrome. Patients with IPN had longer ICU stays and a higher risk for necrosectomy (PUBMED:33987740).
In summary, patients with severe acute pancreatitis who exhibit persistent early organ failure, especially when combined with pancreatic necrosis or infected pancreatic necrosis, are at high risk for poor outcomes. Scoring systems like PASS, serum biomarkers like APO A-I and HDL-C, and dynamic nomograms can be useful tools for early identification of high-risk patients and guiding clinical decision-making.
Instruction: Colorectal Cancer in Young African Americans: Is It Time to Revisit Guidelines and Prevention?
Abstracts:
abstract_id: PUBMED:27278956
Colorectal Cancer in Young African Americans: Is It Time to Revisit Guidelines and Prevention? Background & Aims: Previous studies have suggested an increase in the incidence of colorectal cancer (CRC) in young adults (younger than 50 years). Among older people, African Americans have disproportionally higher CRC incidence and mortality. We assessed whether this CRC disparity also applies to CRC diagnosed among young people.
Methods: Using the Surveillance, Epidemiology, and End Results cancer registries, a population-based cancer registry covering 25.6% of the United States' African American population, we identified patients diagnosed with CRC between the years 2000-2012. The age-adjusted rates for non-Hispanic whites (NHW), African Americans, and Asian-Pacific Islanders (API) were calculated for the age categories 20-24, 25-29, 30-34, 35-39, and 40-44.
Results: CRC age-adjusted incidence is increasing among all three racial groups and was higher for African Americans compared to NHW and API across all years 2000-2012 (P < 0.001). Stage IV CRC was higher in African Americans compared with NHW, while there was higher stage III CRC in API compared with NHWs.
Conclusion: CRC incidence is increasing among the young in all racial groups under study. This increase in the frequency of CRC also holds among young African American adults, who present with more advanced tumors than other races. While the present attention to screening seems to have decreased CRC prevalence in individuals older than 50, special attention also needs to be paid to young African American adults to counter the observed trend, as they have the highest incidence of CRC among young population groups by race/ethnicity.
abstract_id: PUBMED:31873947
Adherence to the World Cancer Research Fund/American Institute for Cancer Research cancer prevention guidelines and colorectal cancer incidence among African Americans and whites: The Atherosclerosis Risk in Communities study. Background: Adherence to the World Cancer Research Fund (WCRF)/American Institute for Cancer Research (AICR) cancer prevention recommendations is associated with colorectal cancer (CRC) risk in whites, but only 1 previous study has reported on this link in African Americans. This study assessed the association between the 2018 WCRF/AICR guidelines and CRC incidence in African Americans (26.5%) and whites (73.5%) in the Atherosclerosis Risk in Communities prospective cohort (n = 13,822).
Methods: A total of 368 incident CRC cases (268 among whites and 100 among African Americans) were identified between the baseline (1987) and 2012. A baseline adherence score was created for 7 WCRF/AICR guidelines (each contributing 0, 0.5, or 1 point to the score, with higher scores corresponding to greater adherence). Adherence scores were also categorized as tertiles (0.0-3.0, 3.5-4.0, and 4.5-7.0). Cox proportional hazards regression was used to calculate hazard ratios (HRs) and 95% confidence intervals (CIs) for the total cohort and with stratification by race.
Results: After adjustments for age, sex, race, center, smoking, education, intake of aspirin, calcium, total calories, diabetes status, and, in women, hormone replacement therapy, greater adherence was associated with decreased CRC risk. The HRs per 1-unit increment in score were 0.88 (95% CI, 0.80-0.97) for the whole cohort, 0.89 (95% CI, 0.73-1.09) for African Americans, and 0.88 (95% CI, 0.77-0.99) for whites. Similar associations between higher adherence scores and decreased cancer risk were observed for men and women and for colon cancer but not for rectal cancer.
Conclusions: Greater adherence to the cancer prevention recommendations appears to be associated with decreased CRC risk for both African Americans and whites.
abstract_id: PUBMED:32368438
Community health advisors assessing adherence to national cancer screening guidelines among African Americans in South Los Angeles. We partnered with African American churches in South Los Angeles (LA) and trained Community Health Advisors (CHAs) to assess cancer screening. The purpose of this analysis is to report adherence to national cancer screening guidelines among African Americans in South LA, to assess relationships between adherence to colorectal cancer and other cancer screening guidelines, and to explore regional differences in screening rates. Between 2016 and 2018, 44 CHAs surveyed 777 African Americans between 50 and 75 years of age. Among 420 South LA residents, 64% of men and 70% of women were adherent to colorectal cancer screening guidelines. Adherence to mammography screening guidelines was 73%. Adherence to cervical cancer screening guidelines among women 50 to 65 years of age without hysterectomy was 80%. Fifty-nine percent of men had ever discussed the Prostate Specific Antigen (PSA) test with a physician. Adherence to colorectal cancer screening guidelines was significantly higher among respondents who were adherent to other cancer screening guidelines compared to their peers who were not adherent to other cancer screening guidelines (all p < 0.05). The fact that 22% of women who were adherent to breast cancer screening, 32% of women adherent to cervical cancer screening and 16% of men who had discussed the PSA test with a physician were not adherent to colorectal cancer screening guidelines suggests that providers should redouble their efforts to review all screening guidelines with their patients and to make appropriate recommendations. Regional differences in screening rates within South Los Angeles should inform future screening promotion efforts.
abstract_id: PUBMED:31289848
Distribution of colorectal cancer in young African Americans: implications for the choice of screening test. Background: We recently reported on a left-sided predominance of colorectal cancers in the young (under age 50). Given the predilection of young African Americans for the disease, we wondered if there may be a difference in the biology of colorectal carcinogenesis between this group and Caucasians.
Objective: Compare the distribution of colorectal cancer in African American patients and Caucasians under age 50, and describe implications for screening in these groups.
Patients: Colorectal cancer patients diagnosed under the age of 50 between the years 2000 and 2016. All races other than African American and Caucasian and all patients with hereditary colon cancer or inflammatory bowel disease were excluded.
Outcome Measures: Race, age at diagnosis (five subgroups: <20, 20-29, 30-39, 40-44, and 45-49 years), and cancer location: right (cecum, ascending colon, hepatic flexure, transverse colon, splenic flexure), left (descending colon and sigmoid colon), or rectal.
Results: 759 patients were included; 695 (91.6%) were Caucasian and 64 (8.4%) were African American. Most cases were diagnosed between ages 40 and 49 (African American = 75%, Caucasian = 69.5%). Rectal cancer was most common in both races, although significantly more common in Caucasian than in African American patients (64.2% vs 39.1%). Right colon cancer was more commonly found in African Americans (37.5%) compared with Caucasians (18%) (p = 0.0002). The ratio of rectal to right-sided colon cancer in African Americans was 1:1 compared with 3.6:1 in Caucasians.
Limitations: Relatively low number of African American patients. Conclusion: The high rate of right-sided cancer in young African American patients means that they should be screened with colonoscopy. The increased incidence of right-sided cancers may represent a different biology of carcinogenesis in African Americans and deserves further study.
abstract_id: PUBMED:38211983
Guidelines for prevention and treatment of colorectal adenoma with integrated Chinese and western medicine The Guidelines for prevention and treatment of colorectal adenoma with integrated Chinese and western medicine are put forward by Nanjing University of Chinese Medicine and approved by China Association of Chinese Medicine. According to the formulation processes and methods of relevant clinical practice guidelines, the experts in clinical medicine and methodology were organized to discuss the key problems to be addressed in the clinical prevention and treatment of colorectal adenoma(CRA) and provided answers following the evidence-based medicine method, so as to provide guidance for clinical decision-making. CRA is the major precancerous disease of colorectal cancer. Although the prevention and treatment with integrated Chinese and western medicine have been applied to the clinical practice of CRA, there is still a lack of high-quality guidelines. Four basic questions, 15 clinical questions, and 10 outcome indicators were determined by literature research and Delphi questionnaire. The relevant randomized controlled trial(RCT) was retrieved from CNKI, Wanfang, VIP, SinoMed, PubMed, EMbase, Cochrane Library, Web of Science, and 2 clinical trial registries, and finally several RCTs meeting the inclusion criteria were included. The data extracted from the RCT was imported into RevMan 5.3 for evidence synthesis, and the evidence was evaluated based on the Grading of Recommendations, Assessment, Development, and Evaluations(GRADE). The final recommendations were formed by the nominal group method based on the evidence summary table. The guidelines involve the diagnosis, screening, treatment with integrated Chinese and western medicine, prevention, and follow-up of colorectal adenoma, providing options for the clinical prevention and treatment of CRA.
abstract_id: PUBMED:27815663
Measuring Factors Associated with Colorectal Cancer Screening among Young Adult African American Men: A Psychometric Study. The Male Role Norms, Knowledge, Attitudes, and Perceptions associated with Colorectal Cancer Screening (MKAP-CRCS) survey was developed to assess the attitudes, knowledge, male role norms, perceived barriers, and perceived subjective norms associated with screening for colorectal cancer (CRC) among young adult African American men. There is a critical need for exploring the complex factors that may shape attitudes towards CRC screening among men who are younger (i.e., ages 19-45) than those traditionally assessed by clinicians and health promotion researchers (age 50 and older). Psychometrically sound measures are crucial for eliciting valid and reliable data on these factors. The current study, therefore, assessed the psychometric properties of the MKAP-CRCS instrument using an online sample of young adult African American men (N = 157) across the United States. Exploratory principal component factor analyses revealed that the MKAP-CRCS measure yielded construct valid and reliable scores, suggesting that the scale holds promise as an appropriate tool for assessing factors associated with CRC screening among younger African American men. Strengths and limitations of this study, along with directions for future research are discussed, including the need for more research examining the relationship between masculinity and CRC screening among African American men.
abstract_id: PUBMED:38238256
Improving Concordance Between Clinicians With Australian Guidelines for Bowel Cancer Prevention Using a Digital Application: Randomized Controlled Crossover Study. Background: Australia's bowel cancer prevention guidelines, following a recent revision, are among the most complex in the world. Detailed decision tables outline screening or surveillance recommendations for 230 case scenarios alongside cessation recommendations for older patients. While these guidelines can help better allocate limited colonoscopy resources, their increasing complexity may limit their adoption and potential benefits. Therefore, tools to support clinicians in navigating these guidelines could be essential for national bowel cancer prevention efforts. Digital applications (DAs) represent a potentially inexpensive and scalable solution but are yet to be tested for this purpose.
Objective: This study aims to assess whether a DA could increase clinician adherence to Australia's new colorectal cancer screening and surveillance guidelines and determine whether improved usability correlates with greater conformance to guidelines.
Methods: As part of a randomized controlled crossover study, we created a clinical vignette quiz to evaluate the efficacy of a DA in comparison with the standard resource (SR) for making screening and surveillance decisions. Briefings were provided to study participants, which were tailored to their level of familiarity with the guidelines. We measured the adherence of clinicians according to their number of guideline-concordant responses to the scenarios in the quiz using either the DA or the SR. The maximum score was 18, with higher scores indicating improved adherence. We also tested the DA's usability using the System Usability Scale.
Results: Of 117 participants, 80 were included in the final analysis. Using the SR, the adherence of participants was rated a median (IQR) score of 10 (7.75-13) out of 18. The participants' adherence improved by 40% (relative risk 1.4, P<.001) when using the DA, reaching a median (IQR) score of 14 (12-17) out of 18. The DA was rated highly for usability with a median (IQR) score of 90 (72.5-95) and ranked in the 96th percentile of systems. There was a moderate correlation between the usability of the DA and better adherence (rs=0.4; P<.001). No differences between the adherence of specialists and nonspecialists were found, either with the SR (10 vs 9; P=.47) or with the DA (13 vs 15; P=.24). There was no significant association between participants who were less adherent with the DA (n=17) and their age (P=.06), experience with decision support tools (P=.51), or academic involvement with a university (P=.39).
Conclusions: DAs can significantly improve the adoption of complex Australian bowel cancer prevention guidelines. As screening and surveillance guidelines become increasingly complex and personalized, these tools will be crucial to help clinicians accurately determine the most appropriate recommendations for their patients. Additional research to understand why some practitioners perform worse with DAs is required. Further improvements in application usability may optimize guideline concordance further.
abstract_id: PUBMED:30020112
Colorectal cancer in young African Americans: clinical characteristics and presentations. Purpose: Colorectal cancer (CRC) is the third most common cancer in the USA, and the incidence in young adults has been increasing over the past decade. We studied the clinical characteristics and presentations of CRC in young African American (AA) adults because available data on how age and ethnicity influence its pattern of presentation is limited.
Patients And Methods: We conducted a retrospective study of 109 young adults (75 African Americans) below 50 years, who were diagnosed with CRC between 1 January 1997 and 31 December 2016. Proximal CRC was defined as lesions proximal to the splenic flexure. Independent t-tests and χ² or Fisher's exact tests were performed where appropriate to determine the differences between AA and non-AA patients.
Results: The mean age at diagnosis was 42 years (range: 20-49 years). Compared with non-AAs, AAs had more frequent proximal CRC (38.7 vs. 14.7%, P=0.003), lower hemoglobin (10.5 vs. 12.7 g/dl, P<0.001), and more frequent weight loss (21.3 vs. 2.9%, P=0.014). Non-AAs presented more frequently with rectal bleeding (52.9 vs. 32.0%, P=0.037). There was no statistically significant difference in histology, stage, grade, tumor size, or carcinoembryonic antigen level between groups. When we stratified by proximal and distal disease among patients with CRC, we found larger tumor size in distal disease, which presented more often with rectal bleeding and bowel habit changes. Proximal disease presented more often with abdominal pain and weight loss.
Conclusion: There should be a higher index of suspicion for CRC in young AA adults presenting with anemia, abdominal pain, and weight loss. Early screening colonoscopy should be advocated in AAs because of the predominance of proximal disease.
abstract_id: PUBMED:25506049
Male Role Norms, Knowledge, Attitudes, and Perceptions of Colorectal Cancer Screening among Young Adult African American Men. Racial disparities in health among African American men (AAM) in the United States are extensive. In contrast to their White counterparts, AAM have more illnesses and die younger. AAM have colorectal cancer (CRC) incidence and mortality rates 25% and 50% higher, respectively, than White men. Due to CRC's younger age at presentation and high incidence among AAM, CRC screening (CRCS) is warranted at the age of 45 rather than 50, but little is known about younger AAM's views of CRCS. Employing survey design, the purpose of the study was to describe the male role norms (MRN), knowledge, attitudes, perceived subjective norms, and perceived barriers associated with screening for CRC among a non-random sample of 157 young adult AAM (ages 19-45). Sixty-seven percent of the study sample received a passing knowledge score (85% or better), yet no significant differences were found among the three educational levels (i.e., low, medium, high). More negative attitudes toward CRCS correlated with the participants' strong perceptions of barriers, but no extremely negative or positive MRN and perceived subjective norms were found. The factors significantly associated with attitudes were family history of cancer (unsure), work status, and perceived barriers. Findings from this study provide a solid basis for developing structured health education interventions that address the salient factors shaping young adult AAM's view of CRC and early detection screening behaviors.
abstract_id: PUBMED:21360028
Attitudes to colorectal cancer screening after reading the prevention information. Some of the biggest barriers to increasing colorectal cancer screening are negative attitudes toward screening tests. These negative attitudes might be evoked by reading cancer prevention information and impede the decision to get screened. Forty-two adults aged ≥50 years completed a 12-item attitude questionnaire after reading colorectal cancer prevention information. African-Americans perceived that others had a higher cancer risk than themselves, whereas Caucasians did not show this difference. Regardless of ethnicity and adherence to screening guidelines, no participants had strong feelings of fear or embarrassment. However, non-adherent Caucasians had higher anxiety than adherent Caucasians. The degree of negativity was not associated with intention to get screened in non-adherent participants. Adequate health literacy of the participants may account for the flat responses in negative attitudes. Further research in individuals with limited health literacy is recommended. Moreover, additional education about the self-relevance of cancer risk is considered necessary to increase cancer awareness in African-Americans.
Answer: The question of whether it is time to revisit guidelines and prevention for colorectal cancer (CRC) in young African Americans is supported by several studies that highlight the unique risks and characteristics of CRC in this population.
A study using the Surveillance, Epidemiology, and End Results cancer registries found that CRC incidence is increasing among young adults in all racial groups, with African Americans having a higher incidence compared to non-Hispanic whites and Asian-Pacific Islanders. Moreover, African Americans presented with more advanced stage IV CRC. This suggests a need for special attention to young African American adults in CRC prevention efforts (PUBMED:27278956).
Another study assessing adherence to the World Cancer Research Fund/American Institute for Cancer Research cancer prevention guidelines found that greater adherence was associated with decreased CRC risk for both African Americans and whites. This indicates that adherence to prevention guidelines can be beneficial across racial groups (PUBMED:31873947).
Research involving Community Health Advisors in South Los Angeles showed that adherence to CRC screening guidelines was higher among respondents who were also adherent to other cancer screening guidelines. However, there were still significant numbers of individuals who were adherent to breast or cervical cancer screening but not to CRC screening, suggesting that providers should review all screening guidelines with their patients (PUBMED:32368438).
A study comparing the distribution of CRC in African American patients and Caucasians under age 50 found that African Americans had a higher rate of right-sided colon cancer, which has implications for screening as colonoscopy is the preferred method for detecting right-sided cancers. This difference may indicate a different biology of carcinogenesis in African Americans and underscores the importance of tailored screening approaches (PUBMED:31289848).
The psychometric study of the MKAP-CRCS instrument for assessing factors associated with CRC screening among young adult African American men highlighted the need for exploring complex factors that may shape attitudes towards CRC screening in this demographic (PUBMED:27815663).
Lastly, a study on the clinical characteristics and presentations of CRC in young African American adults found that they had more frequent proximal CRC, lower hemoglobin, and more frequent weight loss compared to non-African Americans. This suggests that early screening colonoscopy should be advocated in African Americans due to the predominance of proximal disease (PUBMED:30020112).
Instruction: Does mastering have an effect on disability pensioning independent of health, and may it explain divides of education in the Oslo Health Survey?
Abstracts:
abstract_id: PUBMED:17454919
Does mastering have an effect on disability pensioning independent of health, and may it explain divides of education in the Oslo Health Survey? Aims: Do mastering abilities have an independent effect on disability pensioning? Are some of the large divides across education explained by mastering factors when adjusted for health?
Methods: Data from the National Insurance Administration were linked to a health survey of 10,720 Oslo inhabitants aged 40, 45, and 60 years, representing 46% of the total population in these age groups invited to the health survey in 2000-01. Measures of general health, chronic somatic diseases, psychiatric symptoms, and musculoskeletal pain were self-reported. The Generalized Self Efficacy test was used to measure mastering.
Results: 10.5% of our eligible sample had a disability pension at the time of the survey. The risk was more than five times higher for those with primary school than for those with university education, and 50% higher for women than for men. The lowest score on the Generalized Self Efficacy test (poor mastering) had an age-, gender-, and health-adjusted OR of 2.4 compared with the highest level of mastering. Adjusting for mastering lowered the educational divide, but not the gender divide, when health indicators were taken into consideration. Those reporting poor general health had a seven times higher risk than those with good health, and those with a chronic somatic disease, musculoskeletal pain, or poor psychiatric health had a somewhat lower increase in risk of disability pension. Health measures did reduce the impact of education, but not of gender, when adjusted for mastering.
Conclusion: Poor mastering was associated with disability pensioning, and reduced the differences across educational level and health but not across gender.
abstract_id: PUBMED:19535405
Disability pensioning: the gender divide can be explained by occupation, income, mental distress and health. Background: We aimed to test the hypothesis that gender divide in disability pensioning is attributable to differences in health, mental distress, occupation, and income.
Methods: In a health survey between 2000 and 2001, a total of 11,072 (48.7%) of all Oslo inhabitants aged 40, 45, 59, and 60 years participated. Survey data were linked to data from the National Insurance Administration and Statistics Norway for 10,421 of the participants, and 9,195 of those were eligible to receive disability pension at the end of 2000. Occupation, general health, and mental distress were self-reported, while income was obtained from official statistics.
Results: Approximately 5% of the eligible sample received a disability pension during the four years following the health survey. The age-adjusted odds of receiving a disability pension were greater for women (odds ratio = 1.41) than for men. Self-reported health significantly contributed to the risk of receiving a pension, and seemed to reduce the imbalance in disability rates between the genders, as did adjusting for level of mental distress. Further adjustment for occupation and working conditions reduced the gender divide to an insignificant level, and the inclusion of income level (income three years prior to pensioning) completely eliminated any gender difference in risk of receiving a pension.
Conclusions: Gender differences in disability pensioning in Oslo are attributable to women's poorer self-reported health, greater levels of mental distress, lower wages, and more unfavourable working conditions such as job strain and less control over work.
abstract_id: PUBMED:19346283
Disability pensioning: can ethnic divides be explained by occupation, income, mental distress, or health? Background: We aimed to test the hypothesis that differences in disability pensioning among different ethnic groups were attributable to differences in occupation, income, health, and mental distress.
Methods: In a health survey conducted between 2000 and 2001 in Oslo, nearly half (48.7%; 11,072) of all inhabitants aged 40, 45 and 59-60 years participated. Survey data related to work, general health and mental distress were linked to disability pension data from the National Insurance Administration, and to income and country of origin data from Statistics Norway. A total of 9195 persons were eligible for disability pension at the end of 2000.
Results: Approximately 5% received a disability pension in the 4 years following the health survey. An age- and gender-adjusted odds ratio of 2.27 (95% confidence interval (CI) 1.55-3.23) among immigrants from developing countries and Eastern Europe as compared to ethnic Norwegians was reduced to 0.88 (95% CI 0.46-1.67) after adjusting for occupation, working conditions, and income. The odds ratio was further reduced to 0.63 (95% CI 0.32-1.25) when self-reported health and mental distress were added to the model.
Conclusions: The higher risk of receiving a disability pension among immigrants from developing countries and Eastern Europe than among ethnic Norwegians was largely explained by work factors and level of income. The addition of mental distress and self-reported health to the multivariate model further reduced the risk, although not significantly different from ethnic Norwegians.
abstract_id: PUBMED:22314253
Educational inequalities in disability pensioning - the impact of illness and occupational, psychosocial, and behavioural factors: The Nord-Trøndelag Health Study (HUNT). Aims: Socioeconomic inequalities in disability pensioning are well established, but we know little about the causes. The main aim of this study was to disentangle educational inequalities in disability pensioning in Norwegian women and men.
Methods: The baseline data consisted of 32,948 participants in the Norwegian Nord-Trøndelag Health Study (1995-97), 25-66 years old, without disability pension, and in paid work. Additional analyses were made for housewives and unemployed/laid-off persons. Information on the occurrence of disability pension was obtained from the National Insurance Administration database up to 2008. Data analyses were performed using Cox regression.
Results: We found considerable educational inequalities in disability pensioning, and the incidence proportion by 2008 was higher in women (25-49 years 11%, 50-66 years 30%) than men (25-49 years 6%, 50-66 years 24%). Long-standing limiting illness and occupational, psychosocial, and behavioural factors were not sufficient to explain the educational inequalities: young men with primary education had a hazard ratio of 3.1 (95% CI 2.3-4.3) compared to young men with tertiary education. The corresponding numbers for young women were 2.7 (2.1-3.1). We found small educational inequalities in the oldest women in paid work and no inequalities in the oldest unemployed/laid-off women and housewives.
Conclusions: Illness and occupational, psychosocial, and behavioural factors explained some of the educational inequalities in disability pensioning. However, considerable inequalities remain after accounting for these factors. The higher incidence of disability pensioning in women than in men and the small or non-existent educational inequalities in the oldest women call for a gender perspective in future research.
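For readers unfamiliar with the Cox regression mentioned in the Methods, the following is a minimal sketch of how such an analysis can be run with the lifelines library. The data frame, column names, censoring time, and effect sizes are invented for illustration and are not the HUNT data.

```python
# Minimal Cox proportional hazards sketch on synthetic data (not the HUNT cohort).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 500
primary_edu = rng.binomial(1, 0.4, size=n)   # 1 = primary education only (assumed coding)
age = rng.uniform(25, 66, size=n)

# Synthetic follow-up times and disability-pension events, for demonstration only.
hazard = 0.02 * np.exp(1.0 * primary_edu + 0.03 * (age - 45))
time_to_event = rng.exponential(1 / hazard)
follow_up = np.minimum(time_to_event, 13.0)   # administrative censoring after ~13 years
event = (time_to_event <= 13.0).astype(int)

df = pd.DataFrame({"follow_up": follow_up, "event": event,
                   "primary_edu": primary_edu, "age": age})
cph = CoxPHFitter()
cph.fit(df, duration_col="follow_up", event_col="event")
cph.print_summary()   # hazard ratios are exp(coef)
```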
abstract_id: PUBMED:35480151
The Role of Health Education in Vaccination Nursing. The role of health education in vaccination is very important. Through various forms of activities, comprehensive and systematic health education can make students and others aware of vaccination and encourage them to cooperate actively with vaccination work. This article therefore undertakes an in-depth study of the role of health education in prevention and treatment, with the aim of enhancing people's awareness of vaccination. It mainly uses a questionnaire survey, interviews, and a controlled experiment to analyze the effect of health education. The questionnaire respondents were students, parents, and staff; their views on the role of health education in vaccination were mixed, but overall positive. Of 500 respondents, 239 believed that health education can reduce allergic reactions and improve the effectiveness of vaccines. In the controlled experiment, after health education and publicity training, parents in the observation group performed significantly better than those in the control group in mastering knowledge about vaccination, successfully completing unplanned vaccinations, and related measures. This shows that health education plays an essential role in vaccination care.
abstract_id: PUBMED:37140055
Student-driven disability advocacy and education within the health professions: pilot survey results from a single-day virtual conference. Background: Health professional programs can promote equitable healthcare delivery but few programs include disability in these efforts. Limited opportunities exist for health professional students to engage with disability education within the classroom or beyond. The Disability Advocacy Coalition in Medicine (DAC Med) is a national interprofessional student-led organization which hosted a virtual conference for health professional students in October 2021. We describe the impact of this single-day virtual conference on learning and the current state of disability education across health professional programs.
Methods: This cross-sectional study utilized a 17-item post-conference survey. A 5-point Likert scale-based survey was distributed to conference registrants. Survey parameters included background in disability advocacy, curricular exposure to disability, and impact of the conference.
Results: Twenty-four conference attendees completed the survey. Participants were enrolled in audiology, genetic counseling, medical, medical scientist, nursing, prosthetics and orthotics, public health, and 'other' health programs. Most participants (58.3%) reported not having a strong background in disability advocacy before the conference, with 26.1% indicating they learned about ableism in their program's curriculum. Almost all students (91.6%) attended the conference to learn how to be a better advocate for patients and peers with disabilities, and 95.8% reported that the conference provided this knowledge. Eighty-eight percent of participants agreed that they acquired additional resources to better care for patients with disabilities.
Conclusions: Few health professional students learn about disability in their curriculum. Single-day virtual, interactive conferences are effective in providing advocacy resources and empowering students to employ them.
abstract_id: PUBMED:29067687
Relationship Between Ideal Cardiovascular Health and Disability in Older Adults: The Chilean National Health Survey (2009-10). This study aimed to examine the relationship between disability and the American Heart Association metric of ideal cardiovascular health (CVH) in older adults from the 2009-10 Chilean National Health Survey. Data from 460 older adults were analyzed. All subjects were interviewed using the standardized World Health Survey, which includes 16 health-related questions and assesses the domains of mobility, self-care, pain and discomfort, cognition, interpersonal activities, vision, sleep and energy, and affect. A person who responds with a difficulty rating of severe, extreme, or unable to do in at least one of these eight functioning domains is considered to have a disability. Ideal CVH was defined as meeting the ideal levels of four behaviors (smoking, body mass index, physical activity, diet adherence) and three factors (total cholesterol, fasting glucose, blood pressure). Logistic regression analysis suggested that ideal physical activity reduces the odds of disability (odds ratio (OR) = 0.55, 95% confidence interval (CI) = 0.36-0.85). Moreover, participants with intermediate (3-4 metrics) (OR = 0.63, 95% CI = 0.41-0.97) and ideal (5-7 metrics) (OR = 0.51, 95% CI = 0.24-0.97) CVH profiles had lower odds of disability independent of history of vascular events and arthritis disease than those with a poor profile (0-2 metrics). In conclusion, despite the cross-sectional design, this study suggests the importance of promoting ideal CVH because of their relationship with disability.
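As a rough illustration of how odds ratios and 95% confidence intervals of the kind reported here are obtained from a logistic regression, the snippet below uses statsmodels on synthetic data. The variable names, codings, and effect sizes are assumptions for demonstration and do not correspond to the Chilean survey data.

```python
# Illustrative logistic regression producing odds ratios with 95% CIs (synthetic data).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 460
ideal_pa = rng.binomial(1, 0.3, size=n)    # ideal physical activity (assumed coding)
vascular = rng.binomial(1, 0.15, size=n)   # history of vascular events (assumed coding)

# Synthetic disability outcome, for demonstration only.
p = 1 / (1 + np.exp(-(-0.5 - 0.6 * ideal_pa + 0.8 * vascular)))
disability = rng.binomial(1, p)

X = sm.add_constant(pd.DataFrame({"ideal_pa": ideal_pa, "vascular": vascular}))
fit = sm.Logit(disability, X).fit(disp=False)
odds_ratios = np.exp(fit.params)
conf_int = np.exp(fit.conf_int())   # 95% CI on the odds-ratio scale
print(pd.concat([odds_ratios.rename("OR"),
                 conf_int.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```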
abstract_id: PUBMED:28108154
Collaborative design of a health care experience survey for persons with disability. Background: When assessing results of health care delivery system reforms targeting persons with disability, quality metrics must reflect the experiences and perspectives of this population.
Objective: For persons with disability and researchers to develop collaboratively a survey that addresses critical quality questions about a new Massachusetts health care program for persons with disability dually-eligible for Medicare and Medicaid.
Methods: Persons with significant physical disability or serious mental health diagnoses participated fully in all research activities, including co-directing the study, co-moderating focus groups, performing qualitative analyses, specifying survey topics, cognitive interviewing, and refining survey language. Several sources informed survey development, including key informant interviews, focus groups, and cognitive testing.
Results: We interviewed 18 key informants from key stakeholder groups, including disability advocates, health care providers, and governmental agencies. We conducted 12 total English- and Spanish-language focus groups involving 87 participants (38 with physical disability, 49 with mental health diagnoses). Although some details differed, focus group findings were similar across the two disability groups. Analyses by collaborators with disability identified 29 questions for persons with physical disability and 38 for persons with mental health diagnoses. After cognitive testing, the final survey includes questions on topics ranging from independent living principles to health care delivery system concerns.
Conclusions: The Persons with Disabilities Quality Survey (PDQ-S) captures specific quality concerns of Massachusetts residents with physical or mental health disability about an integrated health plan. PDQ-S requires further testing elsewhere to determine its value for quality assessment more generally and to other populations with disability.
abstract_id: PUBMED:2024251
Disability pensions and long-term sick-listing in Oslo--a study of different city districts. In Oslo there are significant differences in the incidence and prevalence of medically determined disability pensions. In regions with a population characterized by a low level of education and low annual income, the prevalence of disability pensions is high compared with regions with a highly educated, high-income population. The main reasons for a disability pension are mental complaints, mainly anxiety and depressive states, and musculo-skeletal disorders (chronic states of muscular pain). In recent years, subjective symptoms have become an increasingly important reason for disability pensions. Social and economic factors, the possibility of employment, and other non-medical factors seem to play a more decisive role than ill health in the process leading to disability.
abstract_id: PUBMED:37148284
Disability competency training in medical education. Purpose: Lack of health care providers' knowledge about the experience and needs of individuals with disabilities contributes to the health care disparities experienced by people with disabilities. Using the Core Competencies on Disability for Health Care Education, this mixed-methods study aimed to explore the extent to which the Core Competencies are addressed in medical education programs and the facilitators of and barriers to expanding curricular integration.
Method: A mixed-methods design with an online survey and individual qualitative interviews was used. The online survey was distributed to U.S. medical schools. Semi-structured qualitative interviews were conducted via Zoom with five key informants. Survey data were analyzed using descriptive statistics. Qualitative data were analyzed using thematic analysis.
Results: Fourteen medical schools responded to the survey. Many schools reported addressing most of the Core Competencies. The extent of disability competency training varied across medical programs, with the majority showing limited opportunities for in-depth understanding of disability. Most schools had some, although limited, engagement with people with disabilities. Having faculty champions was the most frequent facilitator, and lack of time in the curriculum was the most significant barrier to integrating more learning activities. Qualitative interviews provided more insight into the influence of curricular structure and time and the importance of faculty champions and resources.
Conclusions: Findings support the need for better integration of disability competency training woven throughout medical school curriculum to encourage in-depth understanding about disability. Formal inclusion of the Core Competencies into the Liaison Committee on Medical Education standards can help ensure that disability competency training does not rely on champions or resources.
Answer: Yes, mastering abilities have an independent effect on disability pensioning, and this may explain some of the divides across education levels observed in the Oslo Health Survey. The study linked data from the National Insurance Administration to a health survey of 10,720 Oslo inhabitants and found that poor mastering, as measured by the Generalized Self Efficacy test, was associated with disability pensioning. Those with the lowest score on the mastering test had an age-, gender-, and health-adjusted odds ratio (OR) of 2.4 for disability pensioning compared with those with the highest level of mastering. Adjusting for mastering reduced the educational divide in disability pensioning when health indicators were considered, although it did not affect the gender divide. This suggests that mastering abilities can have an effect on disability pensioning independent of health and may partially explain the educational divides in disability pensioning (PUBMED:17454919).
Instruction: Does experience in hysteroscopy improve accuracy and inter-observer agreement in the management of abnormal uterine bleeding?
Abstracts:
abstract_id: PUBMED:27129547
Does experience in hysteroscopy improve accuracy and inter-observer agreement in the management of abnormal uterine bleeding? Background: Hysteroscopic reliability may be influenced by the experience of the operator and by a lack of morphological diagnostic criteria for endometrial malignant pathologies. The aim of this study was to evaluate the diagnostic accuracy and the inter-observer agreement (IOA) in the management of abnormal uterine bleeding (AUB) among different experienced gynecologists.
Methods: Each gynecologist, without any other clinical information, was asked to evaluate the anonymous video recordings of 51 consecutive patients who underwent hysteroscopy and endometrial resection for AUB. Expert (>500 hysteroscopies), senior (20-499 procedures), and junior (≤19 procedures) gynecologists were asked to judge the endometrial macroscopic appearance (benign, suspicious, or frankly malignant). They also had to propose the histological diagnosis (atrophic or proliferative endometrium; simple, glandulocystic, or atypical endometrial hyperplasia; or endometrial carcinoma). Observers were free to indicate when the quality of the recordings was not good enough for adequate assessment. IOA (k coefficient), sensitivity, specificity, predictive value, and likelihood ratio were calculated.
Results: Five expert, five senior, and six junior gynecologists were involved in the study. Considering endometrial cancer and endometrial atypical hyperplasia, sensitivity and specificity were, respectively, 55.5% and 84.5% for juniors, 66.6% and 81.2% for seniors, and 86.6% and 87.3% for experts. Concerning endometrial macroscopic appearance, IOA was poor for juniors (k = 0.10) and fair for seniors and experts (k = 0.23 and 0.22, respectively). In predicting the histological diagnosis, IOA was poor for juniors and experts (k = 0.18 and 0.20, respectively) and fair for seniors (k = 0.30).
Conclusions: Sensitivity improves with the observer's experience, but inter-observer agreement and reproducibility of hysteroscopy for endometrial malignancies are not satisfying no matter the level of expertise. Therefore, an accurate and complete endometrial sampling is still needed.
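To make the metrics used in studies of this type concrete, the sketch below computes sensitivity, specificity, and the chance-corrected kappa coefficient from a small set of invented ratings; the labels and values are hypothetical and do not reproduce the study's recordings or results.

```python
# Hypothetical example: diagnostic accuracy and inter-observer agreement for two raters.
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

# 1 = atypical hyperplasia / carcinoma judged present, 0 = judged benign (invented labels).
histology = np.array([1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0])
rater_a   = np.array([1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0])
rater_b   = np.array([1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0])

tn, fp, fn, tp = confusion_matrix(histology, rater_a).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
kappa = cohen_kappa_score(rater_a, rater_b)   # chance-corrected agreement between the raters
print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} kappa={kappa:.2f}")
```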
abstract_id: PUBMED:26422912
Interobserver diagnostic agreement on digital images of hysteroscopic studies. Background: Hysteroscopic studies are among the most widely used methods to examine the uterine cavity in patients presenting with abnormal uterine bleeding as well as in patients with infertility. The use of hysteroscopic studies during reproductive cycles has increased pregnancy success rates. To date, there are few studies evaluating the inter-observer agreement in the diagnosis of different uterine pathologies using hysteroscopic studies.
Objective: To evaluate the inter-observer agreement in the diagnosis of uterine pathology when using digitalized images in hysteroscopic studies made by residents of gynecological endoscopy.
Materials And Methods: A cross-sectional, descriptive, observational study was conducted including 28 images from hysteroscopic studies selected by at least two of three experts in hysteroscopy, who determined that the images were adequate for forming a diagnostic impression. From a total of four residents, two were selected by randomized sampling. The images were shown to each resident in randomized order and diagnostic agreement was evaluated. The kappa test was used to evaluate interobserver agreement, with 95% confidence intervals.
Results: The interobserver agreement obtained by the kappa test for the diagnosis of different uterine pathologies was: normal uterine cavity (κ = 0.81, 95% CI 0.56-1.00), endometrial polyp (κ = 0.71, 95% CI 0.33-1.00), submucous myoma (κ = 0.71, 95% CI 0.33-1.00), intrauterine adhesions (κ = 0.84, 95% CI 0.52-1.00), uterine septum (κ = 0.76, 95% CI 0.43-1.00), and endometrial hyperplasia or potential endometrial cancer (κ = 0.87, 95% CI 0.61-1.00).
Conclusions: The interobserver agreement using digitalized images in the diagnosis of different uterine pathologies from hysteroscopic studies, made by residents of endoscopic surgery, was high to very high in all cases.
abstract_id: PUBMED:22461338
Detection of intracavitary uterine pathology using offline analysis of three-dimensional ultrasound volumes: interobserver agreement and diagnostic accuracy. Objective: To estimate the diagnostic accuracy and interobserver agreement in predicting intracavitary uterine pathology at offline analysis of three-dimensional (3D) ultrasound volumes of the uterus.
Methods: 3D volumes (unenhanced ultrasound and gel infusion sonography with and without power Doppler, i.e. four volumes per patient) of 75 women presenting with abnormal uterine bleeding at a 'bleeding clinic' were assessed offline by six examiners. The sonologists were asked to provide a tentative diagnosis. A histological diagnosis was obtained by hysteroscopy with biopsy or operative hysteroscopy. Proliferative, secretory or atrophic endometrium was classified as 'normal' histology; endometrial polyps, intracavitary myomas, endometrial hyperplasia and endometrial cancer were classified as 'abnormal' histology. The diagnostic accuracy of the six sonologists with regard to normal/abnormal histology and interobserver agreement were estimated.
Results: Intracavitary pathology was diagnosed at histology in 39% of patients. Agreement between the ultrasound diagnosis and the histological diagnosis (normal vs abnormal) ranged from 67 to 83% for the six sonologists. In 45% of cases all six examiners agreed with regard to the presence/absence of intracavitary pathology. The percentage agreement between any two examiners ranged from 65 to 91% (Cohen's κ, 0.31-0.81). The Schouten κ for all six examiners was 0.51 (95% CI, 0.40-0.62), while the highest Schouten κ for any three examiners was 0.69.
Conclusion: When analyzing stored 3D ultrasound volumes, agreement between sonologists with regard to classifying the endometrium/uterine cavity as normal or abnormal as well as the diagnostic accuracy varied substantially. Possible actions to improve interobserver agreement and diagnostic accuracy include optimization of image quality and the use of a consistent technique for analyzing the 3D volumes.
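For a rough illustration of how agreement among several raters can be summarized, the snippet below computes Fleiss' kappa with statsmodels for six hypothetical examiners. Note that Fleiss' kappa is a related multi-rater statistic, not the Schouten kappa reported in the study, and the ratings are invented.

```python
# Invented example: agreement among six raters classifying cavities as normal (0) or abnormal (1).
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Rows = patients, columns = the six examiners' classifications (hypothetical data).
ratings = np.array([
    [1, 1, 1, 1, 0, 1],
    [0, 0, 0, 0, 0, 0],
    [1, 0, 1, 1, 1, 0],
    [0, 0, 1, 0, 0, 0],
    [1, 1, 1, 1, 1, 1],
    [0, 1, 0, 0, 1, 0],
])
table, _ = aggregate_raters(ratings)   # per-patient counts of each category
print(f"Fleiss' kappa: {fleiss_kappa(table):.2f}")
```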
abstract_id: PUBMED:8981335
Diagnostic accuracy of hysteroscopy in endometrial hyperplasia. Objectives: To determine the diagnostic accuracy of hysteroscopy in the diagnosis of endometrial hyperplasia in women with abnormal uterine bleeding.
Methods: From 1993 through 1995, 980 women referred to our institution for abnormal uterine bleeding underwent diagnostic hysteroscopy with eye-directed biopsy of the endometrium in case of macroscopic abnormalities. Hysteroscopic features were compared with pathologic findings in order to determine the reliability of the endoscopic procedure. Statistical analysis was performed with the McNemar test.
Results: The positive predictive value of hysteroscopy in the diagnosis of endometrial hyperplasia was 63%: the hysteroscopic diagnosis of endometrial hyperplasia was confirmed at pathologic examination in 81 of 128 patients. Sensitivity and specificity of the endoscopic procedure were 98% and 95%, respectively. The negative predictive value was 99%, as only two cases of atypical hyperplasia were missed at hysteroscopy. The positive predictive value was higher in postmenopausal patients than in women of fertile age (72% vs. 58%).
Conclusions: Overall, results appear encouraging, since no case of endometrial hyperplasia was missed by hysteroscopy. The high diagnostic accuracy, associated with a minimal trauma, renders hysteroscopy the ideal procedure for both diagnosis and follow-up of conservative management of endometrial hyperplasia.
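The predictive values quoted above follow directly from a 2x2 table. The short sketch below recomputes them from counts approximately reconstructed from the reported figures (81 confirmed diagnoses out of 128 hysteroscopic calls, 2 missed cases, 980 patients); the exact cell counts are an assumption made for illustration.

```python
# Approximate 2x2 table reconstructed from the reported figures (illustrative only).
tp = 81          # hysteroscopic diagnoses of hyperplasia confirmed by pathology
fp = 128 - 81    # hysteroscopic diagnoses not confirmed by pathology
fn = 2           # atypical hyperplasias missed at hysteroscopy
tn = 980 - tp - fp - fn

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)
npv = tn / (tn + fn)
print(f"sens={sensitivity:.2f} spec={specificity:.2f} PPV={ppv:.2f} NPV={npv:.2f}")
# With these counts: sens ~ 0.98, spec ~ 0.95, PPV ~ 0.63, NPV ~ 1.00, close to the reported values.
```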
abstract_id: PUBMED:33166706
Pain management for in-office hysteroscopy. A practical decalogue for the operator. Hysteroscopy is known to be the gold standard for the evaluation of intrauterine pathologies and of pre-menopausal and post-menopausal abnormal uterine bleeding and, in addition, is a crucial examination in the infertility work-up. In-office operative hysteroscopy offers the outstanding possibility of seeing and treating an intracavitary pathology in the same examination, eliminating the risks related to anesthesia and reducing procedure-related costs. Performing operative procedures in the office setting is now recognized as feasible and safe. Over the last 20 years, many efforts have been made to implement the in-office operative approach worldwide. However, for some women, in-office hysteroscopy is still a painful experience, with discomfort reported at different steps of the hysteroscopic procedure. Moreover, unpleasant and tedious sensations might be intensified by a high level of anxiety about the examination. For this reason, despite the feasibility of the in-office approach, many clinicians are still afraid of provoking pain during the procedure and prefer not to perform surgical procedures in the office, postponing removal of the pathology to the operating room. To date, there is no consensus concerning pain management for in-office hysteroscopy, and different approaches, pharmacological and non-pharmacological aids, and several procedural tips and tricks are used. Our purpose is to provide a feasible, practical decalogue for the operator to support adequate pain management during in-office hysteroscopic procedures, allowing challenging operations to be performed while reducing discomfort and improving both the woman's and the operator's satisfaction.
abstract_id: PUBMED:11342726
Accuracy of hysteroscopy in predicting histopathology of endometrium in 1500 women. Study Objective: To estimate the accuracy of hysteroscopy in predicting endometrial histopathology.
Design: Retrospective analysis (Canadian Task Force classification II-2).
Setting: Public hospital.
Patients: One thousand five hundred women undergoing diagnostic hysteroscopy for suspected endometrial pathology, mostly because of abnormal uterine bleeding.
Interventions: Hysteroscopy and endometrial biopsy.
Measurements And Main Results: Hysteroscopy imaging was matched with histology. Functional, dysfunctional, and atrophic endometrium were considered normal findings; endometritis, endometrial polyps, hyperplasia, and carcinomas were considered abnormal. Sensitivity, specificity, and negative (NPV) and positive (PPV) predictive values of hysteroscopy in detecting normal or abnormal endometrium were calculated. These figures were defined to assess hysteroscopic accuracy in estimating pathologic conditions. Histology showed normal endometrium in 927 patients. Endometritis, polyps, hyperplasia, and malignancies were found in 21, 265, 185, and 102 patients, respectively. Hysteroscopy showed sensitivity, specificity, NPV, and PPV of 94.2%, 88.8%, 96.3%, and 83.1%, respectively, in predicting normal or abnormal histopathology of endometrium. Highest accuracy was in diagnosing endometrial polyps, with sensitivity, specificity, NPV, and PPV of 95.3%, 95.4%, 98.9%, and 81.7%, respectively; the worst result was in estimating hyperplasia, with respective figures of 70%, 91.6%, 94.3%, and 60.6%. All failures of hysteroscopic assessment resulted from poor visualization of the uterine cavity or from underestimation or overestimation of irregularly shaped endometrium.
Conclusion: Hysteroscopy was accurate in distinguishing between normal and abnormal endometrium. Nevertheless, better knowledge of relationship between hysteroscopic imaging and pathophysiologic states of endometrium is necessary to improve its accuracy. Endometrial sampling is recommended in all hysteroscopies showing unevenly shaped and thick endometrial mucosa or an anatomically distorted uterine cavity, and when endouterine visualization is less than optimal.
abstract_id: PUBMED:15232484
Outpatient hysteroscopy and ultrasonography in the management of endometrial disease. Purpose Of Review: This review is to inform the ongoing debate about the choice between ultrasound and hysteroscopy in the management of endometrial disease presenting with abnormal uterine bleeding using information provided from recently published literature.
Recent Findings: Transvaginal ultrasound measurement of endometrial thickness, using 4 or 5 mm cut-offs to define abnormality, is a good test for excluding endometrial cancer in women with postmenopausal bleeding. In contrast, hysteroscopy is a good test for detecting endometrial cancer, but less effective at excluding serious disease. The accuracy of transvaginal ultrasound in diagnosing intracavity pathology such as submucous fibroids and polyps is improved with saline instillation to levels of accuracy comparable to that of outpatient hysteroscopy. Miniaturization of hysteroscopes and ancillary instrumentation (e.g. development of bipolar intrauterine systems) has facilitated 'see and treat' outpatient hysteroscopy, so that it should no longer be considered simply an outpatient diagnostic modality. Preliminary cost-effectiveness studies have supported the use of ultrasound in the diagnosis of endometrial disease, but further, more comprehensive studies are required comparing ultrasound and outpatient hysteroscopy.
Summary: Recently published research has provided the clinician with high-quality data regarding the accuracy of ultrasound and hysteroscopy in the diagnosis of endometrial disease. Despite this, controversy remains regarding the relative roles of these uterine imaging modalities. Future research needs to be directed towards providing effectiveness and cost-effectiveness data in order to resolve the ongoing debate and guide best clinical practice.
abstract_id: PUBMED:33100465
Experience with Hysteroscopy in a Private Specialist Hospital in Nigeria. Background: Hysteroscopy is a standard method for the evaluation and treatment of various gynecological disorders. Its availability and accessibility are limited in our setting owing to resource constraints. Nevertheless, the utilization is on the increase mostly in private health institutions in Nigeria and as an adjunct in infertility management.
Objectives: The objective is to document the experience and outcome of hysteroscopy surgeries at a private specialist-assisted reproduction and endoscopy unit.
Materials And Methods: A retrospective review of all hysteroscopic procedures conducted at the unit was undertaken. Relevant sociodemographic and clinical information were extracted for analysis. In addition, outcomes of the procedure and outcome for those who eventually had in vitro fertilization (IVF) treatment were documented for analysis.
Results: A total of 106 patients had hysteroscopy over the study period. The age of patients ranged from 24 to 55 years. The most common indication for hysteroscopy was uterine synechiae (50%); others were preparation for IVF (30.2%), uterine fibroid/polyp (10.4%), and abnormal uterine bleeding (9.4%). The major finding at hysteroscopy was intrauterine adhesions (68.9%). Therapeutic adhesiolysis was performed with scissors in most cases (83%), while two patients (1.9%) had adhesiolysis and resection of a uterine polyp. One complication, noncardiogenic pulmonary edema from fluid overload, was recorded. Overall, most patients (65.1%) returned to normal menses. Thirty-nine (38.8%) women had IVF treatment after hysteroscopy, and the outcome was successful in 16 (41%) of them.
Conclusion: The utilization of hysteroscopic surgery in the management of endometrial pathologies is increasing. It offers safe and effective treatment and is a useful adjunct for improving IVF outcomes, especially for those with repeated failed treatment.
abstract_id: PUBMED:17218223
In the management of abnormal uterine bleeding, is office hysteroscopy preferable to sonography? The case for hysteroscopy. Office hysteroscopy for the diagnosis and management of abnormal uterine bleeding has developed into an easily performed procedure, with minimal discomfort and significantly reduced risks and expense. Miniaturization of instruments and safer liquid distention media, along with effective local analgesia, have made the procedure a fast, effective, and much more precise way to detect intrauterine abnormalities, as well as to better define the correct plan for any proposed operative management. In addition to the above, hysteroscopy is considered the "gold standard" for evaluating the uterine cavity. Numerous studies comparing hysteroscopy to sonography, with or without saline solution infusion, are cited, along with other studies comparing hysteroscopy to blind biopsy and curettage, all indicating that hysteroscopy is more accurate, with fewer false-positive and false-negative results.
abstract_id: PUBMED:23483793
Diagnostic hysteroscopy - a retrospective study of 1545 cases.
Introduction: The development of hysteroscopy has provided a minimally invasive approach to common gynecologic problems, such as abnormal uterine bleeding. Diagnostic hysteroscopy is now considered "the gold standard" by the Association of Professors of Gynecology and Obstetrics (2002) for the investigation of abnormal uterine bleeding (AUB) in order to rule out organic endouterine causes of AUB. Although the World Health Organization (WHO) recommends hysterosalpingography (HSG) alone for the management of infertile women, many specialists use hysteroscopy as a first-line routine exam for infertility patients regardless of guidelines.
Material And Method: This paper is a retrospective study of 1545 diagnostic hysteroscopies performed in the "Prof. Dr. Panait Sirbu" Obstetrics and Gynecology Hospital between January 1, 2008 and June 30, 2011. The following parameters were studied: diagnostic hysteroscopy indications, type of anesthesia used, and the correlation between pre- and postoperative diagnoses.
Outcomes: Of the 1545 diagnostic hysteroscopies, 78% were performed without anesthesia. Among the 299 cases of primary infertility, diagnostic hysteroscopy showed tubal obstruction and endouterine pathology in 34% of cases; among the 396 cases of secondary infertility under investigation, it showed tubal obstruction and endouterine pathology in 40% of cases. The highest accuracy of HSG was noted for uterine malformations, and the lowest accuracy was observed for intrauterine adhesions.
Conclusions: Our experience supports the opinion that diagnostic hysteroscopy should be a first-line routine exam in infertility. Because of the high rate of false-positive results for HSG in our study, and considering the other studies in the specialty literature, we always perform a diagnostic hysteroscopy before Assisted Human Reproduction procedures regardless of the HSG appearance.
Answer: Experience in hysteroscopy does appear to improve accuracy in the management of abnormal uterine bleeding (AUB). A study evaluating the diagnostic accuracy and inter-observer agreement (IOA) among gynecologists with varying levels of experience found that sensitivity and specificity were higher for experts compared to juniors and seniors. Specifically, for endometrial cancer and endometrial atypical hyperplasia, sensitivity was 86.6% for experts, 66.6% for seniors, and 55.5% for juniors. Specificity was 87.3% for experts, 81.2% for seniors, and 84.5% for juniors (PUBMED:27129547). However, despite the improvement in sensitivity with experience, the IOA and reproducibility of hysteroscopy for endometrial malignancies were not satisfactory at any level of expertise, indicating that accurate and complete endometrial sampling is still necessary (PUBMED:27129547).
In contrast, another study focusing on interobserver diagnostic agreement on digital images of hysteroscopic studies reported high and very high interobserver agreement among residents of endoscopic surgery in diagnosing different uterine pathologies (PUBMED:26422912). This suggests that when using standardized digital images, even less experienced observers can achieve a high level of agreement.
Overall, while experience seems to enhance the accuracy of hysteroscopy in managing AUB, the level of interobserver agreement may vary depending on the method of assessment and the complexity of the cases. It is important to note that regardless of experience, a thorough endometrial evaluation is essential for accurate diagnosis and management of AUB. |
Instruction: Does the neutrophil/lymphocyte ratio predict short-term mortality in acute cerebral infarct independently of infarct volume?
Abstracts:
abstract_id: PUBMED:25106834
Is neutrophil/lymphocyte ratio predict to short-term mortality in acute cerebral infarct independently from infarct volume? Background: Neutrophil/lymphocyte ratio (NLR) is associated with increased mortality in both myocardial infarction and acute ischemic stroke. It remains unclear whether NLR is a simple marker of ischemic infarct volume or an independent marker of stroke mortality. The aim of this study is to investigate the relationship of NLR with infarct volume and short-term mortality in acute ischemic stroke (AIS).
Methods: This retrospective study included 151 patients with first AIS that occurred within 24 hours of symptom onset. Patients were screened from the hospital's electronic record system by using International Classification of Diseases code (G 46.8). NLR was calculated as the ratio of neutrophils to lymphocytes. Short-term mortality was defined as 30-day mortality.
Results: A total of 20 of the 151 patients died during follow-up. Both NLR and infarct volume were significantly higher in the nonsurvivor group than in the survivor group (P < .05). Infarct volume, NLR, and National Institutes of Health Stroke Scale (NIHSS) score were independent predictors of mortality in Cox regression analysis. The optimal cutoff value for NLR as a predictor of short-term mortality was determined to be 4.81. NLR displayed a moderate correlation with both the NIHSS and the Glasgow Coma Scale (P < .01). NLR values were significantly higher in the highest infarct volume tertile than in both the lowest and middle tertiles of infarct volume (P = .001).
Conclusions: NLR at the time of hospital admission may be a predictor of short-term mortality independent of infarct volume in AIS patients. NLR should be investigated in future prospective trials of AIS.
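As a minimal illustration of how an NLR-based risk flag like the one above could be applied, the sketch below computes NLR from differential counts and compares it with the reported 4.81 cutoff. Only the cutoff value comes from the abstract; the patient records and the flagging logic are assumptions made for illustration.

```python
# Hypothetical differential counts (10^9 cells/L); only the 4.81 cutoff comes from the abstract.
patients = [
    {"id": 1, "neutrophils": 8.2, "lymphocytes": 1.4},
    {"id": 2, "neutrophils": 5.1, "lymphocytes": 2.0},
]

CUTOFF = 4.81  # reported optimal NLR cutoff for short-term mortality

for p in patients:
    nlr = p["neutrophils"] / p["lymphocytes"]
    high_risk = nlr > CUTOFF
    print(f"patient {p['id']}: NLR={nlr:.2f} high-risk={high_risk}")
```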
abstract_id: PUBMED:30737090
Platelet-to-neutrophil ratio is a prognostic marker for 90-days outcome in acute ischemic stroke. To investigate the prognostic value of the platelet-to-neutrophil ratio (PNR) in acute ischemic stroke (AIS) patients, a total of 400 AIS patients were included in this study. Demographic, clinical, and laboratory data were collected on admission, and PNR was calculated from platelet and neutrophil counts on admission. Prognosis after 3 months was evaluated by the Barthel index (BI), where BI ≤85 was defined as poor prognosis and BI >85 as good prognosis. Regression analyses were performed, adjusting for confounders. (1) The PNR level on admission was significantly lower in the poor prognosis group than in the good prognosis group (P < 0.05). (2) The difference in PNR level between the large and small infarct volume groups was not statistically significant, nor was the difference between the moderate-to-severe group and the mild group (all P > 0.05). (3) In multivariate logistic regression analysis, PNR, platelet-to-lymphocyte ratio (PLR), and platelet-to-white blood cell ratio (PWR) levels were correlated with the 3-month prognosis of AIS. PNR predicted the 3-month prognosis of acute ischemic cerebral infarction with higher accuracy than PLR and PWR, and it may be an independent protective factor for predicting the prognosis of AIS.
abstract_id: PUBMED:34442341
The Neutrophil-to-Lymphocyte and Platelet-to-Lymphocyte Ratios Predict Reperfusion and Prognosis after Endovascular Treatment of Acute Ischemic Stroke. Background: Studies assessing the prognostic effect of inflammatory markers of blood cells on the outcomes of patients with acute ischemic stroke treated with endovascular treatment (EVT) are sparse. We evaluated whether the neutrophil-to-lymphocyte ratio (NLR) and platelet-to-lymphocyte ratio (PLR) affect reperfusion status in patients receiving EVT.
Methods: Using a multicenter registry database, 282 patients treated with EVT were enrolled in this study. The primary outcome measure was unsuccessful reperfusion rate after EVT defined by thrombolysis in cerebral infarction grades 0-2a. Logistic regression analysis was performed to analyze the association between NLR/PLR and unsuccessful reperfusion rate after EVT.
Results: Both NLR and PLR were higher in the unsuccessful reperfusion group than in the successful reperfusion group (p < 0.001). Multivariate analysis showed that both NLR and PLR were significantly associated with unsuccessful reperfusion (adjusted odds ratio (95% confidence interval): NLR 1.11 (1.04-1.19), PLR 1.004 (1.001-1.01)). The receiver operating characteristic curve showed that the predictive ability of both NLR and PLR was close to good (area under the curve (AUC) of NLR: 0.63, 95% CI (0.54-0.72), p < 0.001; AUC of PLR: 0.65, 95% CI (0.57-0.73), p < 0.001). The cutoff values of NLR and PLR for unsuccessful reperfusion were 6.2 and 103.6, respectively.
Conclusion: Higher NLR and PLR were associated with unsuccessful reperfusion after EVT. The combined application of both biomarkers could be useful for predicting outcomes after EVT.
abstract_id: PUBMED:36447170
Increased neutrophil-to-lymphocyte ratio is associated with unfavorable functional outcomes in acute pontine infarction. Background: The neutrophil-to-lymphocyte ratio (NLR) is positively associated with unfavorable outcomes in patients with cerebral infarction. This study aimed to investigate the relationship between the NLR and the short-term clinical outcome of acute pontine infarction.
Methods: Patients with acute pontine infarction were consecutively included. Clinical and laboratory data were collected. All patients were followed up at 3 months using modified Rankin Scale (mRS) scores. An unfavorable outcome was defined as an mRS score ≥ 3. Receiver operating characteristic (ROC) curve analysis was used to calculate the optimal cutoff values of risk factors that may predict an unfavorable outcome after acute pontine infarction.
Results: Two hundred fifty-six patients with acute pontine infarction were included in this study. The NLR was significantly higher in the unfavorable outcome group than in the favorable outcome group (P < 0.05). Additionally, the infarct size was significantly higher in the high NLR tertile group than in the low NLR tertile group (P < 0.05). Multivariate logistic regression analysis revealed that the baseline National Institutes of Health Stroke Scale (NIHSS) score, NLR, platelet count, and fasting blood glucose (FBG) level were significantly associated with unfavorable outcomes 3 months after acute pontine infarction. The optimal cutoff value of the NLR for predicting the 3-month outcome of acute pontine infarction was 3.055. The negative and positive predictive values of NLR were 85.7% and 61.3%, respectively, and the sensitivity and specificity of NLR were 69.2% and 80.9%.
Conclusions: We found that the NLR may be an independent predictive factor for the outcome of acute pontine infarction.
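The abstracts above report "optimal" cutoff values from ROC analysis without specifying the selection rule; a common choice is to maximize Youden's J (sensitivity + specificity - 1). The sketch below shows that generic procedure on synthetic data; assuming Youden's index was used is an inference, and the values are invented.

```python
# Generic Youden-index cutoff selection on synthetic data (not the study's data).
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
# Synthetic NLR values: higher on average in the poor-outcome group.
nlr_good = rng.normal(2.5, 1.0, 200)
nlr_poor = rng.normal(4.5, 1.5, 60)
scores = np.concatenate([nlr_good, nlr_poor])
labels = np.concatenate([np.zeros(200), np.ones(60)])  # 1 = unfavorable outcome

fpr, tpr, thresholds = roc_curve(labels, scores)
youden_j = tpr - fpr
best = np.argmax(youden_j)
print(f"optimal cutoff ~ {thresholds[best]:.2f} "
      f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")
```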
abstract_id: PUBMED:33066933
Epicardial Adipose Thickness and Neutrophil Lymphocyte Ratio in Acute Occlusive Cerebrovascular Diseases. Objectives: We investigated the relationship between the severity of vascular disease and epicardial adipose tissue thickness (EAT-t) and the neutrophil/lymphocyte (NEU/LY) ratio in acute stroke patients.
Methods: Seventy-six patients and 38 healthy controls were included in the study. Strokes were divided into three groups: lacunar infarction, middle cerebral artery infarction (MCA), and other arterial infarcts. Patients were assessed using the GCS (Glasgow coma scale) and NIHSS (National Institutes of Health Stroke Scale) scales. In addition to laboratory measurements, EAT-t was evaluated in all patients by using echocardiography.
Results: The EAT-t value and NEU/LY ratio were higher in the patient group than in the control group. The MCA group had a significantly higher NEU/LY ratio than both the lacunar infarction group (p = 0.017) and the other arterial infarct group (p = 0.025). There was a positive correlation of the NIHSS score with EAT-t (r = 0.291; p = 0.013) and with the NEU/LY ratio (r = 0.289; p = 0.014).
Conclusion: The EAT-t and NEU/LY ratio were high in patients with acute ischemic stroke. The NEU/LY ratio was higher in the MCA group than in the other infarct groups. These findings support a relationship between acute ischemic stroke severity and inflammation.
abstract_id: PUBMED:18162626
Early neutrophilia is associated with volume of ischemic tissue in acute stroke. Background And Purpose: Few data exist on the relationship between differential subpopulations of peripheral leukocytes and early cerebral infarct size in ischemic stroke. Using diffusion-weighted MR imaging (DWI), we assessed the relationship of early total and differential peripheral leukocyte counts and volume of ischemic tissue in acute stroke.
Methods: All included patients had laboratory investigations and neuroimaging collected within 24 hours of stroke onset. Total peripheral leukocyte counts and differential counts were analyzed individually and by quartiles. DWI lesions were outlined using a semiautomated threshold technique. The relationship between leukocyte quartiles and DWI infarct volumes was examined using multivariate quartile regression.
Results: 173 patients met study inclusion criteria. Median age was 73 years. Total leukocyte counts and DWI volumes showed a strong correlation (Spearman rho=0.371, P<0.001). Median DWI volumes (mL) for successive neutrophil quartiles were: 1.3, 1.3, 3.2, and 20.4 (P for trend <0.001). Median DWI volumes (mL) for successive lymphocyte quartiles were: 3.2, 8.1, 1.3, and 1.5 (P=0.004). After multivariate analysis, larger DWI volume remained strongly associated with higher total leukocyte and neutrophil counts (both probability values <0.001), but not with lymphocyte count (P=0.4971). Compared with the lowest quartiles, DWI volumes were 8.7 mL and 12.9 mL larger in the highest quartiles of leukocyte and neutrophil counts, respectively.
Conclusions: Higher peripheral leukocyte and neutrophil counts, but not lymphocyte counts, are associated with larger infarct volumes in acute ischemic stroke. Attenuating neutrophilic response early after ischemic stroke may be a viable therapeutic strategy and warrants further study.
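The quartile analysis described above can be outlined as a grouped summary: cut the neutrophil counts into quartiles and take the median imaging volume within each. The data frame below is synthetic; only the analysis pattern mirrors the abstract.

```python
# Sketch of a quartile summary on synthetic data; only the analysis pattern mirrors the abstract.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
n = 173
neutrophils = rng.gamma(4.0, 1.5, n)                              # hypothetical counts, 10^9 cells/L
dwi_volume = rng.gamma(1.2, 3.0, n) * (1 + 0.15 * neutrophils)    # loosely increasing with neutrophils

df = pd.DataFrame({"neutrophils": neutrophils, "dwi_volume_ml": dwi_volume})
df["neutrophil_quartile"] = pd.qcut(df["neutrophils"], 4, labels=["Q1", "Q2", "Q3", "Q4"])
print(df.groupby("neutrophil_quartile", observed=True)["dwi_volume_ml"].median())
```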
abstract_id: PUBMED:32028265
A high neutrophil-to-lymphocyte ratio predicts hemorrhagic transformation of large atherosclerotic infarction in patients with acute ischemic stroke. Increasing evidence suggests that inflammation is associated with the development of acute ischemic stroke (AIS). The neutrophil-to-lymphocyte ratio (N/L) is an important marker of inflammation and has been highly correlated with mortality in stroke patients in recent studies. The N/L of patients who experience hemorrhagic transformation (HT) after AIS is known, but any relationship between N/L and large artery atherosclerosis (LAA) remains unclear; this is our present topic. To this end, we enrolled 185 patients with LAA-type HT in the development cohort from a prospective, consecutive, hospital-based stroke registry. We matched these patients to 213 LAA patients who did not develop HT as controls. The incidence of HT after LAA was significantly greater (P<0.01) in patients with higher N/L. We developed a predictive nomogram (incorporating age, systolic blood pressure, the National Institutes of Health Stroke Scale, and the N/L) for LAA patients. The predictive power was good (area under the curve, AUC: 0.832, 95% CI: 0.791-0.872). Our findings were further validated in a validation cohort of 202 patients with AIS attributable to LAA (AUC: 0.836, 95% CI: 0.781-0.891). In summary, a high N/L is associated with an increased risk for HT after LAA.
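A rough sketch of the kind of multivariable logistic model that underlies such a nomogram is shown below, using the four predictors named in the abstract. The data, coefficients, and modelling choices are invented stand-ins; the study's actual model-building and nomogram construction are not reproduced here.

```python
# Sketch of a multivariable logistic model like the one behind the nomogram;
# all data and coefficients below are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 400
X = np.column_stack([
    rng.normal(65, 10, n),    # age (years)
    rng.normal(150, 20, n),   # systolic blood pressure (mm Hg)
    rng.integers(0, 25, n),   # NIHSS score
    rng.gamma(2.0, 2.0, n),   # neutrophil-to-lymphocyte ratio
])
# Hypothetical outcome loosely driven by NIHSS and N/L.
logit = -5 + 0.12 * X[:, 2] + 0.35 * X[:, 3]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = LogisticRegression(max_iter=1000).fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(f"apparent AUC on the synthetic data: {auc:.3f}")
```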
abstract_id: PUBMED:37084652
Elevated neutrophil-related immune-inflammatory biomarkers in acute anterior choroidal artery territory infarction with early progression. Objective: The anterior choroidal artery territory (AChA) infarction has a high rate of progression and poor functional prognosis. The aim of the study is to search for fast and convenient biomarkers to forecast the early progression of acute AChA infarction.
Methods: We retrospectively collected 51 acute AChA infarction patients and compared the laboratory indices between early progressive and non-progressive acute AChA infarction patients. Receiver operating characteristic (ROC) curve analysis was used to determine the discriminant efficacy of indicators that showed statistical significance.
Results: The white blood cell count, neutrophil count, monocyte count, white blood cell to high-density lipoprotein cholesterol ratio, neutrophil to high-density lipoprotein cholesterol ratio (NHR), monocyte to high-density lipoprotein cholesterol ratio, monocyte to lymphocyte ratio, neutrophil to lymphocyte ratio (NLR), and hypersensitive C-reactive protein were significantly higher in acute AChA infarction than in healthy controls (P < 0.05). NHR (P = 0.020) and NLR (P = 0.006) were markedly higher in acute AChA infarction patients with early progression than in those without progression. The areas under the ROC curve of NHR, NLR, and the combination of NHR and NLR were 0.689 (P = 0.011), 0.723 (P = 0.003), and 0.751 (P < 0.001), respectively. However, there were no significant differences in predictive efficiency among NHR, NLR, and their combination (P > 0.05).
Conclusion: NHR and NLR may be significant predictors of early progression in patients with acute AChA infarction, and the combination of NHR and NLR could be a preferable prognostic marker for AChA infarction with an early progressive course in the acute stage.
abstract_id: PUBMED:32416677
Association Between Neutrophil to Lymphocyte Ratio and Malignant Brain Edema in Patients With Large Hemispheric Infarction. Introduction: Malignant brain edema (MBE) is a life-threatening complication for patients with large hemispheric infarction (LHI). Stroke-related inflammatory responses may cause secondary brain injury and lead to brain edema. The neutrophil to lymphocyte ratio (NLR) is a well-known systemic inflammatory biomarker. The aim of this study was to evaluate if NLR is associated with MBE in patients with LHI.
Methods: A retrospective analysis was performed of LHI patients within 24 h from stroke onset admitted to the Department of Neurology, West China Hospital from January 1, 2017 to December 31, 2018. Blood samples were collected upon admission. MBE was diagnosed by any neurological deterioration accompanied by brain edema in follow-up images. Patients were categorized according to NLR tertiles. Univariate analyses were performed to identify potential confounding variables and a multivariate logistic regression analysis was conducted to determine the correlation between NLR and MBE.
Results: A total of 257 patients with a mean age of 68.6 ± 14.0 years were identified. Among them, 83 (32.3%) patients developed MBE, with a median time of one day (interquartile range [IQR] 0-2 days) from hospital admission. An elevated NLR was related to an increased risk of MBE when the lowest and highest tertiles were compared (odds ratio 2.27, 95% confidence interval 1.11-4.62, p = 0.024). The risk of MBE increased with increasing NLR in a dose-dependent manner (p for trend = 0.029). No interaction between potential modifiers and NLR on MBE was observed.
Conclusions: Higher NLR was associated with an increased risk of MBE in patients with LHI.
abstract_id: PUBMED:31709500
Stroke Severity, and Not Cerebral Infarct Location, Increases the Risk of Infection. Infection is a leading cause of death in patients with stroke; however, the impact of cerebral infarct size or location on infectious outcome is unclear. To examine the effect of infarct size on post-stroke infection, we utilised the intraluminal middle-cerebral artery occlusion (MCAO) mouse model of ischemic stroke and adjusted the duration of arterial occlusion. At 1 day following stroke onset, the proportion of mice with infection was significantly greater in mice that had larger infarct sizes. Additionally, the presence of lung infection in these mice with severe strokes extended past 2 days, suggestive of long-term immune impairment. At the acute phase, our data demonstrated an inverse relationship between infarct volume and the number of circulating leukocytes, indicating that the elevated risk of infection in more severe stroke is associated with reduced cellularity in peripheral blood, owing predominantly to markedly decreased lymphocyte numbers. In addition, the stroke-induced reduction of the lymphocyte-to-neutrophil ratio was also evident in the lung of all post-stroke animals. To investigate the effect of infarct location on post-stroke infection, we additionally performed a photothrombotic (PT) model of stroke and, using an innovative systematic approach to analysis, found that the location of the cerebral infarct does not affect susceptibility to post-stroke infection, confirming the greater role of infarct volume over infarct location in susceptibility to infection. Our experimental findings were validated in a clinical setting and reinforced that stroke severity, and not infarct location, influences the risk of infection after stroke.
Answer: Yes, the neutrophil/lymphocyte ratio (NLR) is suggested to be a predictor of short-term mortality in acute cerebral infarct independently from infarct volume. A study including 151 patients with first acute ischemic stroke (AIS) found that both NLR and infarct volume were significantly higher in the group of patients who did not survive compared to those who did. Furthermore, infarct volume, NLR, and National Institutes of Health Stroke Scale (NIHSS) were identified as independent predictors of mortality in Cox regression analysis. The optimal cutoff value for NLR as a predictor for short-term mortality was determined to be 4.81, and NLR showed a moderate correlation with both NIHSS and Glasgow Coma Scale. Importantly, NLR values were significantly higher in the highest infarct volume tertile, but the study concluded that NLR at the time of hospital admission may be a predictor of short-term mortality independent of infarct volume in AIS patients (PUBMED:25106834).
This finding is consistent with other studies that have identified NLR as an important prognostic marker in AIS. For instance, another study found that a high neutrophil-to-lymphocyte ratio is associated with unfavorable functional outcomes in acute pontine infarction, with the NLR being significantly higher in the unfavorable outcome group. Multivariate logistic regression analysis in this study also revealed that NLR was significantly associated with unfavorable outcomes 3 months after acute pontine infarction (PUBMED:36447170).
Moreover, elevated NLR has been associated with increased risk for hemorrhagic transformation after large artery atherosclerosis (PUBMED:32028265) and with an increased risk of malignant brain edema in patients with large hemispheric infarction (PUBMED:32416677). These studies collectively suggest that NLR is a valuable prognostic marker in AIS that can predict outcomes independently of infarct volume. |
Instruction: The insensate foot following severe lower extremity trauma: an indication for amputation?
Abstracts:
abstract_id: PUBMED:16322607
The insensate foot following severe lower extremity trauma: an indication for amputation? Background: Plantar sensation is considered to be a critical factor in the evaluation of limb-threatening lower extremity trauma. The present study was designed to determine the long-term outcomes following the treatment of severe lower extremity injuries in patients who had had absent plantar sensation at the time of the initial presentation.
Methods: We examined the outcomes for a subset of fifty-five subjects who had had an insensate extremity at the time of presentation. The patients were divided into two groups on the basis of the treatment in the hospital: an insensate amputation group (twenty-six patients) and an insensate salvage group (twenty-nine patients), the latter of which was the group of primary interest. In addition, a control group was constructed from the parent cohort so that the patients in the study groups could be compared with patients in whom plantar sensation was present and in whom the limb was reconstructed. Patient and injury characteristics as well as functional and health-related quality-of-life outcomes at twelve and twenty-four months after the injury were compared between the subjects in the insensate salvage group and those in the other two groups.
Results: The patients in the insensate salvage group did not report or demonstrate significantly worse outcomes at twelve or twenty-four months after the injury compared with subjects in the insensate amputation or sensate control groups. Among the patients in whom the limb was salvaged (that is, those in the insensate salvage and sensate control groups), an equal proportion (approximately 55%) had normal plantar sensation at two years after the injury, regardless of whether plantar sensation had been reported to be intact at the time of admission. No significant differences were noted among the three groups with regard to the overall, physical, or psychosocial scores. At two years after the injury, only one patient in the insensate salvage group had absent plantar sensation.
Conclusions: Outcome was not adversely affected by limb salvage, despite the presence of an insensate foot at the time of presentation. More than one-half of the patients who had presented with an insensate foot that was treated with limb reconstruction ultimately regained sensation at two years. Initial plantar sensation is not prognostic of long-term plantar sensory status or functional outcomes and should not be a component of a limb-salvage decision algorithm.
abstract_id: PUBMED:37590307
Republication of "Minimally Invasive Surgery Using the Circular External Fixator to Correct Neglected Severe Stiff Equinocavus Foot Deformities". Background: Stiff equinocavus deformities of the foot are challenging to treat, often requiring extensive soft tissue dissection and bone removal. These procedures frequently yield suboptimal results and not infrequently amputation. Minimally invasive surgery using a circular external fixator potentially avoids the trauma to the soft tissue and may lead to improvement in outcomes and a lower amputation rate. The objective of this study was to evaluate the efficacy of minimally invasive surgery using a circular external fixator and limited soft tissue release to correct stiff equinocavus deformities.
Methods: The treatment outcomes of 29 patients (31 feet) with stiff equinocavus deformities of the foot and ankle treated with minimally invasive surgery and circular external fixation were reviewed after a mean follow-up period of 63 months. Patients' demographics and the cause of the deformities were recorded. Weight-bearing radiographs of the foot were compared pre- and postoperatively.
Results: Outcome was satisfactory (plantigrade foot with improvement/resolution of pain) in 21 of 31 extremities, fair in 6 of 31 extremities, and poor in 4 of 31 extremities. In the majority of patients, a significant improvement in the equinocavus deformities was achieved with a statistically significant improvement in calcaneus and navicular height. Two patients with Charcot-Marie-Tooth and severely insensate feet had a poor outcome, resulting in transtibial amputation.
Conclusion: Minimally invasive surgery with gradual correction of neglected severe stiff equinocavus deformities using a modular circular external fixator is a safe and reliable initial limb salvage strategy that may simplify secondary procedures such as arthrodesis.
Level Of Evidence: IV.
abstract_id: PUBMED:8110884
Diabetic foot ulcers: pathogenesis and management. Approximately 60,000 major lower extremity amputations annually are performed on diabetic patients in the United States. Diabetic foot ulcers are a major factor in 84% of these amputations. The ulcers develop as a result of minor trauma or callus breakdown in the insensate foot. Infection and vascular insufficiency lead to gangrene and amputation. Delay in treatment of these ulcers is a major factor leading to gangrene and amputation. The most important treatments of the ulcer are debridement to healthy bleeding tissue, proper culture and antibiotic therapy, identification of osteomyelitis, metabolic control, keeping weight off the foot, and (when indicated) peripheral arterial reconstruction to improve blood flow. Therapeutic shoes to prevent recurrence of the ulcer are extremely important in posttreatment of these ulcers. Because the management of ulcers is complicated, the team approach and consultation are frequently necessary. The most important step in prevention of foot ulcers in the diabetic is repeated patient education in foot care.
abstract_id: PUBMED:19472025
Compartment syndrome of the lower leg and foot. Compartment syndrome of the lower leg or foot, a severe complication with a low incidence, is mostly caused by high-energy deceleration trauma. The diagnosis is based on clinical examination and intracompartmental pressure measurement. The most sensitive clinical symptom of compartment syndrome is severe pain. Clinical findings must be documented carefully. A fasciotomy should be performed when the difference between compartment pressure and diastolic blood pressure is less than 30 mm Hg or when clinical symptoms are obvious. Once the diagnosis is made, immediate fasciotomy of all compartments is required. Fasciotomy of the lower leg can be performed either by one lateral incision or by medial and lateral incisions. The compartment syndrome of the foot requires thorough examination of all compartments with special focus on the calcaneal compartment. Depending on the injury, clinical examination, and compartment pressure, fasciotomy is recommended via a dorsal and/or medial plantar approach. Surgical management does not eliminate the risk of developing nerve and muscle dysfunction. When left untreated, poor outcomes with contractures, toe deformities, paralysis, and sensory neuropathy can be expected. In severe cases, amputation may be necessary.
Level Of Evidence: Level III. See Guidelines for Authors for a complete description of levels of evidence.
abstract_id: PUBMED:35986492
Revision Surgery Following Severe Frostbite Injury Compared to Similar Hand and Foot Burns. Severe frostbite is associated with loss of digits or limbs and high levels of morbidity. The current practice is to salvage as much of the limb/digit as possible with the use of thrombolytic and adjuvant therapies. Sequelae from amputation can include severe nerve pain and poor wound healing requiring revision surgery. The aim of this study was to examine the rate of revision surgery after primary amputation and compare this to revision surgery in isolated hand/foot burns. Frostbite and burn patients from 2014 to 2019 were identified in the prospectively maintained database at a single urban burn and trauma center. Patients with primary amputations related to isolated hand/foot burns or frostbite were included in the study. Descriptive statistics included Student's t-test and Fisher's exact test. A total of 63 patients, 54 frostbite injuries and 9 isolated hand or foot burns, met inclusion criteria for the study. The rate of revision surgery was similar following frostbite and burn injury (24% vs 33%, P = .681). There were no significant differences in age, sex, or length of stay on the primary hospitalization between those that required revision surgery and those that did not. Neither the impacted limb nor the presence of infection or cellulitis on primary amputation was associated with future need for revision surgery. Of the 16 patients requiring revision surgery, 5 (31%) required additional debridement alone, 6 (38%) required reamputation alone, and 5 required both. A total of 6 patients (38%) had cellulitis or infection at the time of revision surgery. Time from primary surgery to revision ranged from 4 days to 3 years. Planned, delayed primary amputation is a mainstay of frostbite management. To our knowledge, this is the first assessment of revision surgery in the setting of severe frostbite injury. Our observed rate of revision surgery following frostbite injury did not differ significantly from revision surgery in the setting of isolated hand or foot burns. This study brings up important questions of timing and surgical planning in these complex patients that will require a multicenter collaborative study.
abstract_id: PUBMED:27316920
Does obtaining an initial magnetic resonance imaging decrease the reamputation rates in the diabetic foot? Objective: Diabetes mellitus (DM) through its over glycosylation of neurovascular structures and resultant peripheral neuropathy continues to be the major risk factor for pedal amputation. Repetitive trauma to the insensate foot results in diabetic foot ulcers, which are at high risk to develop osteomyelitis. Many patients who present with diabetic foot complications will undergo one or more pedal amputations during the course of their disease. The purpose of this study was to determine if obtaining an initial magnetic resonance imaging (MRI), prior to the first amputation, is associated with a decreased rate of reamputation in the diabetic foot. Our hypothesis was that the rate of reamputation may be associated with underutilization of obtaining an initial MRI, useful in presurgical planning. This study was designed to determine whether there was an association between the reamputation rate in diabetic patients and utilization of MRI in the presurgical planning and prior to initial forefoot amputations.
Methods: Following approval by our institutional review board, our study design consisted of a retrospective cohort analysis of 413 patients at Staten Island University Hospital, a 700-bed tertiary referral center, who underwent an initial great toe (hallux) amputation between 2008 and 2013. Of the 413 patients with a hallux amputation, there were 368 eligible patients who had a history of DM with documented hemoglobin A1c (HbA1c) within 3 months of the initial first ray (hallux and first metatarsal) amputation and available radiographic data. Statistical analysis compared the incidence rates of reamputation between patients who underwent initial MRI and those who did not obtain an initial MRI prior to their first amputation. The reamputation rate was compared after adjustment for age, gender, ethnicity, HbA1c, cardiovascular disease, hypoalbuminemia, smoking, body mass index, and prior antibiotic treatment.
Results: The results of our statistical analysis failed to reveal a significant association between obtaining an initial MRI and the reamputation rate. We did, however, find a statistical association between obtaining an early MRI and decreased mortality rates.
Discussion: Obtaining an early MRI was not associated with the reamputation rate in the treatment of the diabetic foot. It did, however, have a statistically significant association with the mortality rate, as demonstrated by the increased survival rate in patients undergoing MRI prior to initial amputation.
abstract_id: PUBMED:22929524
A ten-year review of lower extremity burns in diabetics: small burns that lead to major problems. Diabetes mellitus with its resulting neurovascular changes may lead to an increased risk of burns and impaired wound healing. The purpose of this article is to review 10 years of experience with foot and lower leg burns in patients with diabetes at a single adult burn center. Patients with lower extremity burns and diabetes mellitus, between May 1999 and December 2009, were identified in the Trauma Registry of the American College of Surgeons database, and their charts were reviewed for data related to their outcomes. Sixty-eight diabetic patients, 87% male, with a mean age of 54 years, sustained foot or lower extremity burns with 37 having burns resulting from insensate feet. The pathogenesis included walking on a hot or very cold surface (8), soaking feet in hot water (22), warming feet on or near something hot such as a heater (13), or spilling hot water (7). The majority of patients were taking insulin (59.6%) or oral hyperglycemic medications (34.6%). Blood sugar levels were not well controlled (mean glucose, 215.8 mg/dl; mean hemoglobin A1c, 9.08%). Renal disease was common with admission serum blood urea nitrogen (27.5 mg/dl) and creatinine (2.21 mg/dl), and 13 were on dialysis preinjury. Cardiovascular problems were common with 39 (57%) having hypertension or cardiac disease, 3 having peripheral vascular disease, and 9, previous amputations. The mean burn size was 4.2% TBSA (range, 0.5-15%) with 57% being full thickness. Despite the small burn, the mean length of stay was 15.2 days (range, 1-95), with 5.65 days per 1% TBSA. Inability to heal these wounds was evident in 19 patients requiring readmission (one required 10 operative procedures). At least one patient sustained more than one burn. There were 62 complications with 30 episodes of infection (cellulitis, 28; osteomyelitis, 4; deep plantar infections, 2; ruptured Achilles tendon, 1) and 3 deaths. Eleven patients needed amputations (7 below-knee amputations, 4 transmetatarsal amputations, and 20 toe amputations) with several needing revisions or higher amputations. Patients with diabetes have an increased risk for lower extremity complications, but the risk of burns is not well known. The majority of lower extremity burns result from intentional exposure to sources of heat without recognition for the risk of burns. Once a burn occurs, morbidity and cost to the patient and society are severe. Prevention programs should be initiated to make diabetic patients and their doctors aware of the significant risk for burns.
abstract_id: PUBMED:27146807
Limb salvage versus amputation after severe lower extremity injury : Cases from clinical practice Following severe lower extremity injury, the potential outcome of a salvage procedure might often be questionable. Objective criteria should help in decision-making. From the clinical practice of a level I trauma center, we demonstrate three case reports and approaches following severe lower extremity injury.
abstract_id: PUBMED:14743027
Prevalence and patterns of foot injuries following motorcycle trauma. Objectives: To determine the prevalence and patterns of foot injuries following motorcycle trauma.
Design: Prospective.
Setting: Yorkshire Region Trauma Units (Level 1 trauma centers with trauma research).
Patients: Individuals injured in motorcycle road traffic accidents between January 1993 and December 1999.
Outcome Measurements: Patient demographics, protective devices (helmet) use, Injury Severity Score (ISS), Glasgow Coma Scale (GCS), clinical details, therapeutic interventions, resuscitation requirements, duration of hospital stay, mortality, and type of foot injuries sustained.
Results: The parent population of 1239 contained 53 (4.3%) foot-injured motorcyclists (49 men) with a mean age of 31.7 years (range 18-79 years). Fifty-two were drivers and one was a rear-seat passenger. Mean ISS was 6.9 (range 4-33), significantly lower than the parent population mean of 34.98 (range 9-75) (P = 0.001). Mean GCS was 14.7 (range 13-15). The motorcyclists' injuries included 26 metatarsal fractures (49.1%), 14 talar fractures (26.4%), 7 os calcis fractures (13.2%), and 6 toe fractures (11.3%). Associated foot injuries included three partial foot amputations, four Lisfranc dislocations, three cases of foot compartment syndrome (two crush injuries with no fracture, one open fourth metatarsal fracture with associated Lisfranc dislocation). Forty-six motorcyclists had more than one foot injury. Associated injuries included 22 ankle fractures (41.5%), 15 tibial fractures (28.3%), 6 femoral fractures (11.3%), 5 pelvic ring fractures (9.4%), 23 upper limb injuries (43.4%), and 3 cases of chest trauma (5.7%). No one sustained abdominal trauma or head injury compared with the parent population. All patients required operative stabilization of foot fractures, including their associated injuries. Mean hospital stay was 10.9 days (range 1-35 days). In the parent population, there were 71 deaths (6.0%), whereas there was only 1 death (1.9%) in the foot-injured group (with fractures including open book pelvic, T6-8, unilateral open femur, tibial, ankle, and metatarsal) with an ISS 33, who died of multiorgan dysfunction syndrome. At final follow-up, all patients underwent radiologic and clinical assessment of foot injuries. Forty-three patients returned to their previous occupation and level of mobility. Ten of the more significantly injured patients had to modify their occupation from manual to sedentary-type jobs due to their foot injuries. We noted a pattern of complex ipsilateral foot and limb injury in nine patients, which we postulate was due to the actual mechanism of contact with the road surface.
Conclusion: Motorcycle accidents continue to be a source of severe injury, especially to the foot. The most common foot injury is a metatarsal fracture; however, there must be a high index of suspicion for associated injuries. Although these injuries are associated with a low mortality rate, they require prompt assessment and treatment to limit long-term morbidity and disability. The difference in foot injury pattern and mortality between the parent population and our series, among other factors, potentially may be influenced by the actual mechanism of contact with the road surface and the modifying action of the foot during the accident.
abstract_id: PUBMED:36189677
Annual Foot Exams are Associated with Reduced Incident Amputation among Older Veterans with Diabetes. We conducted a secondary data analysis to evaluate the association between annual foot exams and incident lower extremity amputations (LEA) among older veterans with diabetes during FY2007-FY2014. Older Veterans with at least one primary care provider visit each year (N = 664,162) and at least one foot exam each year (N = 72,892) and the overlap were identified from the 5 years prior to the study period of interest (FY2002-FY2006 (N = 71,122)). After excluding incident LEA related to cancer and trauma, 71,018 veterans (mean age +/- SD, % male) were included in the final cohort, which was followed from FY2007-FY2014 to evaluate the influence of subsequent annual foot exams and incident LEA. Consistent annual foot exams were protective for incident LEA in older veterans with diabetes, adjusted OR was 0.85 (97% CI: 0.74-0.96). Results indicate that adherence to annual foot exam guidelines can reduce incident LEA in older veterans with diabetes.
Answer: The insensate foot following severe lower extremity trauma is not necessarily an indication for amputation. A study designed to determine the long-term outcomes following the treatment of severe lower extremity injuries in patients with absent plantar sensation at the time of initial presentation found that outcomes were not adversely affected by limb salvage, despite the presence of an insensate foot at the time of presentation. More than half of the patients who had presented with an insensate foot and were treated with limb reconstruction ultimately regained sensation at two years. Initial plantar sensation was not prognostic of long-term plantar sensory status or functional outcomes and should not be a component of a limb-salvage decision algorithm (PUBMED:16322607).
Additionally, minimally invasive surgery using a circular external fixator to correct stiff equinocavus deformities, which can be associated with insensate feet, has been shown to be a reliable initial limb salvage strategy. This approach may lead to improvement in outcomes and a lower amputation rate (PUBMED:37590307).
However, it is important to note that the presence of an insensate foot can lead to complications such as diabetic foot ulcers, which are a major factor in amputations among diabetic patients. The ulcers develop as a result of minor trauma or callus breakdown in the insensate foot, and infection and vascular insufficiency can lead to gangrene and amputation (PUBMED:8110884). Therefore, while an insensate foot is not an automatic indication for amputation, it does require careful management and monitoring to prevent complications that could eventually necessitate amputation. |
Instruction: Does size really matter?
Abstracts:
abstract_id: PUBMED:29201287
Does size really matter? A multisite study assessing the latent structure of the proposed ICD-11 and DSM-5 diagnostic criteria for PTSD. Background: Researchers and clinicians within the field of trauma have to choose between different diagnostic descriptions of posttraumatic stress disorder (PTSD) in the DSM-5 and the proposed ICD-11. Several studies support different competing models of the PTSD structure according to both diagnostic systems; however, findings show that the choice of diagnostic system can affect the estimated prevalence rates. Objectives: The present study aimed to investigate the potential impact of using a large (i.e. the DSM-5) compared to a small (i.e. the ICD-11) diagnostic description of PTSD. In other words, does the size of PTSD really matter? Methods: The aim was investigated by examining differences in diagnostic rates between the two diagnostic systems and independently examining the model fit of the competing DSM-5 and ICD-11 models of PTSD across three trauma samples: university students (N = 4213), chronic pain patients (N = 573), and military personnel (N = 118). Results: Diagnostic rates of PTSD were significantly lower according to the proposed ICD-11 criteria in the university sample, but no significant differences were found for chronic pain patients and military personnel. The proposed ICD-11 three-factor model provided the best fit of the tested ICD-11 models across all samples, whereas the DSM-5 seven-factor Hybrid model provided the best fit in the university and pain samples, and the DSM-5 six-factor Anhedonia model provided the best fit in the military sample of the tested DSM-5 models. Conclusions: The advantages and disadvantages of using a broad or narrow set of symptoms for PTSD can be debated; however, this study demonstrated that the choice of diagnostic system may influence estimated PTSD rates both qualitatively and quantitatively. Among the currently described diagnostic criteria, only the ICD-11 model satisfactorily reflects the configuration of symptoms. Thus, size does matter when assessing PTSD.
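The core quantitative point, that a narrower criteria set tends to yield a lower diagnostic rate than a broader one applied to the same respondents, can be illustrated with a toy scoring rule. The symptom clusters, probabilities, and thresholds below are deliberately simplified stand-ins and are not the actual ICD-11 or DSM-5 algorithms.

```python
# Toy illustration of how a narrower criteria set can yield lower diagnostic rates.
# The clusters and thresholds below are simplified stand-ins, not the real diagnostic algorithms.
import random

random.seed(42)

def simulate_respondent():
    # True = symptom cluster endorsed; probabilities are arbitrary.
    return {
        "reexperiencing": random.random() < 0.4,
        "avoidance": random.random() < 0.35,
        "threat": random.random() < 0.3,
        "negative_cognitions": random.random() < 0.45,
        "arousal": random.random() < 0.4,
    }

def meets_narrow(r):   # requires three specific "core" clusters
    return r["reexperiencing"] and r["avoidance"] and r["threat"]

def meets_broad(r):    # requires any three of the five clusters
    return sum(r.values()) >= 3

sample = [simulate_respondent() for _ in range(10_000)]
print("narrow-rule rate:", sum(map(meets_narrow, sample)) / len(sample))
print("broad-rule rate: ", sum(map(meets_broad, sample)) / len(sample))
```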
abstract_id: PUBMED:25916437
Size distribution characteristics of particulate matter in the top areas of coke oven. Objective: To systematically evaluate the environmental exposure of coke oven workers, we investigated the concentration and size distribution characteristics of particulate matter (PM) in the top working area of a coke oven.
Methods: The aerodynamic particle sizer spectrometer was employed to collect the concentration and size distribution information of PM at a top working area. The PM was divided into PM ≤ 1.0 µm, 1.0 µm < PM ≤ 2.5 µm, 2.5 µm < PM ≤ 5.0 µm, 5.0 µm < PM ≤ 10.0 µm and PM>10.0 µm based on their aerodynamic diameters. The number concentration, surface area concentration, and mass concentration were analyzed between different groups. We also conducted the correlation analysis on these parameters among groups.
Results: We found that the number and surface area concentrations of particulate matter in the top working area were negatively correlated with particle size, whereas the mass concentration curve was bimodal, with peaks at PM = 1.0 µm and PM = 5.0 µm. The average number concentration of total particulate matter in the top working area was 661.27 particles/cm³, the surface area concentration was 523.92 µm²/cm³, and the mass concentration was 0.12 mg/m³. Most particles were no larger than 1 µm (PM1.0), and their number and surface area concentrations accounted for 96.85% and 67.01% of the total particles, respectively. In the correlation analysis, particles of different sizes correlated differently with total particulate matter, and the characteristic parameters of PM2.5 could not fully reflect the information of the total particle population.
Conclusion: The main particulate matter pollutant in the top working area of the coke oven is PM1.0, which, together with PM5.0, accounts for a large proportion of the PM mass concentration. This suggests that PM1.0 and PM5.0 should be considered in occupational health surveillance of particulate matter in the top area of coke ovens.
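The contrast above between number-dominated fine particles and mass contributions from coarser bins follows from simple geometry once particles are approximated as spheres of known density. The sketch below makes that number-to-surface-to-mass conversion explicit; the unit-density and sphericity assumptions and the example size bins are illustrative, not values from the study.

```python
# Convert a number concentration per size bin into surface-area and mass concentrations,
# assuming spherical particles of uniform density. All numbers are illustrative.
import math

PARTICLE_DENSITY = 1.0e3  # kg/m^3, assumed unit density

# (bin midpoint diameter in micrometers, number concentration in particles/cm^3)
bins = [(0.5, 500.0), (1.5, 120.0), (3.5, 30.0), (7.5, 10.0), (12.0, 1.0)]

for d_um, n_per_cm3 in bins:
    d_m = d_um * 1e-6
    surface_um2 = math.pi * d_um**2 * n_per_cm3                       # µm^2 per cm^3 of air
    volume_fraction = (math.pi / 6) * d_m**3 * n_per_cm3 * 1e6        # m^3 particle per m^3 air
    mass_ug_m3 = volume_fraction * PARTICLE_DENSITY * 1e9             # µg per m^3 of air
    print(f"{d_um:5.1f} µm: surface {surface_um2:8.1f} µm²/cm³, mass {mass_ug_m3:8.3f} µg/m³")
```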
abstract_id: PUBMED:29287244
Investigating the differential contributions of sex and brain size to gray matter asymmetry. Scientific reports of sex differences in brain asymmetry - the difference between the two hemispheres - are rather inconsistent. Some studies report no sex differences whatsoever, others reveal striking sex effects, with large discrepancies across studies in the magnitude, direction, and location of the observed effects. One reason for the lack of consistency in findings may be the confounding effects of brain size as male brains are usually larger than female brains. Thus, the goal of this study was to investigate the differential contributions of sex and brain size to asymmetry with a particular focus on gray matter. For this purpose, we applied a well-validated workflow for voxel-wise gray matter asymmetry analyses in a sample of 96 participants (48 males/48 females), in which a subsample of brains (24 males/24 females) were matched for size. By comparing outcomes based on three different contrasts - all males versus all females; all large brains versus all small brains; matched males versus matched females - we were able to disentangle the contributing effects of sex and brain size, to reveal true (size-independent) sex differences in gray matter asymmetry: Males show a significantly stronger rightward asymmetry than females within the cerebellum, specifically in lobules VII, VIII, and IX. This finding agrees closely with prior research suggesting sex differences in sensorimotor, cognitive and emotional function, which are all moderated by the respective cerebellar sections. No other significant sex effects on gray matter were detected across the remainder of the brain.
abstract_id: PUBMED:30195677
Size distribution and source of heavy metals in particulate matter on the lead and zinc smelting affected area. In order to understand the size distribution and the main kind of heavy metals in particulate matter on the lead and zinc smelting affected area, particulate matter (PM) and the source samples were collected in Zhuzhou, Hunan Province from December 2011 to January 2012 and the results were discussed and interpreted. Atmospheric particles were collected with different sizes by a cascade impactor. The concentrations of heavy metals in atmospheric particles of different sizes, collected from the air and from factories, were measured using an inductively coupled plasma mass spectrometry (ICP-MS). The results indicated that the average concentration of PM, chromium (Cr), arsenic (As), cadmium (Cd) and lead (Pb) in PM was 177.3 ± 33.2 μg/m3, 37.3 ± 8.8 ng/m3, 17.3 ± 8.1 ng/m3, 4.8 ± 3.1 ng/m3 and 141.6 ± 49.1 ng/m3, respectively. The size distribution of PM displayed a bimodal distribution; the maximum PM size distribution was at 1.1-2.1 μm, followed by 9-10 μm. The size distribution of As, Cd and Pb in PM was similar to the distribution of the PM mass, with peaks observed at the range of 1.1-2.1 μm and 9-10 μm ranges while for Cr, only a single-mode at 4.7-5.8 μm was observed. PM (64.7%), As (72.5%), Cd (72.2%) and Pb (75.8%) were associated with the fine mode below 2.1 μm, respectively, while Cr (46.6%) was associated with the coarse mode. The size distribution characteristics, enrichment factor, correlation coefficient values, source information and the analysis of source samples showed that As, Cd and Pb in PM were the typical heavy metal in lead and zinc smelting affected areas, which originated mainly from lead and zinc smelting sources.
abstract_id: PUBMED:30384061
Effect of particle size on adsorption of norfloxacin and tetracycline onto suspended particulate matter in lake. Aquatic systems are important sinks of antibiotics; however, their final destination has not been completely elucidated. Therefore, we investigated the adsorption behaviors of suspended particulate matter (SPM) in lakes to support the analysis of the migration and transformation of antibiotics in lacustrine environments. SPM was collected from Meiliang Bay (ML) and Gonghu Bay (GH) in Lake Taihu, China, which was sieved into four particle sizes of >300, 150-300, 63-150, and <63 μm for subsequent antibiotic adsorption experiments. All particles exhibited rapid and substantial adsorption of tetracycline and norfloxacin. Most size fractions fit a Langmuir model, indicative of monomolecular adsorption, except the <63-μm fraction, which fit a Freundlich model. Particle size had a substantial influence on antibiotic adsorption; the 63-150-μm fraction had the greatest adsorption capacity, while the >300-μm fraction had the lowest capacity. The influence of particle size on adsorption was mainly related to SPM physicochemical properties, such as cation exchange capacity, surface area, and organic matter content, rather than types of functional groups. Considering the mass ratios, the <63-μm fraction had the greatest contribution to adsorption. Antibiotics adsorbed onto the SPM from ML and GH exhibited different behaviors. The ML SPM settled more readily into sediment, and larger, denser particles were more resistant to resuspension. Conversely, the GH SPM was more likely to be found in the water column, and larger, less-dense particles remained in the water column. These results help improve our understanding of the interactions between SPM and antibiotics in aquatic systems.
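For reference, the two isotherm models named above have the standard forms below, where q_e is the amount adsorbed at equilibrium and C_e is the equilibrium solution concentration; the parameter symbols follow the usual conventions rather than fitted values from the study.

```latex
% Langmuir (monolayer adsorption on a finite number of identical sites)
% and Freundlich (empirical, heterogeneous surface) isotherms:
\[
q_e = \frac{q_{\max}\, K_L\, C_e}{1 + K_L\, C_e}
\qquad\text{and}\qquad
q_e = K_F\, C_e^{1/n},
\]
% where q_max is the monolayer capacity, K_L the Langmuir constant,
% and K_F and n the Freundlich constants.
```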
abstract_id: PUBMED:18318338
Synopsis of the temporal variation of particulate matter composition and size. A synopsis of the detailed temporal variation of the size and number distribution of particulate matter (PM) and its chemical composition on the basis of measurements performed by several regional research consortia funded by the U.S. Environmental Protection Agency (EPA) PM Supersite Program is presented. This program deployed and evaluated a variety of research and emerging commercial measurement technologies to investigate the physical and chemical properties of atmospheric aerosols at a level of detail never before achieved. Most notably these studies demonstrated that systematic size-segregated measurements of mass, number, and associated chemical composition of the fine (PM2.5) and ultrafine (PM0.1) fraction of ambient aerosol with a time resolution down to minutes and less is achievable. A wealth of new information on the temporal variation of aerosol has been added to the existing knowledge pool that can be mined to resolve outstanding research and policy-related questions. This paper explores the nature of temporal variations (on time scales from several minutes to hours) in the chemical and physical properties of PM and its implications in the identification of PM formation processes, and source attribution (primary versus secondary), the contribution of local versus transported PM and the development of effective PM control strategies. The PM Supersite results summarized indicate that location, time of day, and season significantly influence not only the mass and chemical composition but also the size-resolved chemical/elemental composition of PM. Ambient measurements also show that ultrafine particles have different compositions and make up only a small portion of the PM mass concentration compared with inhalable coarse and fine particles, but their number concentration is significantly larger than their coarse or fine counterparts. PM size classes show differences in the relative amounts of nitrates, sulfates, crustal materials, and most especially carbon as well as variations in seasonal and diurnal patterns.
abstract_id: PUBMED:34989887
Effects of aerosol particle size on the measurement of airborne PM2.5 with a low-cost particulate matter sensor (LCPMS) in a laboratory chamber. Previous validation studies found a good linear correlation between the low-cost particulate matter sensors (LCPMS) and other research grade particulate matter (PM) monitors. This study aimed to determine if different particle size bins of PM would affect the linear relationship and agreement between the Dylos DC1700 (LCPMS) particle count measurements (converted to PM2.5 mass concentrations) and the Grimm 11R (research grade instrument) mass concentration measurements. Three size groups of PM2.5 (mass median aerodynamic diameters (MMAD): < 1 µm, 1-2 µm, and > 2 µm) were generated inside a laboratory chamber, controlled for temperature and relative humidity, by dispersing sodium chloride crystals through a nebulizer. A linear regression comparing 1-min average PM2.5 particle counts from the Dylos DC1700 (Dylos) to the Grimm 11R (Grimm) mass concentrations was estimated by particle size group. The slope for the linear regression was found to increase as MMAD increased (< 1 µm, 0.75 (R2 = 0.95); 1-2 µm, 0.90 (R2 = 0.93); and > 2 µm, 1.03 (R2 = 0.94)). The linear slopes were used to convert Dylos counts to mass concentration, and the agreement between converted Dylos mass and Grimm mass was estimated. The absolute relative error between converted Dylos mass and the Grimm mass was smaller in the < 1 µm group (16%) and 1-2 µm group (16%) compared to the > 2 µm group (32%). Therefore, the bias between converted Dylos mass and Grimm mass varied by size group. Future studies examining particle size bins over a wider range of coarse particles (> 2.5 µm) would provide useful information for accurately converting LCPMS counts to mass concentration.
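The count-to-mass conversion described above is a per-size-bin linear regression followed by an error check. The sketch below shows that workflow in miniature; it is not the study's code, and the paired observations are made-up numbers.

```python
# Minimal sketch: fit a slope mapping low-cost sensor counts to reference
# mass for one size bin, convert, then compute the mean absolute relative error.
import numpy as np

def fit_conversion(lcpms_counts, reference_mass):
    """Least-squares slope and intercept mapping counts to mass (ug/m3)."""
    slope, intercept = np.polyfit(lcpms_counts, reference_mass, deg=1)
    return slope, intercept

def mean_abs_relative_error(converted_mass, reference_mass):
    return float(np.mean(np.abs(converted_mass - reference_mass) / reference_mass))

counts = np.array([120.0, 180.0, 240.0, 310.0, 400.0])   # hypothetical counts
mass = np.array([10.0, 15.0, 20.0, 26.0, 33.0])          # hypothetical ug/m3
slope, intercept = fit_conversion(counts, mass)
converted = slope * counts + intercept
print(round(slope, 4), round(intercept, 2),
      round(mean_abs_relative_error(converted, mass), 3))
```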
abstract_id: PUBMED:35669811
The effect of size distribution of ambient air particulate matter on oxidative potential by acellular method Dithiothreitol; a systematic review. Today air pollution caused by particulate matter (PM) is a global issue, especially in densely populated and high-traffic cities. The formation of reactive oxygen species (ROS) is considered in various toxicological studies to be one of the important effects of airborne particles and can lead to adverse effects on human health. In this study, to answer the question of whether particle size affects oxidative potential (OP), we searched the main databases, including PubMed, Scopus, Embase, and Web of Science, and defined a search strategy based on MeSH terms for the above-mentioned search engines. All articles published until 2021 were searched. An ANOVA was run using R software to show the correlation between the size distributions of particulate matter and oxidative potential (based on mass and volumetric units) in ambient air. As expected, the regression results showed that the relationship between particle size and OP values for the studies based on mass units (log-transformed) differed significantly across the size distribution categories, with the difference lying between the <2.5 and <1 μm categories. However, the ANOVA did not show a significant difference in the volumetric OP logarithm across the size distribution categories. In this study, it was found that sizes higher than 2.5 μm did not have much effect on human health, and it is recommended that future research focus on PM2.5.
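The review's central test is an ANOVA of (log-transformed) oxidative potential across particle-size categories, run in R by the authors. Purely as an illustration of that kind of comparison, here is a one-way ANOVA in Python with hypothetical OP values; neither the numbers nor the language match the original analysis.

```python
# Minimal sketch: one-way ANOVA of log10 mass-normalized OP across three
# particle-size categories. All OP values are hypothetical.
import numpy as np
from scipy import stats

op_lt1 = np.log10([0.12, 0.18, 0.15, 0.22, 0.20])    # < 1 um
op_1to25 = np.log10([0.10, 0.14, 0.11, 0.16, 0.13])  # 1-2.5 um
op_gt25 = np.log10([0.04, 0.06, 0.05, 0.07, 0.05])   # > 2.5 um

f_stat, p_value = stats.f_oneway(op_lt1, op_1to25, op_gt25)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```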
abstract_id: PUBMED:28510903
Characterization of soil organic matter in perhumid natural cypress forest: comparison of humification in different particle-size fractions. Background: The Chamaecyparis forest is a valuable natural resource in eastern Asia. The characteristics of soil humic substances and the influence of environmental factors in natural Chamaecyparis forests in subtropical mountain regions are poorly understood. The study site of a perhumid Chamaecyparis forest is in the Yuanyang Lake Preserved Area in northcentral Taiwan. We collected samples from organic horizons (Oi, Oe and Oa) and from the surface horizon (O/A horizon) at the summit, footslope and lakeshore to characterize the composition of the soil organic matter. Samples of organic horizons were dried and ground, and those of the O/A horizon were passed through wet sieving for different particle-size fractions before analysis. The C chemical structure in the samples was determined with CP/MAS 13C NMR spectra.
Results: The ratios of alkyl-C/O-alkyl-C and aromaticity increased with decomposition of litter from the Oi, Oe, to Oa horizon. The ratio of alkyl-C/O-alkyl-C also increased from coarse (> 250 μm) to very fine (< 2 μm) particle fractions, which indicates increased humification of soil organic matter (SOM) in the fine-sized fractions. However, aromaticity tended to decrease with decreasing particle size, so it may not be useful in evaluating SOM humification of different particle-size fractions.
Conclusions: The humification degree of the samples from O horizons and different particle-size fractions of the O/A horizon showed no gradient change with change in topography. This prevalent slow decomposition of organic matter in these perhumid climate conditions may narrow the difference in humification from the summit to lakeshore.
abstract_id: PUBMED:33181520
Does Size Really Matter? Abstract: Ileocolonoscopy remains the mainstay of objective disease assessment in Crohn's Disease, and various validated indices are used to grade severity of the disease. The most commonly used indices are the Simple Endoscopic Score for Crohn's Disease (including the size of ulcers) and the Crohn's Disease Endoscopic Index of Severity (including the depth of ulcers). These measurements are highly subjective, especially the depth of an ulcer, and are based solely on the discretion of the endoscopist coupled with the imaging capabilities of the colonoscope and adequacy of the bowel prep. Narula et al. undertook a post hoc analysis of baseline predictors of endoscopic remission (ER) at week 26 in a subset (172 of 508) of moderate-severe Crohn's disease patients participating in the SONIC trial. The authors found no significant differences in the odds of achieving ER when comparing overall or segmental severe inflammation (high Simple Endoscopic Score for Crohn's Disease [>16 overall or >3 per segment] or Crohn's Disease Endoscopic Index of Severity [>12 overall or >3 per segment] scores) with moderate inflammation. The number of affected segments involved also did not impact the likelihood of achieving week 26 ER. The authors then found a potentially synergistic effect with large and deep ulcers in the ileum and rectum. The optimal time to assess whether ulcers ultimately heal or not is unknown, but waiting longer than 26 weeks may negate any lead time bias regarding ulcer size. Therefore, similar to many areas of life, it is likely that size ultimately does not matter, but instead location, location, and location.
Answer: The question "Does size really matter?" can be interpreted in various contexts based on the provided abstracts. In the context of posttraumatic stress disorder (PTSD) diagnostic criteria, size does matter. The study in abstract PUBMED:29201287 found that using a larger set of diagnostic criteria (DSM-5) compared to a smaller set (ICD-11) can influence the estimated PTSD rates both qualitatively and quantitatively. The ICD-11 model was found to reflect the configuration of symptoms satisfactorily, suggesting that the size of the diagnostic description impacts PTSD assessment.
In environmental health research, size also matters when it comes to particulate matter (PM). The size distribution of PM has implications for occupational health surveillance (PUBMED:25916437), the distribution and source of heavy metals in PM (PUBMED:30195677), and the adsorption behavior of antibiotics onto suspended particulate matter in lakes (PUBMED:30384061); size is likewise central to the differential contributions of sex and brain size to gray matter asymmetry (PUBMED:29287244). The size of PM affects its number concentration, surface area concentration, mass concentration, and the potential health risks associated with exposure.
Furthermore, the temporal variation of PM composition and size has significant implications for identifying PM formation processes, source attribution, and developing effective control strategies (PUBMED:18318338). The effect of aerosol particle size on the measurement of airborne PM2.5 with low-cost sensors was also found to vary by size group, affecting the accuracy of these measurements (PUBMED:34989887).
In terms of oxidative potential, particle size affects the formation of reactive oxygen species by airborne particles, with sizes higher than 2.5 μm having less effect on human health (PUBMED:35669811). In soil science, the humification degree in different particle-size fractions of soil organic matter does not show a gradient change with topography in perhumid climate conditions (PUBMED:28510903).
Lastly, in the medical field, the size of ulcers in Crohn's Disease patients was not found to significantly affect the odds of achieving endoscopic remission, suggesting that in this context, size may not be as important as other factors such as location (PUBMED:33181520).
In summary, across various fields of study, size generally does matter, as it can significantly impact diagnostic rates, environmental health assessments, and brain structure analyses, although in some clinical contexts, such as ulcer size in Crohn's disease, location appears to be more important than size.
Instruction: Randomized controlled trials in very preterm infants: does inclusion in the study result in any long-term benefit?
Abstracts:
abstract_id: PUBMED:24969309
Randomized controlled trials in very preterm infants: does inclusion in the study result in any long-term benefit? Background: Since the introduction of randomized controlled trials (RCT) in clinical research, there has been discussion of whether enrolled patients have worse or better outcomes than comparable non-participants.
Objective: To investigate whether very preterm infants randomized to a placebo group in an RCT have equivalent neurodevelopmental outcomes to infants who were eligible but not randomized (eligible NR).
Methods: In the course of an RCT investigating the neuroprotective effect of early high-dose erythropoietin on the neurodevelopment of very preterm infants, the outcome data of 72 infants randomized to placebo were retrospectively compared with those of 108 eligible NR infants. Our primary outcome measures were the mental (MDI) and psychomotor (PDI) developmental indices of the Bayley Scales of Infant Development II at 24 months of corrected age. The outcomes of the two groups were considered equivalent if the confidence intervals (CIs) of their mean differences fitted within our ±5-point margin of equivalence.
Results: Except for a higher socioeconomic status of the trial participants, both groups were balanced for most perinatal variables. The mean difference (90% CI) between the eligible NR and the placebo group was -2.1 (-6.1 and 1.9) points for the MDI and -0.8 (-4.2 and 2.5) points for the PDI. After adjusting for the socioeconomic status, maternal age and child age at follow-up, the mean difference for the MDI was -0.5 (-4.3 and 3.4) points.
Conclusions: Our results indicate that the participation of very preterm infants in an RCT is associated with equivalent long-term outcomes compared to non-participating infants.
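The equivalence logic used in this trial report (declare equivalence only when the 90% CI of the mean difference lies entirely within the ±5-point margin) is easy to make concrete. In the sketch below the mean difference mirrors the adjusted MDI value quoted above, while the standard error and degrees of freedom are back-calculated assumptions rather than study data.

```python
# Minimal sketch of an equivalence check via a confidence interval.
from scipy import stats

def equivalence_by_ci(mean_diff, se_diff, df, margin=5.0, alpha=0.10):
    """Two-sided (1 - alpha) CI; equivalent if it lies fully inside +/-margin."""
    t_crit = stats.t.ppf(1.0 - alpha / 2.0, df)
    lower = mean_diff - t_crit * se_diff
    upper = mean_diff + t_crit * se_diff
    return (lower, upper), (lower > -margin and upper < margin)

# -0.5 points is the adjusted MDI difference quoted above; SE and df are assumed.
ci, is_equivalent = equivalence_by_ci(mean_diff=-0.5, se_diff=2.33, df=170)
print(f"90% CI: ({ci[0]:.1f}, {ci[1]:.1f}); equivalent within +/-5 points: {is_equivalent}")
```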
abstract_id: PUBMED:21251154
Conducting randomized controlled trials with older people with dementia in long-term care: Challenges and lessons learnt. The characteristics of older people with dementia and the long-term care environment can make conducting research a challenge and, as such, this population and setting are often understudied, particularly in terms of clinical or randomized controlled trials. This paper provides a critical discussion of some of the difficulties faced whilst implementing a randomized controlled trial exploring the effect of a live music programme on the behaviour of older people with dementia in long-term care. A discussion of how these challenges were addressed is presented to aid investigators planning the design of similar research and help encourage a proactive approach in dealing with research-related challenges right from project conception. The article is structured according to the three principles of a randomized controlled trial in order to keep experimental rigour at the forefront of this research area.
abstract_id: PUBMED:28556820
Placebo by Proxy in Neonatal Randomized Controlled Trials: Does It Matter? Placebo effects emerging from the expectations of relatives, also known as placebo by proxy, have seldom been explored. The aim of this study was to investigate whether in a randomized controlled trial (RCT) there is a clinically relevant difference in long-term outcome between very preterm infants whose parents assume that verum (PAV) had been administered and very preterm infants whose parents assume that placebo (PAP) had been administered. The difference between the PAV and PAP infants with respect to the primary outcome (IQ at 5 years of age) was considered clinically irrelevant if the confidence interval (CI) for the mean difference resided within our pre-specified ±5-point equivalence margins. When adjusted for the effects of verum/placebo, socioeconomic status (SES), head circumference and sepsis, the CI was [-3.04, 5.67] points in favor of the PAV group. Consequently, our study did not show equivalence between the PAV and PAP groups, with respect to the pre-specified margins of equivalence. Therefore, our findings suggest that there is a small, but clinically irrelevant degree to which a preterm infant's response to therapy is affected by its parents' expectations; however, additional large-scale studies are needed to confirm this conjecture.
abstract_id: PUBMED:38452434
Effect of Placental Transfusion on Long-Term Neurodevelopmental Outcomes in Premature Infants: A Systematic Review and Meta-Analysis of Randomized Controlled Trials. Background: The pathophysiology and the potential risks of placental transfusion (PT) differ substantially in preterm infants, necessitating specific studies in this population. This study aimed to evaluate the safety and efficacy of PT in preterm infants from the perspective of long-term neurodevelopmental outcomes.
Methods: We conducted a systematic literature search using placental transfusion, preterm infant, and its synonyms as search terms. Cochrane Central Register of Controlled Trials, Medline, and Embase were searched until March 07, 2023. Two reviewers independently identified, extracted relevant randomized controlled trials, and appraised the risk of bias. The extracted studies were included in the meta-analysis of long-term neurodevelopmental clinical outcomes using fixed-effects models.
Results: A total of 5612 articles were identified, and seven randomized controlled trials involving 2551 infants were included in our meta-analysis. Compared with immediate cord clamping (ICC), PT may not impact adverse neurodevelopment events. No clear evidence was found of a difference in the risk of neurodevelopmental impairment (risk ratio [RR]: 0.89, 95% confidence interval [CI]: 0.76 to 1.03, P = 0.13, I2 = 0). PT was not associated with the incidence of cerebral palsy (RR: 1.23, 95% CI: 0.59 to 2.57, P = 0.79, I2 = 0). Analyses showed no differences between the two interventions in cognitive, language, and motor domains of neurodevelopment.
Conclusions: From the perspective of long-term neurodevelopment, PT at preterm birth may be as safe as ICC. Future studies should focus on standardized, high-quality clinical trials and individual participant data to optimize cord management strategies for preterm infants after birth.
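The pooled risk ratios above come from fixed-effects models. As a minimal illustration of inverse-variance fixed-effect pooling on the log risk-ratio scale (with hypothetical 2x2 counts, not the trials' data):

```python
# Minimal sketch: inverse-variance fixed-effect pooled risk ratio.
import math

def study_log_rr(events_tx, n_tx, events_ctrl, n_ctrl):
    rr = (events_tx / n_tx) / (events_ctrl / n_ctrl)
    se = math.sqrt(1 / events_tx - 1 / n_tx + 1 / events_ctrl - 1 / n_ctrl)
    return math.log(rr), se

studies = [(12, 150, 15, 148), (30, 400, 33, 395), (8, 90, 10, 92)]  # hypothetical
total_weight, weighted_sum = 0.0, 0.0
for e1, n1, e0, n0 in studies:
    log_rr, se = study_log_rr(e1, n1, e0, n0)
    w = 1.0 / se ** 2
    total_weight += w
    weighted_sum += w * log_rr

pooled_log_rr = weighted_sum / total_weight
pooled_se = math.sqrt(1.0 / total_weight)
lo = math.exp(pooled_log_rr - 1.96 * pooled_se)
hi = math.exp(pooled_log_rr + 1.96 * pooled_se)
print(f"pooled RR = {math.exp(pooled_log_rr):.2f} (95% CI {lo:.2f}-{hi:.2f})")
```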
abstract_id: PUBMED:14613596
Randomized controlled trials in long-term care residents with dementia: a systematic review. Objective: To evaluate the quality of reporting of randomized controlled trials for pharmacologic interventions in long-term care residents with dementia.
Data Sources: We performed electronic searches of AMED, CINAHL, E-PSYCHE, Cochrane Controlled Trials Register, and MEDLINE. We also searched the reference lists of included studies and bibliographies of relevant review articles.
Study Selection: All randomized controlled trials for pharmacologic interventions in patients with dementia residing in long-term care facilities.
Data Abstraction: We abstracted data independently, in duplicate, using a data abstraction sheet and a quality checklist.
Data Synthesis: Fifteen trials met inclusion criteria. Five trials lacked institutional ethical review, while two lacked informed consent. Eleven trials gave adequate description of withdrawals and 14 trials reported adverse events adequately. We found incomplete reporting of methods of randomization, allocation concealment, restriction, blinding, sample size estimation and intention-to-treat analysis. Sensitivity analysis indicated that reporting of allocation concealment was associated with increased quality of trial according to the quality scale (P = 0.007).
Conclusions: Clinicians and the public do not have high-quality information to guide pharmacologic decision making for long-term care residents with dementia. The reporting quality is highly variable in the trials reviewed, and concerns exist surrounding the conduct of several trials.
abstract_id: PUBMED:29169664
Long-term efficacy of psychotherapy for posttraumatic stress disorder: A meta-analysis of randomized controlled trials. Psychotherapies are well established as efficacious acute interventions for posttraumatic stress disorder (PTSD). However, the long-term efficacy of such interventions and the maintenance of gains following termination is less understood. This meta-analysis evaluated enduring effects of psychotherapy for PTSD in randomized controlled trials (RCTs) with long-term follow-ups (LTFUs) of at least six months duration. Analyses included 32 PTSD trials involving 72 treatment conditions (N=2935). Effect sizes were significantly larger for active psychotherapy conditions relative to control conditions for the period from pretreatment to LTFU, but not posttreatment to LTFU. All active interventions demonstrated long-term efficacy. Pretreatment to LTFU effect sizes did not significantly differ among treatment types. Exposure-based treatments demonstrated stronger effects in the posttreatment to LTFU period (d=0.27) compared to other interventions (p=0.005). Among active conditions, LTFU effect sizes were not significantly linked to trauma type, population type, or intended duration of treatment, but were strongly tied to acute dropout as well as whether studies included all randomized patients in follow-up analyses. Findings provide encouraging implications regarding the long-term efficacy of interventions and the durability of symptom reduction, but must be interpreted in parallel with methodological considerations and study characteristics of RCTs.
abstract_id: PUBMED:28715727
Efficacy and safety of long-term antidepressant treatment for bipolar disorders - A meta-analysis of randomized controlled trials. Objective: Efficacy and safety of long-term use of antidepressants (AD) in bipolar disorder (BD) patients remains highly controversial. Here we performed a meta-analysis of randomized controlled trials (RCTs) exploring the efficacy and safety of long-term AD use in BD patients.
Methods: English-language literature published in peer-reviewed journals was systematically searched from Pubmed, EMBASE, CENTRAL, PsycINFO and Clinicaltrials.gov. Each database was searched from its first available time to August 31, 2016. Additional papers were searched from recent guidelines, expert consensus and systematic reviews by hand. RCTs exploring the efficacy and safety of long-term (≥4m) antidepressant treatment for patients with bipolar disorder were eligible. Two authors (HF, JL) independently extracted the data. Risk ratio (RR), number needed to treat (NNT) and/or number needed to harm (NNH) for new depressive episodes and new manic/hypomanic episodes were calculated. Subgroup analyses were performed based on treatment regimen (AD monotherapy or AD combined with a mood stabilizer [MS]), types of antidepressants, funding source, bipolar subtypes and treatment duration.
Results: Eleven trials with 692 bipolar disorder patients were included in the meta-analysis. The risk of bias assessment demonstrated moderate bias risk. Antidepressants were superior to placebo in reducing new depressive episodes in bipolar disorders without increasing the risk of new manic/hypomanic episodes, whether used as monotherapy or in combination with MS. Subgroup analyses revealed that greater benefit and lower risk may be achieved in BD II than in BD I. However, compared with MS monotherapy, AD monotherapy significantly increased the risk of affective switch with no improvement in prophylaxis of new depressive episodes.
Conclusions: Reduced new depressive episodes may be achieved by long-term AD treatment with no significantly increased risk of new manic/hypomanic episodes in BD, particularly in BD II. The elevated risk of affective switch with AD monotherapy compared with MS monotherapy may be attributable to the protective effect of MS in diminishing manic/hypomanic episodes. Further studies are needed to verify our findings.
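Because the abstract reports risk ratios alongside number needed to treat (NNT) and number needed to harm (NNH), it may help to show how the latter follow from absolute risk differences. The event rates below are hypothetical, not values from this meta-analysis.

```python
# Minimal sketch: NNT/NNH as the reciprocal of the absolute risk difference.
def number_needed(rate_control, rate_treatment):
    """Return the number needed to treat (benefit) or harm, depending on sign."""
    diff = rate_control - rate_treatment
    if diff == 0:
        return float("inf")
    return 1.0 / abs(diff)

nnt_depression = number_needed(rate_control=0.40, rate_treatment=0.30)  # benefit
nnh_switch = number_needed(rate_control=0.08, rate_treatment=0.12)      # harm
print(f"NNT ~ {nnt_depression:.0f}, NNH ~ {nnh_switch:.0f}")
```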
abstract_id: PUBMED:28870134
Role of massage therapy on reduction of neonatal hyperbilirubinemia in term and preterm neonates: a review of clinical trials. Background: Neonatal hyperbilirubinemia (NNH) is one of the leading causes of admissions in nursery throughout the world. It affects approximately 2.4-15% of neonates during the first 2 weeks of life.
Aims: To evaluate the role of massage therapy for reduction of NNH in both term and preterm neonates.
Method: The literature search was done for various randomized control trials (RCTs) by searching the Cochrane Library, PubMed, and EMBASE.
Results: This review included total of 10 RCTs (two in preterm neonates and eight in term neonates) that fulfilled inclusion criteria. In most of the trials, Field massage was given. Six out of eight trials reported reduction in bilirubin levels in term neonates. However, only one trial (out of two) reported significant reduction in bilirubin levels in preterm neonates. Both trials in preterm neonates and most of the trials in term neonates (five trials) reported increased stool frequencies.
Conclusion: A role for massage therapy in the management of NNH is suggested by the current evidence. However, due to the limitations of the trials, the current evidence is not sufficient to support the use of massage therapy for the management of NNH in routine practice.
abstract_id: PUBMED:36574530
Efficacy of Therapeutic Exercise on Activities of Daily Living and Cognitive Function Among Older Residents in Long-term Care Facilities: A Systematic Review and Meta-analysis of Randomized Controlled Trials. Objectives: This study aimed to systematically analyze the efficacy of therapeutic exercise on activities of daily living (ADL) and cognitive function among older residents in long-term care facilities.
Data Sources: PubMed, Cochrane Central of Register Trials, Physiotherapy Evidence Database, OTseeker, and Ichushi-Web were searched from inception until December 2018.
Study Selection: Databases were searched to identify randomized controlled trials (RCTs) of therapeutic exercise for long-term care facility residents aged 60 years and older, focusing on ADL and cognitive function as outcomes.
Data Extraction: Two independent reviewers extracted the key information from each eligible study. Two reviewers independently screened and assessed all studies for eligibility, extracting information on study participants, details of interventions, outcome characteristics, and significant outcomes. Any discrepancies were resolved by a third reviewer.
Data Synthesis: A total of 11 RCTs with 1280 participants were eligible for analyses. Therapeutic exercise had a significant benefit on ADL (standard mean difference [SMD]=0.22, 95% confidence interval [CI]: 0.02, 0.42, P=.03). Subgroup analyses indicated that interventions conducted ≥3 days per week [SMD=0.42, 95% CI 0.02, 0.82, P=.04] had a significant benefit on ADL. For cognitive function, group exercise and ≥3 days/week of intervention had a significant benefit (group exercise: mean difference [MD]=3.36, 95% CI 0.91, 5.80, P=.007; ≥3 days/week of intervention: MD=2.28, 95% CI 0.07, 4.49, P=.04).
Conclusions: Therapeutic exercise conducted 3 or more days per week may be effective for improving ADL and cognitive function among older residents in long-term care facilities. This meta-analysis suggested that group exercise for cognitive functions was effective. However, the effective method of intervention delivery for ADL was unclear.
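The ADL result above is reported as a standardized mean difference. For orientation only, this is how the per-trial effect size that feeds such a pooled SMD is typically computed (Cohen's d with a pooled SD); the group summaries are hypothetical.

```python
# Minimal sketch: Cohen's d for one two-arm trial, using a pooled SD.
import math

def cohens_d(mean_tx, sd_tx, n_tx, mean_ctrl, sd_ctrl, n_ctrl):
    pooled_sd = math.sqrt(((n_tx - 1) * sd_tx ** 2 + (n_ctrl - 1) * sd_ctrl ** 2)
                          / (n_tx + n_ctrl - 2))
    return (mean_tx - mean_ctrl) / pooled_sd

# Hypothetical ADL scores: exercise arm vs control arm.
print(round(cohens_d(72.0, 14.0, 60, 68.5, 15.0, 58), 2))
```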
abstract_id: PUBMED:28368368
Review of Randomized Controlled Trials of Massage in Preterm Infants. Preterm birth affects about 10% of infants born in the United States. Massage therapy is being used in some neonatal intensive care units for its potential beneficial effects on preterm infants. This article reviews published randomized controlled trials on the effects of massage in preterm infants. Most studies evaluating the effect of massage in weight gain in premature infants suggest a positive effect on weight gain. Increase in vagal tone has been reported in infants who receive massage and has been suggested as a possible mechanism for improved weight gain. More studies are needed on the underlying mechanisms of the effects of massage therapy on weight gain in preterm infants. While some trials suggest improvements in developmental scores, decreased stress behavior, positive effects on immune system, improved pain tolerance and earlier discharge from the hospital, the number of such studies is small and further evidence is needed. Further studies, including randomized controlled trials, are needed on the effects of massage in preterm infants.
Answer: The question of whether inclusion in randomized controlled trials (RCTs) results in any long-term benefit for very preterm infants has been explored in several studies. According to the findings from one study, very preterm infants who were randomized to a placebo group in an RCT investigating the neuroprotective effect of early high-dose erythropoietin showed equivalent neurodevelopmental outcomes compared to infants who were eligible but not randomized. The primary outcome measures were the mental (MDI) and psychomotor (PDI) developmental indices of the Bayley Scales of Infant Development II at 24 months of corrected age. The results indicated that participation in an RCT is associated with equivalent long-term outcomes compared to non-participating infants (PUBMED:24969309).
Another study examined the effect of placental transfusion (PT) on long-term neurodevelopmental outcomes in preterm infants. The systematic review and meta-analysis included seven RCTs involving 2551 infants and found that PT may not impact adverse neurodevelopment events when compared with immediate cord clamping (ICC). The analyses showed no differences between the two interventions in cognitive, language, and motor domains of neurodevelopment, suggesting that PT at preterm birth may be as safe as ICC from the perspective of long-term neurodevelopment (PUBMED:38452434).
Additionally, a review of RCTs of massage in preterm infants indicated that most studies evaluating the effect of massage on weight gain in premature infants suggest a positive effect. Some trials also suggest improvements in developmental scores, decreased stress behavior, positive effects on the immune system, improved pain tolerance, and earlier discharge from the hospital. However, the number of such studies is small, and further evidence is needed to confirm these findings (PUBMED:28368368).
In summary, the evidence from these studies suggests that participation in RCTs does not result in worse long-term outcomes for very preterm infants: neurodevelopmental outcomes were equivalent to those of non-participants, and some interventions studied in trials may offer additional health-related benefits. However, more research is needed to fully understand the long-term impact of RCT participation on this population.
Instruction: Biparametric 3T Magnetic Resonance Imaging for prostatic cancer detection in a biopsy-naïve patient population: a further improvement of PI-RADS v2?
Abstracts:
abstract_id: PUBMED:34065851
Diagnostic Performance of PI-RADS v2, Proposed Adjusted PI-RADS v2 and Biparametric Magnetic Resonance Imaging for Prostate Cancer Detection: A Preliminary Study. Purpose: To evaluate the diagnostic performance of PI-RADS v2, proposed adjustments to PI-RADS v2 (PA PI-RADS v2) and biparametric magnetic resonance imaging (MRI) for prostate cancer detection.
Methods: A retrospective cohort of 224 patients with suspected prostate cancer was included from January 2016 to November 2018. All the patients underwent a multi-parametric MR scan before biopsy. Two radiologists independently evaluated the MR examinations using PI-RADS v2, PA PI-RADS v2, and a biparametric MRI protocol, respectively. Receiver operating characteristic (ROC) curves for the three different protocols were drawn.
Results: In total, 90 out of 224 cases (40.18%) were pathologically diagnosed as prostate cancer. The area under the ROC curves (AUC) for diagnosing prostate cancers by biparametric MRI, PI-RADS v2, and PA PI-RADS v2 were 0.938, 0.935, and 0.934, respectively. For cancers in the peripheral zone (PZ), the diagnostic sensitivity was 97.1% for PI-RADS v2/PA PI-RADS v2 and 96.2% for biparametric MRI. Moreover, the specificity was 84.0% for biparametric MRI and 58.0% for PI-RADS v2/PA PI-RADS v2. For cancers in the transition zone (TZ), the diagnostic sensitivity was 93.4% for PA PI-RADS v2 and 88.2% for biparametric MRI/PI-RADS v2. Furthermore, the specificity was 95.4% for biparametric MRI/PI-RADS v2 and 78.0% for PA PI-RADS v2.
Conclusions: The overall diagnostic performance of the three protocols showed minimal differences. For lesions assessed as category 3 using the biparametric MRI protocol, PI-RADS v2, or PA PI-RADS v2, it was thought that prostate cancer detection could be improved. Attention should be paid to false positive results when PI-RADS v2 or PA PI-RADS v2 is used.
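The AUC figures above summarize ROC analyses of an ordinal category score against biopsy pathology. A minimal sketch of that kind of computation, with hypothetical labels and PI-RADS-style categories rather than the study's data, could look like this:

```python
# Minimal sketch: AUC of an ordinal 1-5 score plus sensitivity/specificity
# at a 'category >= 4 is positive' cut-off. All data are hypothetical.
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([0, 0, 1, 0, 1, 1, 0, 1, 0, 1])   # 1 = cancer on pathology
score = np.array([2, 3, 4, 1, 5, 4, 3, 5, 2, 3])    # assigned category

auc = roc_auc_score(y_true, score)
positive = score >= 4
sensitivity = (positive & (y_true == 1)).sum() / (y_true == 1).sum()
specificity = (~positive & (y_true == 0)).sum() / (y_true == 0).sum()
print(f"AUC={auc:.2f}  sensitivity={sensitivity:.2f}  specificity={specificity:.2f}")
```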
abstract_id: PUBMED:34243992
Can high b-value 3.0 T biparametric MRI with the Simplified Prostate Image Reporting and Data System (S-PI-RADS) be used in biopsy-naïve men? Objective: To analyze the clinical value of high b-value 3.0 T biparametric magnetic resonance with the Simplified Prostate Image Reporting and Data System (S-PI-RADS) in biopsy-naïve men.
Methods: A retrospective analysis of the data of 224 patients who underwent prostate biopsy (cognitive fusion targeted biopsy combined with systematic biopsy) after a high b-value 3.0 T magnetic resonance examination at Haikou Hospital from July 2018 to July 2020 was performed. Two radiologists evaluated multiparametric magnetic resonance imaging (mp-MRI) with the Prostate Imaging Reporting and Data System version 2 (PI-RADS v2) and biparametric magnetic resonance imaging (bp-MRI) with the Simplified Prostate Image Reporting and Data System (S-PI-RADS). The detection efficacy of the two regimens was evaluated by classifying prostate cancer (PCa) and clinically significant prostate cancer (csPCa) according to pathology, and the statistical significance of the differences between the two regimens was determined by a Z-test.
Results: The area under the receiver operating curve (AUC) values of mp-MRI based on PI-RADS v2 and bp-MRI based on S-PI-RADS to detect PCa were 0.905 and 0.892, respectively, while the AUC values for the detection of csPCa were 0.919 and 0.906, respectively. There was no statistically significant difference between the two tests (Z values were 0.909 and 1.145, p > 0.05).
Conclusion: There was no significant difference in the detection efficacy for prostate cancer and clinically significant prostate cancer between high b-value bp-MRI scored with the S-PI-RADS and mp-MRI scored with the standard PI-RADS v2, so the bp-MRI protocol can be applied clinically.
abstract_id: PUBMED:27842676
Biparametric 3T Magnetic Resonance Imaging for prostatic cancer detection in a biopsy-naïve patient population: a further improvement of PI-RADS v2? Objectives: To prospectively determine the diagnostic accuracy of a biparametric 3T magnetic resonance imaging protocol (BP-MRI) for prostatic cancer detection, compared to a multiparametric MRI protocol (MP-MRI), in a biopsy naïve patient population.
Methods: Eighty-two untreated patients (mean age 65 ± 7.6 years) with clinical suspicion of prostate cancer and/or altered prostate-specific antigen (PSA) levels underwent MP-MRI, including T2-weighted imaging, diffusion-weighted imaging (with the corresponding apparent diffusion coefficient maps) and a dynamic contrast enhanced sequence, followed by prostate biopsy. Two radiologists reviewed both the BP-MRI and the MP-MRI protocols to establish a radiological diagnosis. Receiver operating characteristic curves were obtained to determine the diagnostic performance of the two protocols.
Results: The mean PSA level was 8.8 ± 8.1 ng/ml. A total of 34 prostatic tumors were identified, with a Gleason score that ranged from 3+3 to 5+4. Of these 34 tumors, 29 were located within the peripheral zone and 5 in the transitional zone. BP-MRI and MP-MRI showed a similar performance in terms of overall diagnostic accuracy, with an area under the curve of 0.91 and 0.93, respectively (p=n.s.).
Conclusions: BP-MRI prostate protocol is feasible for prostatic cancer detection compared to a standard MP-MRI protocol, requiring a shorter acquisition and interpretation time, with comparable diagnostic accuracy to the conventional protocol, without the administration of gadolinium-based contrast agent.
abstract_id: PUBMED:28012730
Magnetic resonance imaging of the prostate: interpretation using the PI-RADS V2. Objective: Version 2 of the Prostate Imaging and Reporting and Data System (PI-RADS) was developed to help in the detection, location, and characterization of prostate cancer with magnetic resonance imaging (MRI). Its recommendations for standardizing image acquisition parameters aims to reduce variability in the interpretation of MRI studies of the prostate; this approach, together with structured reporting, has the added value of improving communication among radiologists and between radiologists and urologists. This article aims to explain the PI-RADS v2 classification in a simple way, using illustrative images for each of the categories, as well as to recommend the use of a standard technique that helps ensure the reproducibility of multiparametric MRI.
Conclusion: The PI-RADS v2 is simple to apply when reading multiparametric MRI studies of the prostate. It is important for radiologists doing prostate imaging to use the PI-RADS v2 in daily practice to write clear and concise reports that improve communication between radiologists and urologists.
abstract_id: PUBMED:29908876
Prostate Imaging-Reporting and Data System Steering Committee: PI-RADS v2 Status Update and Future Directions. Context: The Prostate Imaging-Reporting and Data System (PI-RADS) v2 analysis system for multiparametric magnetic resonance imaging (mpMRI) detection of prostate cancer (PCa) is based on PI-RADS v1, accumulated scientific evidence, and expert consensus opinion.
Objective: To summarize the accuracy, strengths and weaknesses of PI-RADS v2, discuss pathway implications of its use and outline opportunities for improvements and future developments.
Evidence Acquisition: For this consensus expert opinion from the PI-RADS steering committee, clinical studies, systematic reviews, and professional guidelines for mpMRI PCa detection were evaluated. We focused on the performance characteristics of PI-RADS v2, comparing data to systems based on clinicoradiologic Likert scales and non-PI-RADS v2 imaging only. Evidence selections were based on high-quality, prospective, histologically verified data, with minimal patient selection and verifications biases.
Evidence Synthesis: It has been shown that the test performance of PI-RADS v2 in research and clinical practice retains higher accuracy over systematic transrectal ultrasound (TRUS) biopsies for PCa diagnosis. PI-RADS v2 fails to detect all cancers but does detect the majority of tumors capable of causing patient harm, which should not be missed. Test performance depends on the definition and prevalence of clinically significant disease. Good performance can be attained in practice when the quality of the diagnostic process can be assured, together with joint working of robustly trained radiologists and urologists, conducting biopsy procedures within multidisciplinary teams.
Conclusions: It has been shown that the test performance of PI-RADS v2 in research and clinical practice is improved, retaining higher accuracy over systematic TRUS biopsies for PCa diagnosis.
Patient Summary: Multiparametric magnetic resonance imaging (MRI) and MRI-directed biopsies using the Prostate Imaging-Reporting and Data System improves the detection of prostate cancers likely to cause harm, and at the same time decreases the detection of disease that does not lead to harms if left untreated. The keys to success are high-quality imaging, reporting, and biopsies by radiologists and urologists working together in multidisciplinary teams.
abstract_id: PUBMED:33102394
Detection of prostate cancer using prostate imaging reporting and data system score and prostate-specific antigen density in biopsy-naive and prior biopsy-negative patients. Background: Few studies report on indications for prostate biopsy using Prostate Imaging-Reporting and Data System (PI-RADS) score and prostate-specific antigen density (PSAD). No study to date has included biopsy-naïve and prior biopsy-negative patients. Therefore, we evaluated the predictive values of the PI-RADS, version 2 (v2) score combined with PSAD to decrease unnecessary biopsies in biopsy-naïve and prior biopsy-negative patients.
Materials And Methods: A total of 1,098 patients were included: biopsy-naïve patients who underwent multiparametric magnetic resonance imaging at our hospital before prostate biopsy and patients who underwent a second prostate biopsy after an initial benign (negative) prostatic biopsy. We identified factors associated with clinically significant prostate cancer (csPca). We assessed negative predictive values by stratifying biopsy outcomes by prior biopsy history and PI-RADS score combined with PSAD.
Results: The median age was 65 years (interquartile range: 59-70), and the median PSA was 5.1 ng/mL (interquartile range: 3.8-7.1). Multivariate logistic regression analysis revealed that age, prostate volume, PSAD, and PI-RADS score were independent predictors of csPca. In a biopsy-naïve group, 4% with PI-RADS score 1 or 2 had csPca; in a prior biopsy-negative group, 3% with PI-RADS score 1 or 2 had csPca. The csPca detection rate was 2.0% for PSA density <0.15 ng/mL/mL and 4.0% for PSA density 0.15-0.3 ng/mL/mL among patients with PI-RADS score 3 in a biopsy-naïve group. The csPca detection rate was 1.8% for PSA density <0.15 ng/mL/mL and 0.15-0.3 ng/mL/mL among patients with PI-RADS score 3 in a prior biopsy-negative group.
Conclusion: Patients with PI-RADS v2 score ≤2, regardless of PSA density, may avoid unnecessary biopsy. Patients with PI-RADS score 3 may avoid unnecessary biopsy through PSA density results.
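PSA density, on which the stratification above rests, is simple arithmetic: serum PSA divided by the MRI-estimated prostate volume. A tiny sketch with a hypothetical patient, using the density bands quoted in the abstract:

```python
# Minimal sketch: PSA density (ng/mL/mL) and the bands used above.
def psa_density(psa_ng_ml, prostate_volume_ml):
    return psa_ng_ml / prostate_volume_ml

psad = psa_density(5.1, 40.0)   # hypothetical patient: PSA 5.1 ng/mL, volume 40 mL
band = "<0.15" if psad < 0.15 else ("0.15-0.3" if psad < 0.30 else ">=0.3")
print(f"PSAD = {psad:.3f} ng/mL/mL (band {band})")
```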
abstract_id: PUBMED:33893066
Diagnostic Accuracy of Single-plane Biparametric and Multiparametric Magnetic Resonance Imaging in Prostate Cancer: A Randomized Noninferiority Trial in Biopsy-naïve Men. Background: Urological guidelines recommend multiparametric magnetic resonance imaging (mpMRI) in men with a suspicion of prostate cancer (PCa). The resulting increase in MRI demand might place health care systems under substantial stress.
Objective: To determine whether single-plane biparametric MRI (fast MRI) workup could represent an alternative to mpMRI in the detection of clinically significant (cs) PCa.
Design, Setting, And Participants: Between April 2018 and February 2020, 311 biopsy-naïve men aged ≤75 yr with PSA ≤15 ng/ml and negative digital rectal examination were randomly assigned to 1.5-T fast MRI (n = 213) or mpMRI (n = 98).
Intervention: All MRI examinations were classified according to Prostate Imaging-Reporting and Data System (PI-RADS) version 2. Men scored PI-RADS 1-2 underwent 12-core standard biopsy (SBx) and those with PI-RADS 4-5 on fast MRI or PI-RADS 3-5 on mpMRI underwent targeted biopsy in combination with SBx. Equivocal cases on fast MRI (PI-RADS 3) underwent mpMRI and then biopsy according to the findings.
Outcome Measurements And Statistical Analysis: The primary outcome was to compare the detection rate of csPCa in both study arms, setting a 10% difference for noninferiority. The secondary outcome was to assess the role of prostate-specific antigen density (PSAD) in ruling out men who could avoid biopsy among those with equivocal findings on fast MRI.
Results And Limitations: The overall MRI detection rate for csPCa was 23.5% (50/213; 95% confidence interval [CI] 18.0-29.8%) with fast MRI and 32.7% (32/98; 95% CI 23.6-42.9%) with mpMRI (difference 9.2%; p = 0.09). The reproducibility of the study could have been affected by its single-center nature.
Conclusions: Fast MRI followed by mpMRI in equivocal cases is not inferior to mpMRI in the detection of csPCa among biopsy-naïve men aged ≤75 yr with PSA ≤15 ng/ml and negative digital rectal examination. These findings could pave the way to broader use of MRI for PCa diagnosis.
Patient Summary: A faster MRI (magnetic resonance imaging) protocol with no contrast agent and fewer scan sequences for examination of the prostate is not inferior to the typical MRI approach in the detection of clinically significant prostate cancer. If our findings are confirmed in other studies, fast MRI could represent a time-saving and less invasive examination for men with suspicion of prostate cancer. This trial is registered at ClinicalTrials.gov as NCT03693703.
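To make the primary comparison above concrete, the sketch below recomputes the detection rates and their difference from the counts quoted in the abstract (50/213 and 32/98) and attaches a normal-approximation 95% CI to set against the 10% noninferiority margin. The trial's own statistical method may have differed, so treat this purely as an illustration.

```python
# Minimal sketch: detection-rate difference with a Wald-type 95% CI.
import math

def rate_diff_ci(x1, n1, x2, n2, z=1.96):
    p1, p2 = x1 / n1, x2 / n2
    diff = p2 - p1
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return p1, p2, diff, (diff - z * se, diff + z * se)

p_fast, p_mp, diff, ci = rate_diff_ci(50, 213, 32, 98)   # counts from the abstract
print(f"fast MRI {p_fast:.1%}, mpMRI {p_mp:.1%}, difference {diff:.1%}, "
      f"95% CI ({ci[0]:.1%}, {ci[1]:.1%}) vs a 10% noninferiority margin")
```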
abstract_id: PUBMED:26935594
Combination of prostate imaging reporting and data system (PI-RADS) score and prostate-specific antigen (PSA) density predicts biopsy outcome in prostate biopsy naïve patients. Objective: To assess the value of the Prostate Imaging Reporting and Data System (PI-RADS) scoring system, for prostate multi-parametric magnetic resonance imaging (mpMRI) to detect prostate cancer, and classical parameters, such as prostate-specific antigen (PSA) level, prostate volume and PSA density, for predicting biopsy outcome in biopsy naïve patients who have suspected prostate cancer.
Patients And Methods: Patients who underwent mpMRI at our hospital, and who had their first prostate biopsy between July 2010 and April 2014, were analysed retrospectively. The prostate biopsies were taken transperineally under transrectal ultrasonography guidance. In all, 14 cores were biopsied as a systematic biopsy in all patients. Two cognitive fusion-targeted biopsy cores were added for each lesion in patients who had suspicious or equivocal lesions on mpMRI. The PI-RADS scoring system version 2.0 (PI-RADS v2) was used to describe the MRI findings. Univariate and multivariate analyses were performed to determine significant predictors of prostate cancer and clinically significant prostate cancer.
Results: In all, 288 patients were analysed. The median patient age, PSA level, prostate volume and PSA density were 69 years, 7.5 ng/mL, 28.7 mL, and 0.26 ng/mL/mL, respectively. The biopsy results were benign, clinically insignificant, and clinically significant prostate cancer in 129 (45%), 18 (6%) and 141 (49%) patients, respectively. The multivariate analysis revealed that PI-RADS v2 score and PSA density were independent predictors for prostate cancer and clinically significant prostate cancer. When PI-RADS v2 score and PSA density were combined, a PI-RADS v2 score of ≥4 and PSA density ≥0.15 ng/mL/mL, or PI-RADS v2 score of 3 and PSA density of ≥0.30 ng/mL/mL, was associated with the highest clinically significant prostate cancer detection rates (76-97%) on the first biopsy. Of the patients in this group with negative biopsy results, 22% were subsequently diagnosed as prostate cancer. In contrast, a PI-RADS v2 score of ≤3 and PSA density of <0.15 ng/mL/mL yielded no clinically significant prostate cancer and no additional detection of prostate cancer on further biopsies.
Conclusions: A combination of PI-RADS v2 score and PSA density can help in the decision-making process before prostate biopsy and in the follow-up strategy in biopsy naïve patients. Patients with a PI-RADS v2 score of ≤3 and PSA density of <0.15 ng/mL/mL may avoid unnecessary biopsies.
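The combined PI-RADS/PSAD rule in the conclusions can be written down as a small decision helper. The thresholds below are those quoted in the abstract, but the function itself is only an illustration, not a validated clinical tool.

```python
# Minimal sketch of the reported PI-RADS v2 + PSA density stratification.
def biopsy_guidance(pirads, psad):
    if (pirads >= 4 and psad >= 0.15) or (pirads == 3 and psad >= 0.30):
        return "highest csPCa detection group reported - biopsy clearly indicated"
    if pirads <= 3 and psad < 0.15:
        return "no csPCa detected in this group - biopsy may be avoidable"
    return "intermediate - individualized decision"

print(biopsy_guidance(pirads=4, psad=0.26))
print(biopsy_guidance(pirads=3, psad=0.10))
```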
abstract_id: PUBMED:30895901
PI-RADS v2 and periprostatic fat measured on multiparametric magnetic resonance imaging can predict upgrading in radical prostatectomy pathology amongst patients with biopsy Gleason score 3 + 3 prostate cancer. Purpose: An underestimated biopsy Gleason score 3 + 3 can result in unfounded optimism amongst patients and cause physicians to miss the window for prostate cancer (PCa) cure. This study aims to evaluate the effectiveness of Prostate Imaging Reporting and Data System (PI-RADS) version 2 as well as periprostatic fat (PPF) measured on multiparametric magnetic resonance imaging (mp-MRI) at predicting pathological upgrading amongst patients with biopsy Gleason score 3 + 3 disease.
Patients And Methods: A retrospective analysis of 56 patients with biopsy Gleason score 6 PCa who underwent prebiopsy mp-MRI and radical prostatectomy (RP) between November 2013 and March 2018 was conducted. Two radiologists performed PI-RADS v2 score evaluation and different fat measurements on mp-MRI. The associations amongst clinical information, PI-RADS v2 score, different fat parameters and pathologic findings were analyzed. A nomogram predicting upgrading was established based on the results of logistic regression analysis.
Results: A total of 38 (67.9%) patients were upgraded to Gleason ≥7 disease on RP specimens. Prostate-specific antigen density (PSAD) (p < .001), positive core (p < .001), single-core positivity (p = .039), PI-RADS score (p < .001), front PPF area (p = .007) and front-to-total ratio (the ratio of front PPF area to total contour area) (p < .001) were risk factors for upgrading. On multivariate analysis, Epstein criteria (p = .02), PI-RADS score >3 (p = .024), and front-to-total ratio (p = .006) were independent risk factors for pathologic upgrading. The AUC value of the nomogram was 0.893 (95% CI, 0.787-0.999).
Conclusion: The combination of PI-RADS v2 and periprostatic fat measured on mp-MRI can help predict pathologic upgrading amongst patients with biopsy Gleason score 3 + 3 PCa.
abstract_id: PUBMED:35935910
Performance of multi-parametric magnetic resonance imaging through PIRADS scoring system in biopsy naïve patients with suspicious prostate cancer. Background: Use of multi-parametric magnetic resonance imaging (mp-MRI) and Prostate Imaging Reporting and Data System (PI-RADS) scoring system allowed more precise detection of prostate cancer (PCa). Our study aimed at evaluating the diagnostic performance of mp-MRI in detection of PCa.
Methods: Eighty-six patients suspected to have prostate cancer were enrolled. All patients underwent mp-MRI followed by systematic and targeted trans-rectal ultrasound (TRUS) guided prostate biopsies. Sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV) and accuracy of mp-MRI were evaluated.
Results: Forty-six patients (53.5%) had prostate cancer on targeted and systematic TRUS biopsies. On mp-MRI, 96.6% of lesions with PI-RADS < 3 proved to be benign on TRUS biopsy, 73.3% of lesions with PI-RADS 4 showed ISUP grades ≥1, whereas all PI-RADS 5 lesions showed high ISUP grades ≥3. Of PI-RADS 3 lesions, 62.5% proved to be benign and 37.5% showed ISUP grades ≥1 on TRUS biopsy. PI-RADS scores >3 had 69.57% sensitivity and 85% specificity for detection of PCa. On adding the equivocal PI-RADS 3 lesions, PI-RADS scores ≥3 had higher sensitivity (97.83%), but at the cost of lower specificity (32.5%).
Conclusion: Mp-MRI using the PI-RADS v2 scoring system with categories ≤3 and >3 could help in the detection of PCa. PI-RADS 3 lesions are equivocal. Including PI-RADS ≥3 lesions demonstrated higher sensitivity, but at the cost of lower specificity, for mp-MRI in the diagnosis of PCa.
Abbreviations: CDR: cancer detection rates; DRE: digital rectal examination; ISUP: international society of urological pathology; mp-MRI: multi-parametric magnetic resonance imaging; NPV: negative predictive value; PCa: prosatate cancer; PI-RADS: Prostate Imaging Reporting and Data System; PPV: Positive predictive value; PSA: prostate specific antigen; TRUS: transrectal ultrasound.
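For reference, the sensitivity/specificity figures above follow from a standard 2x2 confusion-matrix calculation, and PPV/NPV come from the same table. In the sketch below the counts are back-calculated from the quoted sensitivity (69.57%) and specificity (85%) assuming 46 cancers and 40 non-cancers, so they are illustrative rather than taken from the paper's tables.

```python
# Minimal sketch: diagnostic metrics from a 2x2 table (counts back-calculated).
def diagnostic_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

print(diagnostic_metrics(tp=32, fp=6, fn=14, tn=34))
```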
Answer: The diagnostic accuracy of biparametric 3T magnetic resonance imaging (MRI) for prostatic cancer detection in a biopsy-naïve patient population has been evaluated in comparison to multiparametric MRI protocols. The studies suggest that biparametric MRI is a feasible alternative to multiparametric MRI, with comparable diagnostic accuracy and the advantage of requiring shorter acquisition and interpretation time, without the need for gadolinium-based contrast agents (PUBMED:27842676).
A study by PUBMED:34065851 found that the overall diagnostic performance of biparametric MRI, PI-RADS v2, and proposed adjusted PI-RADS v2 (PA PI-RADS v2) showed minimal differences. For peripheral zone cancers, biparametric MRI had slightly lower sensitivity (96.2% vs 97.1%) but substantially higher specificity (84.0% vs 58.0%) than PI-RADS v2/PA PI-RADS v2. For transition zone cancers, biparametric MRI had a higher specificity than PA PI-RADS v2. The study concluded that attention should be paid to false positive results when using PI-RADS v2 or PA PI-RADS v2.
Another study (PUBMED:34243992) reported no significant difference in the detection efficacy of high b-value biparametric MRI based on the Simplified Prostate Image Reporting and Data System (S-PI-RADS) for prostate cancer and clinically significant prostate cancer compared with the standard PI-RADS v2 score with multiparametric MRI protocols.
In a randomized noninferiority trial (PUBMED:33893066), it was found that a single-plane biparametric MRI (fast MRI) workup followed by multiparametric MRI in equivocal cases is not inferior to multiparametric MRI in the detection of clinically significant prostate cancer among biopsy-naïve men.
These findings suggest that biparametric MRI can be a viable option for prostate cancer detection in biopsy-naïve patients, potentially offering a faster, less invasive examination with diagnostic performance comparable to that of the more established PI-RADS v2 multiparametric MRI protocol. |
Instruction: Nodal Yield: Is it a Prognostic Factor for Head and Neck Squamous Cell Carcinoma?
Abstracts:
abstract_id: PUBMED:34340270
Nodal tumor volume as a prognostic factor for head and neck squamous cell carcinoma: a systematic review. Introduction: Several studies suggest that there is an association between the metastatic nodal tumor volume and the clinical outcome in patients with solid cancers. However, despite the prognostic potential of nodal volume, a standardized method for estimating the nodal volumetric parameters is lacking. Herein, we conducted a systematic review of the published scientific literature towards investigating the prognostic value of nodal volume in the carcinomas of head and neck, taking into consideration the primary tumor site and the human papillomavirus (HPV) status. Methodological issues: For this purpose, the biomedical literature database PubMed/MEDLINE was searched for studies relevant to the relationship of nodal volume to the treatment outcome and survival in head and neck squamous cell carcinoma (HNSCC) patients. Collectively, based on stringent inclusion/exclusion criteria, 23 eligible studies were included in the present systematic review. Results: On the basis of our findings, nodal volume is suggested to be strongly associated with clinical outcomes in HNSCC patients. Of particular note, there is an indication that nodal volume is an independent factor for further risk stratification for recurrence-free survival in patients with squamous cell carcinoma of the pharynx (oropharynx and hypopharynx). Extranodal extension (ENE) and HPV status should be also taken into consideration in further studies.
abstract_id: PUBMED:37485190
Clinicopathological Parameters Predicting Nodal Metastasis in Head and Neck Squamous Cell Carcinoma. Introduction Squamous cell carcinoma (SCC) is the most common type of malignancy of the head and neck region arising from the mucosal epithelium of the oral cavity and oropharynx. It is a multifactorial disease with a high rate of mortality. Lymph node metastasis is an important prognostic parameter associated with adverse prognosis. This study was conducted to establish a relationship between various clinicopathological characteristics and nodal metastasis in head and neck squamous cell carcinoma (HNSCC). Methods This retrospective study was conducted at Liaquat National Hospital, Karachi, Pakistan. A total of 306 biopsy-proven cases of HNSCC were included in the study. Clinical data, which included age, sex, and site of the lesion, were obtained from the clinical referral forms. Resections of the lesions were performed, and the specimens collected were sent to the laboratory for histological evaluation. The histological subtype, perineural invasion (PNI), depth of invasion (DOI), nodal metastasis, and extranodal extension were assessed, and the association of clinicopathological parameters with nodal metastasis was sought. Results The mean age at diagnosis was 50.26 ± 12.86 years with a female predominance (55.27%), and the mean tumor size was 3.37 ± 1.75 cm. The mean DOI was 1.08 ± 0.67 cm. The most common site of tumor was found to be the oral cavity (68.6%), followed by the tongue (24.2%). Keratinizing SCC (59.5%) was found to be the most prevalent histological subtype. At the time of diagnosis, the majority of the tumors were grade 2 (62.4%). PNI was present in 12.1% of the cases. Nodal metastasis was present in 44.8%, and extranodal extension was present in 17% of the cases. A significant association of nodal metastasis was noted with age, gender, tumor site, tumor size, and DOI. Male patients with HNSCC showed a higher frequency of nodal metastasis than female patients. Patients between the ages of 31 and 50 years with a tumor size of above 4 cm and a DOI of more than 1 cm had a higher frequency of nodal metastasis. Similarly, tumors arising in the oral cavity and the keratinizing subtype were more likely to possess nodal metastasis. Conclusion We found that HNSCCs were more prevalent among the female population, with the most common site being the oral cavity. Nodal metastasis was significantly associated with the keratinizing subtype of SCC, oral cavity location, male gender, and middle age group. Similarly, the tumor size and DOI were important predictors of nodal metastasis in HNSCC in our study.
abstract_id: PUBMED:30527237
Nodal volume as a prognostic factor in locally advanced head and neck cancer: Identifying candidates for elective neck dissection after chemoradiation with IGRT from a single institutional prospective series from the Indian subcontinent. Objective: Nodal volume as a prognostic factor has been extensively evaluated in head and neck cancer; however, there is still no consensus. We attempted to analyze nodal volume as a prognostic factor in head and neck cancer treated with chemoradiation (CCRT) delivered with image-guided intensity-modulated radiotherapy (IG-IMRT), without an elective neck dissection.
Material And Methods: We prospectively analysed 87 patients with stage III-IV cancer of the oropharynx (57) and hypopharynx (30) who subsequently received definitive concurrent chemoradiation. Total nodal volume (TNV) was the sum of all lymph node volumes calculated by a volume algorithm from the planning CT. The impact of TNV on overall survival (OS) and regional control (RC) was assessed. Survival analysis was done using SPSS version 20.0 (SPSS, Chicago, Illinois). A receiver operating characteristic (ROC) curve analysis was done for estimation of cut-offs.
Results: The 2-year OS and RC were 64% and 83%, respectively. On multivariate analysis, TNV was a significant prognostic factor for OS and RC. ROC curve analysis found an optimal volumetric cut-off of 15 cc for OS and RC. The 2-year OS and RC for the <15 cc/>15 cc groups were 78%/30% (p = 0.001) and 100%/52% (p = 0.001), respectively. Similar results were obtained on subset analysis of our oropharyngeal patients, with 2-year OS of 75%/24% for the <15 cc and >15 cc groups (p = 0.001).
Conclusion: TNV is an independent prognostic factor for OS and RC in head and neck cancer. TNV can identify patients for consideration of elective neck dissection post-CCRT, i.e., patients with TNV > 15 cc.
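For readers who want to reproduce this kind of cutoff estimation, the sketch below derives a volumetric threshold from an ROC curve by maximizing Youden's J. It is an illustration on synthetic data only; the study itself used SPSS, and the 15 cc threshold comes from the authors' cohort, not from this code:

    import numpy as np
    from sklearn.metrics import roc_curve

    rng = np.random.default_rng(0)
    tnv = rng.gamma(shape=2.0, scale=10.0, size=200)   # total nodal volumes in cc (synthetic)
    # simulate regional failure with risk increasing around 15 cc (an assumption for the demo)
    failure = (rng.random(200) < 1.0 / (1.0 + np.exp(-(tnv - 15.0) / 5.0))).astype(int)

    fpr, tpr, thresholds = roc_curve(failure, tnv)
    j = tpr - fpr                                      # Youden's J at each candidate cutoff
    best = int(np.argmax(j))
    print(f"optimal TNV cutoff: {thresholds[best]:.1f} cc "
          f"(sensitivity={tpr[best]:.2f}, specificity={1 - fpr[best]:.2f})")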
abstract_id: PUBMED:32916013
The role of Nodal and Cripto-1 in human oral squamous cell carcinoma. Oral squamous cell carcinoma (OSCC) is a common epithelial malignancy of the oral cavity. Nodal and Cripto-1 (CR-1) are important developmental morphogens expressed in several adult cancers and are associated with disease progression. Whether Nodal and CR-1 are simultaneously expressed in the same tumor and how this affects cancer biology are unclear. We investigate the expression and potential role of both Nodal and CR-1 in human OSCC. Immunohistochemistry results show that Nodal and CR-1 are both expressed in the same human OSCC sample and that intensity of Nodal staining is correlated with advanced-stage disease. However, this was not observed with CR-1 staining. Western blot analysis of lysates from two human OSCC line experiments shows expression of CR-1 and Nodal, and their respective signaling molecules, Src and ERK1/2. Treatment of SCC25 and SCC15 cells with both Nodal and CR-1 inhibitors simultaneously resulted in reduced cell viability and reduced levels of P-Src and P-ERK1/2. Further investigation showed that the combination treatment with both Nodal and CR-1 inhibitors was capable of reducing invasiveness of SCC25 cells. Our results show a possible role for Nodal/CR-1 function during progression of human OSCC and that targeting both proteins simultaneously may have therapeutic potential.
abstract_id: PUBMED:23799167
Conversion from selective to comprehensive neck dissection: is it necessary for occult nodal metastasis? 5-year observational study. Objectives: To compare the therapeutic results between selective neck dissection (SND) and conversion modified radical neck dissection (MRND) for the occult nodal metastasis cases in head and neck squamous cell carcinoma.
Methods: Forty-four cases with occult nodal metastasis were enrolled in this observational cohort study. For twenty-nine cases, SNDs were done, and for fifteen cases, as metastatic nodes were found in the operative field, conversion from selective to type II MRND was done. Baseline data on primary site, T and N stage, extent of SND, extracapsular spread of the occult metastatic node, and type of postoperative adjuvant therapy were obtained. We compared the locoregional control rate, overall survival rate, and disease-specific survival rate between the two groups.
Results: Among the 29 patients who underwent SND, only one patient had a nodal recurrence, which occurred in the contralateral undissected neck. On the other hand, among the 15 patients who underwent conversion MRND, two patients had nodal recurrences, which occurred in the previously undissected neck. According to the Kaplan-Meier survival curves, there was no statistically significant difference in locoregional control rate, overall survival rate, or disease-specific survival rate between the two groups (P=0.2719, P=0.7596, and P=0.2405, respectively).
Conclusion: SND is sufficient to treat occult nodal metastasis in head and neck squamous cell carcinoma, and it is not necessary to convert from SND to comprehensive neck dissection.
abstract_id: PUBMED:30537291
Correlation of focal adhesion kinase expression with nodal metastasis in patients with head and neck cutaneous squamous cell carcinoma. Background: Focal adhesion kinase (FAK) and cortactin overexpression is frequently detected in a variety of cancers, and has been associated with poor clinical outcome. However, there are no data in cutaneous squamous cell carcinoma (cSCC).
Objective: To investigate the relationship of FAK and cortactin expression with the clinicopathologic features and the impact on the prognosis of cSCC patients.
Methods: FAK and cortactin expression was analyzed by immunohistochemistry on paraffin-embedded tissue samples from 100 patients with cSCC, and correlated with the clinical data.
Results: FAK overexpression was a significant risk factor for nodal metastasis, with crude and adjusted hazard ratios (HRs) of 2.04 (95% CI 1.08-3.86, P = 0.029) and 2.23 (95% CI 1.01-4.91, P = 0.047), respectively. Cortactin expression was not a significant risk factor for nodal metastasis.
Conclusion: These findings demonstrate that FAK overexpression is an independent predictor of nodal metastasis that might be helpful for risk stratification and management of patients with cSCC.
abstract_id: PUBMED:33871090
Unilateral versus bilateral nodal irradiation: Current evidence in the treatment of squamous cell carcinoma of the head and neck. Cancers of the head and neck region often present with nodal involvement. There is a long-standing convention within the community of head and neck radiation oncology to irradiate both sides of the neck electively in almost all cases to include both macroscopic and microscopic disease extension (so called elective nodal volume). International guidelines for the selection and delineation of the elective lymph nodes were published in the early 2000s and were updated recently. However, diagnostic imaging techniques have improved the accuracy and reliability of nodal staging and as a result, small metastases that used to remain undetected and were thus in the past included in the elective nodal volume, will now be included in high-dose volumes. Furthermore, the elective nodal areas are situated close to the parotid glands, the submandibular glands and the swallowing muscles. Therefore, irradiation of a smaller, more selected volume of the elective nodes could reduce treatment-related toxicity. Several researchers consider the current bilateral elective neck irradiation strategies an overtreatment and show growing interest in a unilateral nodal irradiation in selected patients. The aim of this article is to give an overview of the current evidence about the indications and benefits of unilateral nodal irradiation and the use of SPECT/CT-guided nodal irradiation in squamous cell carcinomas of the head and neck.
abstract_id: PUBMED:32205977
Clinical Spectrum, Pattern, and Level-Wise Nodal Involvement Among Oral Squamous Cell Carcinoma Patients - Audit of 945 Oral Cancer Patient Data. Oral squamous cell carcinoma (OSCC) is a locoregionally aggressive malignancy. Timely management of neck node dissemination, an important prognostic factor, impacts survival. The aim of the current study was to obtain comprehensive data on patterns of level-wise involvement of neck nodes to optimize neck management in OSCC. It was a retrospective analysis of a prospectively maintained database in a hospital-based setting. The current study evaluated patterns of spread to neck nodes in 945 pathologically proven OSCC patients who underwent neck dissection between 1995 and 2013. Clinical, surgical, pathological, and level-wise information on neck nodes was available, and records of these patients were analyzed in relation to the pattern of involvement. Absolute/relative frequency distribution was used to describe the distribution of categorical variables. Continuous measures were organized as mean (standard deviation) and/or median (range). Buccal mucosa (28.78%) was the most common, whereas lip (5.08%) was the least common oral subsite. Modified neck dissection (69.75%) was the most common type of neck dissection. Pathological node positivity was documented in 39.8% of patients, and levels I (62.54%) and II (57.33%) were the most common neck levels for nodal involvement. Involvement of levels III to V was seen less often (7.17%). There was no significant association between node positivity and the subsite of oral cancer. Neck levels I and II were the most commonly involved levels. Sensitivity and specificity of clinical assessment were 83.51% and 30.05%, respectively. In view of this void in clinical assessment and a predictable nodal spread, alternate node assessment methodology must be explored.
abstract_id: PUBMED:36288860
Prognostic Significance of the Nodal Ratio and Number of Positive Nodes in Patients With Squamous Cell Carcinomas of the Head and Neck. Background/aim: The lymph node status has high prognostic relevance in head and neck squamous cell carcinoma (HNSCC). This study aimed to address the hypothesis that the number of positive nodes and the nodal ratio have a prognostic impact on survival in HNSCC.
Patients And Methods: A retrospective analysis of 221 patients with HNSCC and clinical N+ status who underwent a neck dissection during primary surgery or after definitive radio(chemo)therapy was performed. The possible influence of age, sex, TNM stage, number of positive nodes and nodal ratio on survival was analyzed by univariate and multivariate Cox models and log-rank tests.
Results: On average, 30.1 lymph nodes were removed and 4.96 metastases were detected. The mean nodal ratio was 9.4% and the median nodal ratio was 5.3%. Multivariate analysis demonstrated that a nodal ratio of ≥6-<12.5% [hazard ratio (HR)=2.33, 95% confidence interval (CI)=1.24-4.37; p=0.008] and of ≥12.5% (HR=2.86, 95% CI=1.40-5.84; p=0.004) compared to a nodal ratio of 0, number of positive nodes pN=1 compared to number of positive nodes=0 (HR=2.02, 95% CI=1.08-3.80; p=0.029), as well as N3 compared to N0 (HR=8.10, 95% CI=1.89-34.66; p=0.005), and Mx compared to M0 (HR=2.76, 95% CI=1.59-4.79; p≤0.001) were of main importance for poor prognosis. Postoperative radio(chemo)therapy after surgery was associated with prolonged survival in multivariate analysis (HR=0.37, 95% CI=0.24-0.57; p≤0.001).
Conclusion: The nodal ratio and number of positive nodes seem to have a high prognostic impact in patients with HNSCC and can be of value in identifying patients at high risk who warrant more aggressive therapy.
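A minimal sketch of the kind of analysis reported here follows: the nodal ratio is computed from positive and examined node counts and entered into a Cox proportional hazards model. The toy data, column names and covariate choice are assumptions for illustration and do not reproduce the study's estimates (the lifelines package is assumed to be available):

    import pandas as pd
    from lifelines import CoxPHFitter

    df = pd.DataFrame({
        "months":         [12, 48, 30, 60, 9, 25, 54, 18, 36, 15],
        "died":           [1, 0, 1, 0, 1, 1, 0, 1, 0, 1],
        "positive_nodes": [4, 1, 6, 0, 10, 1, 0, 5, 6, 7],
        "nodes_examined": [30, 28, 35, 25, 32, 27, 40, 22, 31, 29],
        "age":            [61, 55, 70, 48, 66, 59, 52, 73, 68, 64],
    })
    df["nodal_ratio"] = df["positive_nodes"] / df["nodes_examined"]

    cph = CoxPHFitter()
    cph.fit(df[["months", "died", "nodal_ratio", "age"]],
            duration_col="months", event_col="died")
    cph.print_summary()   # the exp(coef) column gives the hazard ratios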
abstract_id: PUBMED:30275206
Suboptimal Outcomes in Cutaneous Squamous Cell Cancer of the Head and Neck with Nodal Metastases. Background/aim: There are limited data regarding survival, failure patterns, and factors associated with disease recurrence in patients with cutaneous squamous cell cancer of the head and neck (cSCC-HN) with nodal metastases.
Patients And Methods: A retrospective analysis of patients with cSCC-HN metastatic to cervical and/or parotid lymph nodes treated with surgery and post-operative radiation therapy was performed.
Results: This study included 76 patients (57 immunocompetent and 18 immunosuppressed) with a median follow-up of 18 months. Overall survival, disease-free survival (DFS), and disease recurrence (DR) at 2 years were 60%, 49%, and 40%, respectively. Immunosuppressed patients had significantly lower 2-year DFS (28% vs. 55%; p=0.003) and higher DR (61% vs. 34%; p=0.04) compared to immunocompetent patients. Analysis of immunocompetent patients demonstrated extracapsular extension (ECE) as the only factor associated with DR (p<0.0001).
Conclusion: Patients with nodal metastases from cSCC-HN have suboptimal outcomes. ECE and immunosuppression were significantly associated with DR.
Answer: The nodal yield, which refers to the number of lymph nodes removed and examined during surgery, is not explicitly mentioned as a prognostic factor in the provided abstracts. However, several abstracts discuss related concepts that are important for the prognosis of head and neck squamous cell carcinoma (HNSCC).
For instance, the nodal ratio, which is the proportion of positive nodes to the total number of nodes removed, is highlighted as having a high prognostic impact in patients with HNSCC (PUBMED:36288860). A higher nodal ratio is associated with a poorer prognosis. Similarly, the number of positive nodes is also mentioned as a prognostic factor, with a higher number of positive nodes correlating with worse outcomes (PUBMED:36288860).
Nodal volume, which is the sum of all lymph node volumes, is suggested to be an independent prognostic factor for overall survival and regional control in head and neck cancer (PUBMED:30527237). A larger nodal volume is associated with poorer clinical outcomes.
The presence of nodal metastasis itself is a significant prognostic factor, with various studies highlighting its association with adverse prognosis (PUBMED:37485190; PUBMED:30537291). Factors such as extranodal extension (ENE) and the presence of occult nodal metastasis are also important considerations in the prognosis and treatment planning for HNSCC (PUBMED:23799167; PUBMED:30275206).
In summary, while the specific term "nodal yield" is not discussed, the number of positive nodes, the nodal ratio, and the nodal volume are all related concepts that have been shown to be prognostic factors for HNSCC. These factors can influence treatment decisions, such as the extent of neck dissection and the need for adjuvant therapy. |
Instruction: Can composite performance measures predict survival of patients with colorectal cancer?
Abstracts:
abstract_id: PUBMED:25400466
Can composite performance measures predict survival of patients with colorectal cancer? Aim: To assess the relationship between long-term colorectal patient survival and methods of calculating composite performance scores.
Methods: The Taiwan Cancer Database was used to identify patients who underwent bowel resection for colorectal adenocarcinoma between 2003 and 2004. Patients were assigned to one of three cohorts based on tumor staging: cohort 1, colon cancer stage < III; cohort 2, colon cancer stage III; cohort 3, rectal cancer. A composite performance score (CPS) was calculated for each patient using five different aggregating methods, including all-or-none, 70% standard, equal weight, analytic hierarchy process (AHP), and principal component analysis (PCA) algorithms. The relationships between CPS and five-year overall, disease-free, and disease-specific survivals were evaluated by a Cox proportional hazards model. A goodness-of-fit analysis for all five methods was performed using Akaike's information criterion.
Results: A total of 3272 colorectal cancer patients (cohort 1, 1164; cohort 2, 790; cohort 3, 1318 patients) with a mean age of 65 years were enrolled in the study. Bivariate correlation analysis showed that CPS values from the equal weight method were highly correlated with those from the AHP method in all cohorts (all P < 0.05). Multivariate Cox hazards analysis showed that CPS values derived from equal weight and AHP methods were significantly associated with five-year survivals of patients in cohorts 1 and 2 (all P < 0.05). In these cohorts, higher CPS values suggested a higher probability of five-year survival. However, CPS values derived from the all-or-none method did not show any significant process-outcome relationship in any cohort. Goodness-of-fit analyses showed that CPS values derived from the PCA method were the best fit to the Cox proportional hazards model, whereas the values from the all-or-none model showed the poorest fit.
Conclusion: CPS values may highlight process-outcome relationships for patients with colorectal cancer in addition to evaluating quality of care performance.
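As a toy illustration of two of the five aggregation rules named above, the snippet below scores patients under an all-or-none rule and an equal-weight rule from binary process-of-care indicators. The five indicators are hypothetical placeholders, not the measures used in the Taiwan Cancer Database study; AHP and PCA weighting would replace the simple mean with expert- or data-derived weights:

    import numpy as np

    # rows = patients, columns = whether each recommended care process was received (1 = yes)
    care = np.array([
        [1, 1, 1, 1, 1],
        [1, 0, 1, 1, 1],
        [0, 0, 1, 0, 1],
    ])

    all_or_none = care.all(axis=1).astype(float)   # credit only if every process was met
    equal_weight = care.mean(axis=1)               # proportion of processes met

    for i, (a, e) in enumerate(zip(all_or_none, equal_weight), start=1):
        print(f"patient {i}: all-or-none CPS={a:.0f}, equal-weight CPS={e:.2f}")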
abstract_id: PUBMED:25373407
Developing and Evaluating Composite Measures of Cancer Care Quality. Background: Composite measures are useful for distilling quality data into summary scores; yet, there has been limited use of composite measures for cancer care.
Objective: Compare multiple approaches for generating cancer care composite measures and evaluate how well composite measures summarize dimensions of cancer care and predict survival.
Study Design: We computed hospital-level rates for 13 colorectal, lung, and prostate cancer process measures in 59 Veterans Affairs hospitals. We computed 4 empirical-factor (based on an exploratory factor analysis), 3 cancer-specific (colorectal, lung, prostate care), and 3 care modality-specific (diagnosis/evaluation, surgical, nonsurgical treatments) composite measures. We assessed correlations among all composite measures and estimated all-cause survival for colon, rectal, non-small cell lung, and small cell lung cancers as a function of composite scores, adjusting for patient characteristics.
Results: Four factors emerged from the factor analysis: nonsurgical treatment, surgical treatment, colorectal early diagnosis, and prostate treatment. We observed strong correlations (r) among composite measures comprised of similar process measures (r=0.58-1.00, P<0.0001), but not among composite measures reflecting different care dimensions. Composite measures were rarely associated with survival.
Conclusions: The empirical-factor domains grouped measures variously by cancer type and care modality. The evidence did not support any single approach for generating cancer care composite measures. Weak associations across different care domains suggest that low-quality and high-quality cancer care delivery may coexist within Veterans Affairs hospitals.
abstract_id: PUBMED:34697642
Composite Score: prognostic tool to predict survival in patients undergoing surgery for colorectal liver metastases. Background: Several existing scoring systems predict survival of patients with colorectal liver metastases. Many lack validation, rely on old clinical data, and have been found to be less accurate since the introduction of chemotherapy. This study aimed to construct and validate a clinically relevant preoperative prognostic model for patients with colorectal liver metastases.
Methods: A predictive model with data available before surgery was developed. Survival was analysed by Cox regression analysis, and the quality of the model was assessed using discrimination and calibration. The model was validated using multifold cross-validation.
Results: The model included 1212 consecutive patients who underwent liver resection for colorectal liver metastases between 2005 and 2015. Prognostic factors for survival included advanced age, raised C-reactive protein level, hypoalbuminaemia, extended liver resection, larger number of metastases, and midgut origin of the primary tumour. A Composite Score was developed based on the prognostic variables. Patients were classified into those at low, medium, and high risk. Survival differences between the groups were significant; median overall survival was 87.4 months in the low-risk group, 50.1 months in the medium-risk group, and 22.6 months in the high-risk group. The discriminative performance, assessed by the concordance index, was 0.71, 0.67, and 0.67 respectively at 1, 3, and 5 years. Calibration, assessed graphically, was close to perfect. A multifold cross-validation of the model confirmed its internal validity (C-index 0.63 versus 0.62).
Conclusion: The Composite Score categorizes patients into risk strata, and may help identify patients who have a poor prognosis, for whom surgery is questionable.
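The discrimination reported for the Composite Score (a concordance index around 0.63-0.71) can be checked with a few lines of code once a risk score exists. The sketch below uses lifelines' concordance_index on synthetic data; the risk score, its direction, and the simulated follow-up times are assumptions, not the published model:

    import numpy as np
    from lifelines.utils import concordance_index

    rng = np.random.default_rng(1)
    n = 100
    risk = rng.normal(size=n)                                   # higher = worse prognosis (assumed)
    months = rng.exponential(scale=60.0 * np.exp(-0.5 * risk))  # survival shortens as risk rises
    observed = rng.random(n) < 0.7                              # ~70% of deaths observed

    # concordance_index expects scores where larger values predict longer survival,
    # so the risk score is negated here.
    c = concordance_index(months, -risk, event_observed=observed)
    print(f"Harrell's C-index on the synthetic data: {c:.2f}")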
abstract_id: PUBMED:35425715
Molecular Biomarker of Drug Resistance Developed From Patient-Derived Organoids Predicts Survival of Colorectal Cancer Patients. The drug 5-fluorouracil (5-Fu) is a critical component of colorectal cancer (CRC) treatment. Prognostic and predictive molecular biomarkers for CRC patients (CRCpts) treated with 5-Fu-based chemotherapy can help tailor the treatment approach. Here, we established a molecular biomarker of 5-Fu resistance derived from colorectal cancer organoids (CRCOs) for predicting the survival of CRCpts. Forty-one CRCO cultures were generated from 50 CRC tumor tissues after surgery (82%). The subsequent experiments revealed great diversity in sensitivity to 10 μM 5-Fu treatment, assessed by change in organoid size. Fourteen cases (34.1%) were 5-Fu sensitive and the other 27 (65.9%) were resistant. Then, differentially expressed genes (DEGs) associated with 5-Fu resistance were identified by transcriptome sequencing. In particular, DEGs were generated in two comparison groups: 1) 5-Fu sensitive and resistant untreated CRCOs; 2) CRCOs before 5-Fu treatment and surviving CRCOs after 5-Fu treatment. Some molecules and most of the pathways that have been reported to be involved in 5-Fu resistance were identified in the current research. By using DEGs correlated with 5-Fu resistance and survival of CRCpts, the gene signature and drug-resistant score model (DRSM) containing five molecules were established in The Cancer Genome Atlas (TCGA)-CRC cohort by least absolute shrinkage and selection operator (LASSO) regression analysis and 5-fold cross-validation. Multivariate analysis revealed that the drug-resistant score (DRS) was an independent prognostic factor for overall survival (OS) in CRCpts in the TCGA-CRC cohort (P < 0.001). Further validation results from four Gene Expression Omnibus (GEO) cohorts elucidated that the DRSM, based on five genes related to 5-Fu chemosensitivity and developed from patient-derived organoids, can predict the survival of CRCpts. Meanwhile, our model could predict the survival of CRCpts in different subgroups. In addition, differences in molecular pathways, tumor mutational burden (TMB), immune response-related pathways, immune score, stromal score, and immune cell proportion were dissected between DRS-high and DRS-low patients in the TCGA-CRC cohort.
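The LASSO-plus-cross-validation step described above can be sketched as follows. This is a generic, hypothetical re-implementation on synthetic expression data; the gene names are just column indices, and the published five-gene drug-resistant score and its coefficients are not reproduced:

    import numpy as np
    from sklearn.linear_model import LogisticRegressionCV

    rng = np.random.default_rng(2)
    X = rng.normal(size=(80, 200))   # 80 organoid-derived samples x 200 candidate DEGs (synthetic)
    y = (X[:, 0] - X[:, 3] + rng.normal(scale=0.5, size=80) > 0).astype(int)   # 1 = 5-Fu resistant

    # L1-penalized logistic regression with 5-fold cross-validation keeps only a few genes
    lasso = LogisticRegressionCV(Cs=10, cv=5, penalty="l1", solver="liblinear",
                                 scoring="roc_auc").fit(X, y)
    selected = np.flatnonzero(lasso.coef_[0])
    print("genes retained by LASSO:", selected.tolist())

    # a drug-resistant score is then the linear predictor over the retained genes
    drs = X @ lasso.coef_[0] + lasso.intercept_[0]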
abstract_id: PUBMED:19067500
Benchmarking physician performance: reliability of individual and composite measures. Objective: To examine the reliability of quality measures to assess physician performance, which are increasingly used as the basis for quality improvement efforts, contracting decisions, and financial incentives, despite concerns about the methodological challenges.
Study Design: Evaluation of health plan administrative claims and enrollment data.
Methods: The study used administrative data from 9 health plans representing more than 11 million patients. The number of quality events (patients eligible for a quality measure), mean performance, and reliability estimates were calculated for 27 quality measures. Composite scores for preventive, chronic, acute, and overall care were calculated as the weighted mean of the standardized scores. Reliability was estimated by calculating the physician-to-physician variance divided by the sum of the physician-to-physician variance plus the measurement variance, and 0.70 was considered adequate.
Results: Ten quality measures had reliability estimates above 0.70 at a minimum of 50 quality events. For other quality measures, reliability was low even when physicians had 50 quality events. The largest proportion of physicians who could be reliably evaluated on a single quality measure was 8% for colorectal cancer screening and 2% for nephropathy screening among patients with diabetes mellitus. More physicians could be reliably evaluated using composite scores (<17% for preventive care, >7% for chronic care, and 15%-20% for an overall composite).
Conclusions: In typical health plan administrative data, most physicians do not have adequate numbers of quality events to support reliable quality measurement. The reliability of quality measures should be taken into account when quality information is used for public reporting and accountability. Efforts to improve data available for physician profiling are also needed.
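The reliability definition used above reduces to a one-line calculation once the variance components are known. The numbers below are invented for illustration; the study estimated the components from hierarchical models of claims data:

    # reliability = between-physician variance / (between-physician variance + measurement variance)
    def measure_reliability(between_physician_var, measurement_var):
        return between_physician_var / (between_physician_var + measurement_var)

    print(round(measure_reliability(0.010, 0.004), 2))   # 0.71, just above the 0.70 adequacy bar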
abstract_id: PUBMED:36420080
Tumor expression of CXCL12 and survival of patients with colorectal cancer: A meta-analysis. C-X-C motif chemokine ligand 12 (CXCL12) has been suggested as a possible biomarker of poor prognosis in patients with various malignancies. However, the association between tumor expression of CXCL12 and survival of patients with colorectal cancer (CRC) remains to be comprehensively analyzed. A meta-analysis to systematically evaluate this association was performed in the present study. Relevant cohort studies were retrieved by searching the PubMed, Embase and Web of Science databases from inception to March 22, 2022. A conservative random-effect model incorporating the possible influence of between-study heterogeneity was used to pool the results. A total of 14 cohort studies that included 2,060 patients with CRC contributed to the meta-analysis, and 1,055 (51.2%) of them had higher tumor expression levels of CXCL12. Pooled results showed that a higher tumor expression level of CXCL12 was associated with poor overall survival [hazard ratio (HR), 1.74; 95% confidence interval (CI), 1.29-2.34; P<0.001; I2, 33%] and progression-free survival (HR, 2.00; 95% CI, 1.47-2.73; P<0.001; I2, 33%). Subgroup analyses showed that the association between higher cancer expression levels of CXCL12 and poor survival in patients with CRC was not significantly affected by the country of the study, the location of the tumor, the cancer stage or the methods used for measuring tumor CXCL12 levels (all P>0.05). In conclusion, the study found that a higher tumor expression level of CXCL12 was associated with the poor survival of patients with CRC. Studies are warranted to determine if CXCL12-targeted intervention could improve the prognosis of patients with CRC.
abstract_id: PUBMED:33176752
Nutritional and inflammatory measures predict survival of patients with stage IV colorectal cancer. Background: This study aimed to evaluate the prognostic impact of nutritional and inflammatory measures (controlling nutritional status (CONUT) score, prognostic nutritional index (PNI), and modified Glasgow prognostic score (mGPS)) on overall survival (OS) in patients with stage IV colorectal cancer (CRC).
Methods: Subjects were 996 patients with stage IV CRC who were referred to the National Cancer Center Hospital between 2001 and 2015. We retrospectively investigated correlations between OS and CONUT score, PNI, and mGPS. Multivariate analyses were performed using Cox proportional hazards regression models.
Results: After adjusting for known factors (age, gender, BMI, ECOG performance status, location of primary tumor, CEA levels, histological type, M category, and prior surgical treatment), all three measures were found to be independent prognostic factors for OS in patients with stage IV CRC (CONUT score, p < 0.001; PNI, p < 0.001; mGPS, p < 0.001). Significant differences in OS were found between low CONUT score (0/1) (n = 614; 61%) and intermediate CONUT score (2/3) (n = 276; 28%) (hazard ratio (HR) = 1.20, 95% confidence interval (CI): 1.02-1.42, p = 0.032), and intermediate CONUT score and high CONUT score (≥4) (n = 106; 11%) (HR = 1.30, 95% CI: 1.01-1.67, p = 0.045). Significant differences in OS were found between mGPS = 0 (n = 633; 64%) and mGPS = 1 (n = 234; 23%) (HR = 1.84, 95% CI: 1.54-2.19, p < 0.001), but not between mGPS = 1 and mGPS = 2 (n = 129; 13%) (HR = 1.12, 95% CI: 0.88-1.41, p = 0.349). Patients with low PNI (< 48.0) (n = 443; 44%) showed a significantly lower OS rate than those with high PNI (≥48.0) (n = 553; 56%) (HR = 1.39, 95% CI: 1.19-1.62, p < 0.001).
Conclusions: CONUT score, PNI, and mGPS were found to be independent prognostic factors for OS in patients with stage IV CRC, suggesting that nutritional and inflammatory status is a useful host-related prognostic indicator in stage IV CRC.
abstract_id: PUBMED:29071263
Mortality and Survival after Surgical Treatment of Colorectal Cancer in Patients Aged over 80 Years. Objective: The purpose of this study was to identify the clinical factors and tumor characteristics that predict the outcome of colorectal cancer patients aged >80 years.
Materials And Methods: The data of 186 patients aged >80 years with colorectal cancer were collected from a computer database, and the variables were analyzed by both uni- and multivariate analyses.
Results: The 30-day mortality was 4% and the 90-day mortality 10%. The 1-year survival was 76%, and 27 (61%) of the 44 deaths were unrelated to cancer. The overall 5-year survival was 36%, the median survival 38 months, and the cancer-specific survival 40%. The recurrence rate after radical surgery was 22% and it was not affected by age. Kaplan-Meier estimates indicated that age, number of underlying diseases, radical operation, Union for International Cancer Control stage of the tumor, tumor size, number of lymph nodes involved, venous invasion, and recurrent disease were significant predictors of survival, but in the Cox regression model, only radical operation and venous invasion were independent prognostic factors for survival.
Conclusions: After good surgical selection, low early mortality and acceptable long-term survival can be achieved even in the oldest old patients with colorectal cancer. However, low early mortality seems to underestimate the effects of surgery during the first postoperative year.
abstract_id: PUBMED:36898057
Patient-Derived Tumor Organoids Can Predict the Progression-Free Survival of Patients With Stage IV Colorectal Cancer After Surgery. Background: Recent studies have shown patient-derived tumor organoids can predict the drug response of patients with cancer. However, the prognostic value of patient-derived tumor organoid-based drug tests in predicting the progression-free survival of patients with stage IV colorectal cancer after surgery remains unknown.
Objective: This study aimed to explore the prognostic value of patient-derived tumor organoid-based drug tests in patients with stage IV colorectal cancer after surgery.
Design: Retrospective cohort study.
Settings: Surgical samples were obtained from patients with stage IV colorectal cancer at the Nanfang Hospital.
Patients: A total of 108 patients who underwent surgery with successful patient-derived tumor organoid culture and drug testing were recruited between June 2018 and June 2019.
Interventions: Patient-derived tumor organoid culture and chemotherapeutic drug testing.
Main Outcome Measures: Progression-free survival.
Results: According to the patient-derived tumor organoid-based drug test, 38 patients were drug sensitive and 76 patients were drug resistant. The median progression-free survival was 16.0 months in the drug-sensitive group and 9.0 months in the drug resistant group ( p < 0.001). Multivariate analyses showed that drug resistance (HR, 3.38; 95% CI, 1.84-6.21; p < 0.001), right-sided colon (HR, 3.50; 95% CI, 1.71-7.15; p < 0.001), mucinous adenocarcinoma (HR, 2.47; 95% CI, 1.34-4.55; p = 0.004), and non-R0 resection (HR, 2.70; 95% CI, 1.61-4.54; p < 0.001) were independent predictors of progression-free survival. The new patient-derived tumor organoid-based drug test model, which includes the patient-derived tumor organoid-based drug test, primary tumor location, histological type, and R0 resection, was more accurate than the traditional clinicopathological model in predicting progression-free survival ( p = 0.001).
Limitations: A single-center cohort study.
Conclusions: Patient-derived tumor organoids can predict progression-free survival in patients with stage IV colorectal cancer after surgery. Patient-derived tumor organoid drug resistance is associated with shorter progression-free survival, and the addition of patient-derived tumor organoid drug tests to existing clinicopathological models improves the ability to predict progression-free survival.
abstract_id: PUBMED:35681721
Impact of Age on Multimodality Treatment and Survival in Locally Advanced Rectal Cancer Patients. Background: Optimal treatment for locally advanced rectal cancer is neoadjuvant (chemo)radiation followed by radical surgery. This is challenging in the aging population because of frequently concomitant comorbidity. We analyzed whether age below and above 70 years is associated with differences in treatment strategy and outcome in this population-based study. Methods: Data between 2008 and 2016 were extracted from the Netherlands Cancer Registry with follow-up until 2021. Differences in therapy, referral and outcome were analyzed using χ2 tests, multivariable logistic regression and relative survival analysis. Results: In total, 6524 locally advanced rectal cancer patients were included. A greater proportion of patients <70 years underwent resection compared to older patients (89% vs. 71%). Patients ≥70 years were more likely treated with neoadjuvant radiotherapy (OR 3.4, 95% CI 2.61−4.52), than with chemoradiation (OR 0.3, 95% CI 0.23−0.37) and less often referred to higher volume hospitals for resection (OR 0.7, 95% CI 0.51−0.87). Five-year relative survival after resection following neoadjuvant therapy was comparable and higher for both patients <70 years and ≥70 years (82% and 77%) than after resection only. Resection only was associated with worse survival in the elderly compared to younger patients (56% vs. 75%). Conclusion: Elderly patients with locally advanced rectal cancer received less intensive treatment and were less often referred to higher volume hospitals for surgery. Relative survival was good and comparable after optimal treatment in both age groups. Effort is necessary to improve guideline adherence, and multimodal strategies should be tailored to age, comorbidity and performance status.
Answer: Yes, composite performance measures can predict survival of patients with colorectal cancer. Studies have shown that composite performance scores (CPS) derived from different methods, such as equal weight and analytic hierarchy process (AHP), were significantly associated with five-year survivals of patients with colorectal cancer, suggesting that higher CPS values indicated a higher probability of five-year survival (PUBMED:25400466). Additionally, a study that developed a Composite Score based on prognostic variables for patients undergoing surgery for colorectal liver metastases found that this score could categorize patients into risk strata and may help identify those with a poor prognosis, for whom surgery might be questionable (PUBMED:34697642).
However, the relationship between composite measures and survival is not always straightforward. For instance, a study that compared multiple approaches for generating cancer care composite measures found that while there were strong correlations among composite measures comprised of similar process measures, these composite measures were rarely associated with survival (PUBMED:25373407). This suggests that the predictive power of composite measures may vary depending on how they are constructed and what aspects of care they summarize.
Moreover, other studies have identified additional factors and measures that can predict survival in colorectal cancer patients. For example, tumor expression of CXCL12 was associated with poor overall survival and progression-free survival (PUBMED:36420080), and nutritional and inflammatory measures such as the CONUT score, PNI, and mGPS were found to be independent prognostic factors for overall survival in patients with stage IV colorectal cancer (PUBMED:33176752). Furthermore, patient-derived tumor organoids have been shown to predict progression-free survival in patients with stage IV colorectal cancer after surgery (PUBMED:36898057).
In summary, while composite performance measures can be predictive of survival in patients with colorectal cancer, their effectiveness may depend on the specific measures included and the methods used to calculate them. Additionally, other biological and clinical factors also play a significant role in predicting patient outcomes. |
Instruction: Indices used in differentiation of thalassemia trait from iron deficiency anemia in pediatric population: are they reliable?
Abstracts:
abstract_id: PUBMED:12421257
Most reliable indices in differentiation between thalassemia trait and iron deficiency anemia. Background: Iron deficiency anemia (IDA) and thalassemia trait (TT) are the most common forms of microcytic anemia. Some discrimination indices calculated from red blood cell indices are defined and used for rapid discrimination between TT and IDA. However, there has been no study carried out in which the validity of all of the defined indices is compared in the same patient groups. Youden's index is the most reliable method by which to measure the validity of a particular technique, because it takes into account both sensitivity and specificity.
Methods: We calculated eight discrimination indices (Mentzer Index, England and Fraser Index, Srivastava Index, Green and King Index, Shine and Lal Index, red blood cell (RBC) count, red blood cell distribution width, and red blood cell distribution width index (RDWI)) in 26 patients with IDA and in 37 patients with beta TT (betaTT). We determined the number of correctly identified patients by using each discrimination index. We also calculated sensitivity, specificity, positive and negative predictive value, and Youden's index of each discrimination index.
Results: None of the discrimination indices showed a sensitivity and specificity of 100%. Youden's indices of RBC count and RDWI were the highest, with values of 82% and 80%, respectively. Ninety percent and 92% of the patients were correctly identified with RBC count and RDWI, respectively.
Conclusions: Red blood cell count and RDWI are the most reliable discrimination indices in differentiation between betaTT and IDA.
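The two indices this abstract singles out are simple arithmetic on the complete blood count, so a small sketch may help. The Mentzer index is MCV/RBC and the RDWI is MCV × RDW / RBC; the decision thresholds shown (13 and 220) are the commonly cited ones and are an assumption here, as are the example CBC values:

    def mentzer_index(mcv, rbc):
        return mcv / rbc                 # < 13 suggests beta-thalassemia trait, > 13 suggests IDA

    def rdwi(mcv, rdw, rbc):
        return mcv * rdw / rbc           # < 220 suggests beta-thalassemia trait, > 220 suggests IDA

    def youden(sensitivity, specificity):
        return sensitivity + specificity - 1.0

    # example CBC: MCV 62 fL, RDW 14%, RBC 5.8 x 10^12/L
    print(mentzer_index(62, 5.8), rdwi(62, 14, 5.8))   # both fall on the thalassemia-trait side
    print(youden(0.90, 0.92))                          # Youden's index for a hypothetical 90%/92% test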
abstract_id: PUBMED:22866672
Indices used in differentiation of thalassemia trait from iron deficiency anemia in pediatric population: are they reliable? Background: Iron deficiency anemia (IDA) and beta thalassemia trait (TT) are the most common causes of hypochromia and microcytosis. Many indices have been defined to quickly discriminate these similar entities via parameters obtained from automated blood cell analyzers. However, studies in the pediatric age group are scarce and their results are controversial.
Methods: We calculated eight discrimination indices [Mentzer Index (MI), England and Fraser Index (E&F), Srivastava Index (S), Green and King Index (G&K), Shine and Lal Index (S&L), red blood cell (RBC) count, RBC distribution width, and red blood cell distribution width Index (RDWI)] in 100 patients. We calculated sensitivity (SENS), specificity (SPEC), positive and negative predictive value (PPV and NPV), and Youden's Index (YI) of each discrimination index.
Results: None of the discrimination indices showed a SENS and SPEC of 100%. The highest SENS was obtained with S&L (87.1%), while the highest SPEC was obtained with E&F formula (100%). The highest YI value was obtained with E&F formula (58.1%).
Conclusion: In our study, none of the formulas appears reliable in discriminating between TT and IDA patients. The evaluation of iron status and measurement of hemoglobin A(2) (HbA(2)) remain the most reliable investigations to differentiate between TT and IDA patients.
abstract_id: PUBMED:23492565
Red cell indices: differentiation between β-thalassemia trait and iron deficiency anemia and application to sickle cell disease and sickle cell thalassemia. Background: In Tunisia, thalassemia and sickle cell disease (SS) represent the most prevalent monogenic hemoglobin disorders with 2.21% and 1.89% of carriers, respectively. This study aims to evaluate the diagnostic reliability of 12 red blood cell (RBC) indices in the differentiation of β-thalassemia trait (β-TT) from iron deficiency anemia (IDA) and between homozygous SS and sickle cell thalassemia (ST).
Methods: The study covered 384 patients divided into three groups. The first was a control group of 145 subjects, the second consisted of 57 β-TT and 52 IDA subjects, and the last comprised 88 SS and 42 ST patients. We calculated sensitivity, specificity, positive-predictive values, negative-predictive values, percentage of correctly identified patients and Youden's Index (YI) for each index. We also established new cut-off values by receiver operating characteristic curves for each index. An evaluation study was performed on another population composed of 106 β-TT, 125 IDA, 31 SS, and 17 ST patients.
Results: Srivastava Index (SI) showed the highest reliability in discriminating β-TT from IDA at 5.17 as a cut-off and also SS from ST with 7.7 as another threshold. Mentzer Index (MI) and RBC count also appeared useful in both groups, with new cut-offs slightly different from those described in the literature for β-TT and IDA.
Conclusions: The effectiveness and the simplicity of calculation of these indices make them acceptable and easy to use. They can be relied on for differential diagnosis and even for diagnosis of β-TT with atypical HbA₂ levels.
abstract_id: PUBMED:23800659
Red cell indices: differentiation between β-thalassemia trait and iron deficiency anemia and application to sickle-cell disease and sickle-cell thalassemia. Background: In Tunisia, thalassemia and sickle cell disease represent the most prevalent monogenic hemoglobin disorders with 2.21% and 1.89% of carriers, respectively. This study aims to evaluate the diagnostic reliability of a series of red blood cell indices and parameters in the differentiation of beta-thalassemia trait (β-TT) from iron deficiency anemia (IDA) and between homozygous sickle cell disease (SS) and sickle cell-thalassemia (ST).
Methods: The study covered 384 patients divided into three groups. The first was a control group of 145 subjects, the second consisted of 57 β-TT and 52 IDA subjects, and the last comprised 88 SS and 42 ST patients. We calculated sensitivity, specificity, positive-predictive values, negative-predictive values, percentage of correctly identified patients and Youden's index for each index. We also established new cut-off values by receiver operating characteristic curves for each index. An evaluation study was performed on another population composed of 106 β-TT, 125 IDA, 31 SS and 17 ST patients.
Results: Srivastava Index, mean corpuscular hemoglobin, red blood cell count, Mentzer Index (MI) and mean corpuscular hemoglobin concentration show the highest reliability in discriminating β-TT from IDA, with new cut-offs slightly different from those described in the literature. Ehsani Index, mean corpuscular volume, MI, Shine and Lal Index and Sirdah Index are the most powerful in the differentiation between SS and ST.
Conclusions: The effectiveness and the simplicity of calculation of these indices make them acceptable and easy to use for differential diagnosis.
abstract_id: PUBMED:37386931
Logistic-Nomogram model based on red blood cell parameters to differentiate thalassemia trait and iron deficiency anemia in southern region of Fujian Province, China. Background: Differentiation between thalassemia trait (TT) and iron deficiency anemia (IDA) is challenging and costly. This study aimed to construct and evaluate a model based on red blood cell (RBC) parameters to differentiate TT and IDA in the southern region of Fujian Province, China.
Methods: RBC parameters of 364 TT patients and 316 IDA patients were reviewed. An RBC parameter-based Logistic-Nomogram model to differentiate between TT and IDA was constructed by multivariate logistic regression analysis plus a nomogram, and then compared with 22 previously reported differential indices.
Results: The patients were randomly assigned to a training cohort (nTT = 248, nIDA = 223) and a validation cohort (nTT = 116, nIDA = 93). In the training cohort, multivariate logistic regression analysis identified RBC count, mean corpuscular hemoglobin (MCH), and MCH concentration (MCHC) as independent parameters associated with TT susceptibility. A nomogram was plotted based on these parameters, and the RBC parameter-based Logistic-Nomogram model g(μy) = 1.92 × RBC count − 0.51 × MCH + 0.14 × MCHC − 39.2 was devised. The area under the curve (AUC) (95% CI) was 0.95 (0.93-0.97); sensitivity and specificity at the best cutoff score (120.24) were 0.93 and 0.89, respectively; the accuracy was 0.91. In the validation cohort, the RBC parameter-based Logistic-Nomogram model had an AUC (95% CI) of 0.95 (0.91-0.98); sensitivity and specificity were 0.92 and 0.87, respectively; accuracy was 0.90. Moreover, compared with 22 reported differential indices, the RBC parameter-based Logistic-Nomogram model showed numerically higher AUC, net reclassification index, and integrated discrimination index (all p < 0.001).
Conclusion: The RBC parameter-based Logistic-Nomogram model shows high performance in differentiating patients with TT and IDA from the southern region of Fujian Province.
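A hedged re-implementation sketch of the modelling approach (not the published model) is shown below: logistic regression on RBC count, MCH and MCHC with an apparent AUC. The data are synthetic, so the fitted coefficients will not match the published formula g(μy) = 1.92 × RBC count − 0.51 × MCH + 0.14 × MCHC − 39.2 or its 120.24 nomogram cutoff:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(3)
    n = 300
    is_tt = rng.random(n) < 0.5    # 1 = thalassemia trait, 0 = IDA (synthetic labels)
    rbc  = np.where(is_tt, rng.normal(5.8, 0.5, n), rng.normal(4.4, 0.5, n))       # x10^12/L
    mch  = np.where(is_tt, rng.normal(20.0, 2.0, n), rng.normal(23.0, 2.0, n))     # pg
    mchc = np.where(is_tt, rng.normal(320.0, 10.0, n), rng.normal(305.0, 10.0, n)) # g/L
    X = np.column_stack([rbc, mch, mchc])

    model = LogisticRegression(max_iter=1000).fit(X, is_tt.astype(int))
    auc = roc_auc_score(is_tt, model.predict_proba(X)[:, 1])
    print(f"apparent AUC on the synthetic data: {auc:.2f}")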
abstract_id: PUBMED:28811791
Differentiation of beta thalassemia trait from iron deficiency anemia by hematological indices. Objective: We aimed to find a reliable parameter for the differentiation of iron deficiency anemia (IDA) and beta-thalassemia trait (β-TT) in the adult population subjected to the Saudi Arabian Premarital Screening Program.
Methods: A total of 620 adults (age range 21-36 years) presented for screening between February 2012 and November 2012. Tests for serum iron and ferritin were carried out in individuals showing low hemoglobin (Hb). All the selected subjects' samples were assessed for blood morphology and compared for MCV and RBC count. Red Cell Distribution Width (RDW) was noted from the Coulter report, whereas the Red Cell Distribution Width Index (RDWI) value was calculated for all the samples.
Results: A total of one hundred and thirty-five individuals with hypochromic microcytic anemia having normal hemoglobin F and hemoglobin A2 < 3.2% were included in the study. Ninety-three were diagnosed with IDA, whereas thirty-two had βTT. Ten individuals had other causes of anemia. The RBC count was higher, and the MCV was much lower, in βTT as compared to IDA. Both groups were evaluated with RDW and RDWI; however, RDWI showed better sensitivity and specificity for βTT.
Conclusion: RDWI is a reliable and useful index for differentiation between IDA and βTT, as compared to RDW.
abstract_id: PUBMED:36932722
Diagnostic Performance of 10 Mathematical Formulae for Identifying Blood Donors with Thalassemia Trait. Objective: To compare the diagnostic performance of 10 mathematical formulae for identifying thalassemia trait in blood donors.
Methods: Complete blood counts were conducted on peripheral blood specimens using the UniCel DxH 800 hematology analyzer. Receiver operating characteristic curves were used to evaluate the diagnostic performance of each mathematical formula.
Results: In the 66 donors with thalassemia and 288 subjects with no thalassemia analyzed, donors with thalassemia trait had lower values for mean corpuscular volume and mean corpuscular hemoglobin than donors without thalassemia (77 fL vs 86 fL [P < .001]; 25 pg vs 28 pg [P < .001]). The formula developed by Shine and Lal in 1977 showed the highest area under the curve value, namely, 0.9. At the cutoff value of <1812, this formula had maximum specificity of 82.35% and sensitivity of 89.58%.
Conclusions: Our data indicate that the Shine and Lal formula has remarkable diagnostic performance in identifying donors with underlying thalassemia trait.
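For context, the Shine and Lal index is commonly given as MCV² × MCH / 100 (the abstract does not restate the formula, so treat that as an assumption); applying the 1812 cutoff reported above to the mean values of the thalassemia-trait donors looks like this:

    def shine_lal(mcv, mch):
        return (mcv ** 2) * mch / 100.0

    value = shine_lal(mcv=77, mch=25)    # mean MCV/MCH reported for the thalassemia-trait donors
    print(value, "-> thalassemia trait suspected" if value < 1812 else "-> not suspected")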
abstract_id: PUBMED:33067975
Evaluation of Red Blood Cell Indices and Parameter Formulas in Differential Diagnosis of Thalassemia Trait and Iron Deficiency Anemia for Children in Shenzhen Area of Guangdong Province in China Objective: To evaluate the efficiency of red blood cell indices and formulas for the differential diagnosis of the thalassemia trait (TT) and iron deficiency anemia (IDA) for children in the Shenzhen area of Guangdong Province in China.
Methods: A total of 849 child patients from Shenzhen were enrolled, including 536 cases of TT and 313 cases of IDA. The sensitivity (SEN), specificity (SPE), positive predictive values (PPV), negative predictive value (NPV), and Youden's indices (YI) were analyzed using five red blood cell indices [including red blood cell count, average red blood cell volume (MCV), average amount of red blood cell hemoglobin (MCH), red blood cell hemoglobin concentration (MCHC), and red blood cell distribution width (RDW)] and 10 red blood cell parameter formulas including Mentzer, Green and King, Srivastava, Ricerca, RDWI, Sirdah, Huber-Herklotz, Ehsani, Shine and Lal, and England and Fraser. Receiver operating characteristic (ROC) curves were drawn.
Results: Green and King was the most reliable index, as it had the highest YI (63.7%) and area under the ROC curve (AUC) (0.875), with SEN and SPE of 82.5% and 81.2%, respectively. The YI, SEN, SPE, and AUC for RDWI were 62.8%, 79.1%, 83.7%, and 0.870, respectively.
Conclusion: The formulas of Green and King and RDWI can be used for the differential diagnosis of TT and IDA and are suitable for children in Shenzhen, China.
abstract_id: PUBMED:32564505
Application of HbA2 levels and red cell indices-based new model in the differentiation of thalassemia traits from iron deficiency in hypochromic microcytic anemia Cases. Introduction: Thalassemia traits and iron deficiency anemia are the most common types of hypochromic microcytic anemia with similar clinical and laboratory features. It is vital to establish a new screening model based on HbA2 levels and red cell indices for the differentiation of TT from IDA in hypochromic microcytic anemia cases.
Method: The data comprised the red blood cell indices and HbA2 prenatal diagnostic test results of 810 individuals who met the following criteria: MCV < 80 fl or MCH < 26 pg. We developed a new model consisting mainly of significant red cell indices and HbA2 levels, and proposed cutoff values by using decision trees and logistic regression analyses. Next, we evaluated our new method by comparing the sensitivity, specificity, and positive and negative predictive values with those of the previous formulas.
Results: We put forward a new model and compared it with 5 efficient formulas. The new model exhibited the highest accuracy (0.918), with its sensitivity and specificity calculated as 0.917 and 0.921, respectively. Our new model's Youden index was 0.838, which is higher than the other formulas' Youden indices.
Conclusions: The new screening model, based on HbA2 levels and red cell indices, is suitable for the screening of thalassemia patients in the hypochromic microcytic anemia group and has the best efficiency in distinguishing TT and IDA.
abstract_id: PUBMED:35936522
The Evaluation of Results of Twenty Common Equations for Differentiation of Beta Thalassemia Trait from Iron Deficiency Anemia: A Cross-Sectional Study. Background: Beta Thalassemia Trait (BTT) and Iron Deficiency Anemia (IDA) are two common clinical problems with hypochromic and microcytic manifestations, and their differentiation from each other is very important and requires innovative formulas and laboratory tests. Since a couple in which both partners carry BTT has a 25% chance of a beta-thalassemia major birth, offering prospective parents detailed information about the likelihood of their offspring developing BTT is essential. The present study aimed to investigate the performance of common equations in the differentiation of BTT from IDA.
Methods: In the present cross-sectional study, twenty common equations were selected for the differentiation of BTT from IDA. To evaluate the equations, the test results of 292 individuals (73 individuals with BTT and 219 individuals with IDA) were compared with the initial diagnosis of hypochromic and microcytic anemia using the formulas. Descriptive and value indices and ROC curves were utilized for all equations to analyze the results.
Results: Among the twenty differential equations, the Bordbar, Kerman I, Kerman II, and Srivastava equations had the highest areas under the ROC curve (AUC), at 0.841, 0.838, 0.836, and 0.830, respectively, whereas the Kandhro I equation had the lowest AUC (0.378).
Conclusion: Given the importance of the AUC and value indices of differential equations in clinical decision making, and the results of evaluating common equations in the differentiation of BTT from IDA, it is essential to improve the values of the equations by re-examining the parameters involved in them.
Answer: The reliability of indices used in the differentiation of thalassemia trait (TT) from iron deficiency anemia (IDA) in the pediatric population appears to be variable and not entirely consistent. According to the study by PUBMED:22866672, none of the discrimination indices, including the Mentzer Index (MI), England and Fraser Index (E&F), Srivastava Index (S), Green and King Index (G&K), Shine and Lal Index (S&L), red blood cell (RBC) count, RBC distribution width, and red blood cell distribution width Index (RDWI), showed 100% sensitivity and specificity. The highest sensitivity was obtained with S&L (87.1%), while the highest specificity was obtained with E&F formula (100%). However, the highest Youden's Index value was obtained with the E&F formula (58.1%). The study concluded that none of the formulas appeared reliable in discriminating between TT and IDA patients and suggested that the evaluation of iron status and measurement of hemoglobin A2 (HbA2) remain the most reliable investigations to differentiate between TT and IDA patients.
In contrast, another study by PUBMED:33067975 evaluated the efficiency of red blood cell indices and formulas for differential diagnosis of TT and IDA in children in the Shenzhen area of Guangdong Province in China. The study found that the Green and King formula was the most reliable index, with the highest Youden's Index (63.7%) and area under the ROC curve (AUC) (0.875), with sensitivity and specificity of 82.5% and 81.2%, respectively. The RDWI also showed good reliability with a Youden's Index of 62.8%, sensitivity of 79.1%, and specificity of 83.7%.
These findings suggest that while some indices may offer a degree of reliability in differentiating TT from IDA in the pediatric population, their performance is not perfect, and they may not be universally reliable. The use of multiple indices in combination with clinical assessment and other laboratory tests, such as iron status and HbA2 measurement, may improve diagnostic accuracy. |
Instruction: Care outcomes in long-term care facilities in British Columbia, Canada. Does ownership matter?
Abstracts:
abstract_id: PUBMED:17001264
Care outcomes in long-term care facilities in British Columbia, Canada. Does ownership matter? Objectives: This study investigated whether for-profit (FP) versus not-for-profit (NP) ownership of long-term care facilities resulted in a difference in hospital admission and mortality rates among facility residents in British Columbia, Canada.
Research Design: This retrospective cohort study used administrative data on all residents of British Columbia long-term care facilities between April 1, 1996, and August 1, 1999 (n = 43,065). Hospitalizations were examined for 6 diagnoses (falls, pneumonia, anemia, dehydration, urinary tract infection, and decubitus ulcers and/or gangrene), which are considered to be reflective of facility quality of care. In addition to FP versus NP status, facilities were divided into ownership subgroups to investigate outcomes by differences in governance and operational structures.
Results: We found that, overall, FP facilities demonstrated higher adjusted hospitalization rates for pneumonia, anemia, and dehydration and no difference for falls, urinary tract infections, or decubitus ulcers/gangrene. FP facilities demonstrated higher adjusted hospitalization rates compared with NP facilities attached to a hospital, amalgamated into a regional health authority, or that were multisite. This effect was not present when comparing FP facilities to NP single-site facilities. There was no difference in mortality rates in FP versus NP facilities.
Conclusions: The higher adjusted hospitalization rates in FP versus NP facilities are consistent with previous research from U.S. authors. However, the superior performance by the NP sector is driven by NP-owned facilities connected to a hospital or health authority, or that had more than one site of operation.
abstract_id: PUBMED:15738489
Staffing levels in not-for-profit and for-profit long-term care facilities: does type of ownership matter? Background: Currently there is a lot of debate about the advantages and disadvantages of for-profit health care delivery. We examined staffing ratios for direct-care and support staff in publicly funded not-for-profit and for-profit nursing homes in British Columbia.
Methods: We obtained staffing data for 167 long-term care facilities and linked these to the type of facility and ownership of the facility. All staff were members of the same bargaining association and received identical wages in both not-for-profit and for-profit facilities. Similar public funding is provided to both types of facilities, although the amounts vary by the level of functional dependence of the residents. We compared the mean number of hours per resident-day provided by direct-care staff (registered nurses, licensed practical nurses and resident care aides) and support staff (housekeeping, dietary and laundry staff) in not-for-profit versus for-profit facilities, after adjusting for facility size (number of beds) and level of care.
Results: The nursing homes included in our study comprised 76% of all such facilities in the province. Of the 167 nursing homes examined, 109 (65%) were not-for-profit and 58 (35%) were for-profit; 24% of the for-profit homes were part of a chain, and the remaining homes were owned by a single operator. The mean number of hours per resident-day was higher in the not-for-profit facilities than in the for-profit facilities for both direct-care and support staff and for all facility levels of care. Compared with for-profit ownership, not-for-profit status was associated with an estimated 0.34 more hours per resident-day (95% confidence interval [CI] 0.18-0.49, p < 0.001) provided by direct-care staff and 0.23 more hours per resident-day (95% CI 0.15-0.30, p < 0.001) provided by support staff.
Interpretation: Not-for-profit facility ownership is associated with higher staffing levels. This finding suggests that public money used to provide care to frail elderly people purchases significantly fewer direct-care and support staff hours per resident-day in for-profit long-term care facilities than in not-for-profit facilities.
abstract_id: PUBMED:36944427
The association of facility ownership with COVID-19 outbreaks in long-term care homes in British Columbia, Canada: a retrospective cohort study. Background: Long-term care (LTC) in Canada is delivered by a mix of government-, for-profit- and nonprofit-owned facilities that receive public funding to provide care, and were sites of major outbreaks during the early stages of the COVID-19 pandemic. We sought to assess whether facility ownership was associated with COVID-19 outbreaks among LTC facilities in British Columbia, Canada.
Methods: We conducted a retrospective observational study in which we linked LTC facility data, collected annually by the Office of the Seniors Advocate BC, with public health data on outbreaks. A facility outbreak was recorded when 1 or more residents tested positive for SARS-CoV-2 between Mar. 1, 2020, and Jan. 31, 2021. We used the Cox proportional hazards method to calculate the adjusted hazard ratio (HR) of the association between risk of COVID-19 outbreak and facility ownership, controlling for community incidence of COVID-19 and other facility characteristics.
Results: Overall, 94 outbreaks involved residents in 80 of 293 facilities. Compared with health authority-owned facilities, for-profit and nonprofit facilities had higher risks of COVID-19 outbreaks (adjusted HR 1.99, 95% confidence interval [CI] 1.12-3.52 and adjusted HR 1.84, 95% CI 1.00-3.36, respectively). The model adjusted for community incidence of infection (adjusted HR 1.12, 95% CI 1.07-1.17), total nursing hours per resident-day (adjusted HR 0.84, 95% CI 0.33-2.14), facility age (adjusted HR 1.01, 95% CI 1.00-1.02), number of facility beds (adjusted HR 1.20, 95% CI 1.12-1.30) and facilities with beds in shared rooms (adjusted HR 1.16, 95% CI 0.73-1.85).
Interpretation: Findings suggest that ownership of LTC facilities by health authorities in BC offered some protection against COVID-19 outbreaks. Further study is needed to unpack the underlying pathways behind this observed association.
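As a rough illustration of the analytic approach described in this abstract (not the study's data or code), the sketch below fits a Cox proportional hazards model to synthetic facility-level data with the lifelines Python library. All variable names, effect sizes, and the data-generating assumptions are invented for demonstration; only the general modelling strategy mirrors the Methods above.

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 300  # synthetic facilities
ownership = rng.choice(["health_authority", "for_profit", "nonprofit"], size=n)
df = pd.DataFrame({
    "for_profit": (ownership == "for_profit").astype(int),
    "nonprofit": (ownership == "nonprofit").astype(int),  # reference: health authority-owned
    "community_incidence": rng.gamma(2.0, 1.0, n),
    "beds_per_10": rng.normal(10.0, 3.0, n),
})

# Synthetic follow-up times: the hazard is raised for non-public ownership and
# higher community incidence, with administrative censoring at 337 days
rate = 0.002 * np.exp(0.6 * df["for_profit"] + 0.5 * df["nonprofit"]
                      + 0.1 * df["community_incidence"])
event_time = rng.exponential(1.0 / rate)
df["duration"] = np.minimum(event_time, 337.0)
df["outbreak"] = (event_time <= 337.0).astype(int)

cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="outbreak")
cph.print_summary()  # the exp(coef) column corresponds to adjusted hazard ratios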
abstract_id: PUBMED:32646822
Long-Term Care Facility Ownership and Acute Hospital Service Use in British Columbia, Canada: A Retrospective Cohort Study. Objective: Previous studies report higher hospitalization rates in for-profit compared with nonprofit long-term care facilities (LTCFs), but have not included staffing data, a major potential confounder. Our objective was to examine the effect of ownership on hospital admission rates, after adjusting for facility staffing levels and other facility and resident characteristics, in a large Canadian province (British Columbia).
Design: Retrospective cohort study.
Setting And Participants: Our cohort included individuals resident in a publicly funded LTCF in British Columbia at any time between April 1, 2012 and March 31, 2016.
Measures: Health administrative data were extracted from multiple databases, including continuing care, hospital discharge, and Minimum Data Set (MDS 2.0) assessment records. Cox extended hazards regression was used to estimate hospitalization risk associated with facility- and resident-level factors.
Results: The cohort included 49,799 residents in 304 LTCF facilities (116 publicly owned and operated, 99 for-profit, and 89 nonprofit) over the study period. Hospitalization risk was higher for residents in for-profit (adjusted hazard ratio [adjHR] 1.34; 95% confidence interval [CI] 1.29-1.38) and nonprofit (adjHR 1.37; 95% CI 1.32-1.41) facilities compared with publicly owned and operated facilities, after adjustment for staffing, facility size, urban location, resident demographics, and case mix. Within subtypes, risk was highest in single-site facilities: for-profit (adjHR 1.42; 95% CI 1.36-1.48) and nonprofit (adjHR 1.38, 95% CI 1.33-1.44).
Conclusions And Implications: This is the first Canadian study using linked health data from hospital discharge records, MDS 2.0, facility staffing, and ownership records to examine the adjusted effect of facility ownership characteristics on hospital use of LTCF residents. We found significantly lower adjHRs for hospital admission in publicly owned facilities compared with both for-profit and nonprofit facilities. Our finding that publicly owned facilities have lower hospital admission rates compared with for-profit and nonprofit facilities can help inform decision-makers faced with the challenge of optimizing care models in both nursing homes and hospitals as they build capacity to care for aging populations.
abstract_id: PUBMED:33749532
How Do Social Workers Working in Long-term Care Understand Their Roles? Using British Columbia, Canada as an Example. A common problem faced by social workers working in long-term care is that they are not given the opportunity to tell how they understand their roles and thus their roles are neither understood nor recognized by other professionals. There is a need for social workers to tell how they understand their roles so that their roles can be better understood and recognized. A research study was conducted in the province of British Columbia in Canada to explore how social workers working in long-term care understand their roles. Fourteen semi-structured interviews were conducted. Five themes were identified, including advocating for the most vulnerable, humanizing long-term care, balancing between self-determination and safety, dancing with the systems, and facilitating collaboration. The results reiterated but also supplemented the existing literature. This research study also proposes future research studies on the roles of social workers working in long-term care.
abstract_id: PUBMED:21269009
Trends in long-term care staffing by facility ownership in British Columbia, 1996 to 2006. Background: Long-term care facilities (nursing homes) in British Columbia consist of a mix of for-profit, not-for-profit non-government, and not-for-profit health-region-owned establishments. This study assesses the extent to which staffing levels have changed by facility ownership category.
Data And Methods: With data from Statistics Canada's Residential Care Facilities Survey, various types of care hours per resident-day were examined from 1996 through 2006 for the province of British Columbia. Random effects linear regression modeling was used to investigate the effect of year and ownership on total nursing hours per resident-day, adjusting for resident demographics, case mix, and facility size.
Results: From 1996 to 2006, crude mean total nursing hours per resident-day rose from 1.95 to 2.13 hours in for-profit facilities (p = 0.06); from 1.99 to 2.48 hours in not-for-profit non-government facilities (p < 0.001); and from 2.25 to 3.30 hours in not-for-profit health-region-owned facilities (p < 0.001). The adjusted rate of increase in total nursing hours per resident-day was significantly greater in not-for-profit health-region-owned facilities.
Interpretation: While total nursing hours per resident-day have increased in all facility groups, the rate of increase was greater in not-for-profit facilities operated by health authorities.
abstract_id: PUBMED:29169741
Utilization of Antibiotics in Long-Term Care Facilities in British Columbia, Canada. Background: Antibiotic use is highly prevalent in long-term care facilities (LTCFs); a resident's annual exposure to at least 1 course of antibiotic is approximately 50% to 80%. The objective of this study was to understand the extent of antibiotic use in the population of residents in British Columbia's (BC) LTCFs from 2007 to 2014.
Methods: Antibiotic prescription data for LTCF residents was extracted from the central prescription database and linked to the physician billing plan to obtain antibiotic indication. Total defined daily dose (DDD) per 1000 residents per day was calculated.
Results: Our database had 381 LTCFs with an average of nearly 24,694 residents annually and 419,036 antibiotic prescriptions. Antibiotic utilization did not change dramatically between 2007 and 2014, ranging from 39.2 in 2007 to 35.2 DDD per 1000 residents per day in 2014. Although usage of most antibiotics declined, use of moxifloxacin, amoxicillin-clavulanate, doxycycline, and amoxicillin increased significantly. The indication most frequently linked to prescription was urinary tract infection (6.58 DDD per 1000 residents per day), with nitrofurantoin, ciprofloxacin, and trimethoprim/sulfamethoxazole being the most commonly prescribed agents. This was followed closely by prescriptions for respiratory infections (5.34 DDD per 1000 residents per day), with moxifloxacin being the most commonly prescribed antibiotic, primarily for upper respiratory tract infection (URTI), whereas doxycycline is used commonly for lower respiratory tract infection. Duration of antibiotic therapy in LTCF residents has decreased significantly from 9.29 days to 7.3 days per prescription in 2014.
Conclusion: Antibiotic use in LTCFs is high relative to the general population. Our study underscores that stewardship in LTCFs should continue to focus on length of treatment, appropriate detection of urinary tract infections, and avoidance of treating URTIs with antibiotics.
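The "defined daily dose (DDD) per 1000 residents per day" metric used throughout this abstract follows the standard WHO ATC/DDD normalisation. The worked example below uses assumed dispensing figures, not numbers from the BC study, to show the arithmetic.

def ddd_per_1000_resident_days(total_grams_dispensed, who_ddd_grams, residents, days):
    # Convert the dispensed quantity into DDDs, then normalise to 1000 residents per day
    total_ddd = total_grams_dispensed / who_ddd_grams
    return total_ddd * 1000.0 / (residents * days)

# Hypothetical example: 9,000 g of oral amoxicillin (WHO DDD assumed to be 1.5 g)
# dispensed over one year to a population of 24,694 residents
print(round(ddd_per_1000_resident_days(9000, 1.5, 24694, 365), 2))  # ~0.67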
abstract_id: PUBMED:3121537
Forecasting client transitions in British Columbia's Long-Term Care Program. This article presents a model for the annual transitions of clients through various home and facility placements in a long-term care program. The model, an application of Markov chain analysis, is developed, tested, and applied to over 9,000 clients (N = 9,483) in British Columbia's Long Term Care Program (LTC) over the period 1978-1983. Results show that the model gives accurate forecasts of the progress of groups of clients from state to state in the long-term care system from time of admission until eventual death. Statistical methods are used to test the modeling hypothesis that clients' year-over-year transitions occur in constant proportions from state to state within the long-term care system. Tests are carried out by examining actual year-over-year transitions of each year's new admission cohort (1978-1983). Various subsets of the available data are analyzed and, after accounting for clear differences among annual cohorts, the most acceptable model of the actual client transition data occurred when clients were separated into male and female groups, i.e., the transition behavior of each group is describable by a different Markov model. To validate the model, we develop model estimates for the numbers of existing clients in each state of the long-term care system for the period (1981-1983) for which actual data are available. When these estimates are compared with the actual data, total weighted absolute deviations do not exceed 10 percent of actuals. Finally, we use the properties of the Markov chain probability transition matrix and simulation methods to develop three-year forecasts with prediction intervals for the distribution of the existing total clients into each state of the system. The tests, forecasts, and Markov model supplemental information are contained in a mechanized procedure suitable for a microcomputer. The procedure provides a powerful, efficient tool for decision makers planning facilities and services in response to the needs of long-term care clients.
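The forecasting idea in this abstract can be summarised in a few lines: a cohort vector is repeatedly multiplied by an annual transition probability matrix. The states and probabilities in the sketch below are illustrative placeholders, not values estimated from the British Columbia Long Term Care Program data.

import numpy as np

states = ["home_care", "intermediate_care", "extended_care", "dead"]
# Row i, column j: assumed probability of moving from state i to state j in one year
P = np.array([
    [0.70, 0.15, 0.05, 0.10],
    [0.05, 0.65, 0.20, 0.10],
    [0.00, 0.05, 0.75, 0.20],
    [0.00, 0.00, 0.00, 1.00],  # death is an absorbing state
])

cohort = np.array([800.0, 150.0, 50.0, 0.0])  # new admissions by initial placement
for year in range(1, 4):
    cohort = cohort @ P  # propagate the cohort one year forward
    print(year, dict(zip(states, np.round(cohort, 1))))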
abstract_id: PUBMED:34698584
Visit restrictions highlight crucial role of family members in long-term care: The British Columbia experience. The COVID-19 pandemic has drawn much-needed attention to long-term care in Canada. Public exposure of life in long-term care has generated tremendous public support for improvements that to a large extent focus on staffing and ageing infrastructure. However, the impact of visit restrictions on long-term care residents during the pandemic highlights the need for changes to how long-term care has traditionally viewed the role and value of family members. The Office of the Seniors Advocate of British Columbia published a report Staying Apart to Stay Safe that highlighted the impact of visit restrictions from the perspective of over 13,000 residents and family members directly affected. The results provide compelling evidence of the need for health leaders to re-evaluate current practice and embrace a more holistic role for family members of residents in long-term care.
abstract_id: PUBMED:10169492
Financing long-term care in Canada. Financial policies relating to long-term care in Canada are changing rapidly in response to demands for health care reform. This chapter focuses on the financial structure of institutional care, primarily nursing homes, in the western provinces of Alberta and British Columbia. Community-based long-term care is discussed briefly.
Answer: The question of whether ownership affects care outcomes in long-term care facilities in British Columbia, Canada, has been the subject of several studies, and the evidence suggests that ownership does indeed matter.
One study found that for-profit (FP) facilities had higher adjusted hospitalization rates for pneumonia, anemia, and dehydration compared to not-for-profit (NP) facilities, particularly those attached to a hospital or health authority, or that were multisite. However, there was no difference in mortality rates between FP and NP facilities (PUBMED:17001264). Another study reported that not-for-profit facilities had higher staffing levels than for-profit facilities, which suggests that public funding purchases significantly fewer direct-care and support staff hours per resident-day in for-profit long-term care facilities (PUBMED:15738489).
During the COVID-19 pandemic, it was observed that for-profit and nonprofit facilities had higher risks of COVID-19 outbreaks compared to health authority-owned facilities, indicating that ownership by health authorities in BC offered some protection against outbreaks (PUBMED:36944427). Additionally, a retrospective cohort study found that hospitalization risk was higher for residents in for-profit and nonprofit facilities compared with publicly owned and operated facilities, even after adjusting for staffing, facility size, urban location, resident demographics, and case mix (PUBMED:32646822).
Trends in staffing by facility ownership from 1996 to 2006 showed that while total nursing hours per resident-day increased in all facility groups, the rate of increase was greater in not-for-profit facilities operated by health authorities (PUBMED:21269009). This suggests that not-for-profit health-region-owned facilities may provide a higher level of care.
In summary, the evidence from British Columbia, Canada, indicates that ownership of long-term care facilities does matter, with not-for-profit and particularly health authority-owned facilities often associated with better care outcomes, higher staffing levels, and lower risks of hospitalization and COVID-19 outbreaks compared to for-profit facilities. |
Instruction: Does reporting of plain chest radiographs affect the immediate management of patients admitted to a medical assessment unit?
Abstracts:
abstract_id: PUBMED:12943646
Does reporting of plain chest radiographs affect the immediate management of patients admitted to a medical assessment unit? Aim: The purpose of our study was to investigate whether reporting of plain chest radiographs affects immediate management of patients admitted to a medical assessment unit.
Materials And Methods: During a 3 month period we prospectively evaluated 200 patients who had a plain chest radiograph on admission. After the post on-call ward round, an independent medical specialist registrar reviewed the notes, retrieving relevant clinical details. The plain chest films were reported independently by a trainee radiologist and consultant, reaching a consensus report.
Results: There was 93% agreement between trainee and consultant radiologists (95% CI=89-96%). Seventy percent had documented reports by the on-call medical team. There was disagreement between radiology and medical reports in 49% of reported films (95% CI=40-57%). The radiologist's report led to a direct change in the immediate management of 22 patients (11%).
Conclusion: Only 70% of films had documented reports in the clinical notes despite this being a legal requirement. Radiology reporting does cause a direct change in patient management. Chest radiographs of patients admitted to a medical admissions unit should be reported by a radiologist with the minimum of delay.
abstract_id: PUBMED:38037554
The clinical utility of immediate post-operative PACU plain film radiographs following uncomplicated open Latarjet procedure - An institutional series of consecutive patients. Background: Immediate post-operative plain film radiographs in the PACU following open Latarjet procedure are often ordered routinely. However, such radiographs consume institutional cost and time, whilst potentially exposing patients to often-unnecessary additional radiation. This study sought to evaluate whether routine immediate post-operative radiographs following uncomplicated open Latarjet procedures impacted clinical decision-making in our institution.
Methods: From 2017 to 2020, patients who underwent open Latarjet procedure by one of four fellowship-trained upper limb surgeons at a single institution were included in this study. Post-operative radiographs taken immediately in PACU were reviewed to determine if any reported radiographic findings impacted on clinical decision-making in the immediate post-operative setting. SPSS was used for descriptive statistics.
Results: A total of 337 patients underwent an X-ray in the PACU immediately after uncomplicated open Latarjet procedure. Overall, 98.5% were male (n = 332), and the mean age of included patients was 22.9 ± 4.2 years. No patient had an abnormal finding on their post-operative x-ray. Two patients returned to the operating room in the immediate post-operative period, both requiring washout and debridement due to haematoma or superficial wound infection.
Conclusion: The findings of this study suggest that the use of post-operative plain films in PACU following open Latarjet procedure remains a costly use of resources, with little ultimate impact on clinical decision making in the short-term post-operatively.
Level Of Evidence: IV - Institutional Case Series of Consecutive Patients.
abstract_id: PUBMED:37671823
Associating a standardized reporting tool for chest radiographs with clinical complications in pediatric acute chest syndrome. Background: Acute chest syndrome (ACS) is an important cause of morbidity in sickle cell disease (SCD). A standardized tool for reporting chest radiographs in pediatric SCD patients did not previously exist.
Objective: To analyze the interobserver agreement among pediatric radiologists' interpretations for pediatric ACS chest radiographs utilizing a standardized reporting tool. We also explored the association of radiographic findings with ACS complications.
Methods: This was a retrospective cohort study of pediatric ACS admissions from a single institution in 2019. ICD-10 codes identified 127 ACS admissions. Two radiologists independently interpreted the chest radiographs utilizing a standardized reporting tool, a third radiologist adjudicated discrepancies, and κ analysis assessed interobserver agreement. Clinical outcomes were correlated with chest radiograph findings utilizing Pearson's χ2, t tests, and Mann-Whitney U tests. Odds ratios (ORs) with 95% confidence intervals (CIs) were calculated.
Results: Interobserver agreement was moderate to near-perfect across variables, with κ analysis showing near-perfect agreement for opacity reported in the right upper lobe (0.84), substantial agreement for right lower lobe (0.63), and vertebral bony changes (0.72), with moderate agreement for all other reported variables. On the initial chest radiograph, an opacity located in the left lower lobe (LLL) correlated with pediatric intensive care unit transfer (p = .03). Pleural effusion on the initial chest radiograph had a 3.98 OR (95% CI: 1.35-11.74) of requiring blood products and a 10.67 OR (95% CI: 3.62-31.39) for noninvasive ventilation.
Conclusion: The standardized reporting tool showed moderate to near-perfect agreement between radiologists. LLL opacity, and pleural effusion were associated with increased risk of ACS complications.
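The odds ratios with 95% confidence intervals reported above are the usual 2x2-table estimates. The sketch below shows that calculation with hypothetical cell counts; the underlying counts are not given in the abstract, so the numbers here are assumptions for illustration only.

import math

def odds_ratio_with_ci(a, b, c, d, z=1.96):
    # a: exposed with outcome, b: exposed without, c: unexposed with, d: unexposed without
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # Wald standard error on the log scale
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, (lower, upper)

# Hypothetical 2x2 table: pleural effusion (exposed) versus no effusion,
# cross-classified by whether blood products were required
print(odds_ratio_with_ci(a=12, b=18, c=15, d=82))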
abstract_id: PUBMED:37756693
Utility of routine chest radiographs after chest drain removal in paediatric cardiac surgical patients-a retrospective analysis of 1076 patients. Objectives: Chest drains are routinely placed in children following cardiac surgery. The purpose of this study was to determine the incidence of a clinically relevant pneumothorax and/or pleural effusion after drain removal and to ascertain if a chest radiograph can be safely avoided following chest drain removal.
Methods: This single-centre retrospective cohort study included all patients under 18 years of age who underwent cardiac surgery between January 2015 and December 2019 with the insertion of mediastinal and/or pleural drains. Exclusion criteria were chest drain/s in situ ≥14 days and mortality prior to removal of chest drain/s. A drain removal episode was defined as the removal of ≥1 drains during the same episode of analgesia ± sedation. All chest drains were removed using a standard protocol. Chest radiographs following chest drain removal were reviewed by 2 investigators.
Results: In all, 1076 patients were identified (median age: 292 days, median weight: 7.8 kg). There were 1587 drain removal episodes involving 2365 drains [mediastinal (n = 1347), right pleural (n = 598), left pleural (n = 420)]. Chest radiographs were performed after 1301 drain removal episodes [mediastinal (n = 1062); right pleural (n = 597); left pleural (n = 420)]. Chest radiographs were abnormal after 152 (12%) drain removal episodes [pneumothorax (n = 43), pleural effusion (n = 98), hydropneumothorax (n = 11)]. Symptoms/signs were present in 30 (2.3%) patients. Eleven (<1%) required medical management. One required reintubation and 2 required chest drain reinsertion.
Conclusions: The incidence of clinically significant pneumothorax/pleural effusion following chest drain removal after paediatric cardiac surgery is low (<1%). Most patients did not require reinsertion of a chest drain. It is reasonable not to perform routine chest radiographs following chest drain removal in most paediatric cardiac surgical patients.
abstract_id: PUBMED:34143230
Reporting errors in plain radiographs for lower limb trauma-a systematic review and meta-analysis. Introduction: Plain radiographs are a globally ubiquitous means of investigation for injuries to the musculoskeletal system. Despite this, initial interpretation remains a challenge and inaccuracies give rise to adverse sequelae for patients and healthcare providers alike. This study sought to address the limited, existing meta-analytic research on the initial reporting of radiographs for skeletal trauma, with specific regard to diagnostic accuracy of the most commonly injured region of the appendicular skeleton, the lower limb.
Method: A prospectively registered, systematic review and meta-analysis was performed using published research from the major clinical-science databases. Studies identified as appropriate for inclusion underwent methodological quality and risk of bias analysis. Meta-analysis was then performed to establish summary rates for specificity and sensitivity of diagnostic accuracy, including covariates by anatomical site, using HSROC and bivariate models.
Results: A total of 3887 articles were screened, with 10 identified as suitable for analysis based on the eligibility criteria. Sensitivity and specificity across the studies were 93.5% and 89.7% respectively. Compared with other anatomical subdivisions, interpretation of ankle radiographs yielded the highest sensitivity and specificity, with values of 98.1% and 94.6% respectively, and a diagnostic odds ratio of 929.97.
Conclusion: Interpretation of lower limb skeletal radiographs operates at a reasonably high degree of sensitivity and specificity. However, one in twenty true positives is missed on initial radiographic interpretation and safety netting systems need to be established to address this. Virtual fracture clinic reviews and teleradiology services in conjunction with novel technology will likely be crucial in these circumstances.
abstract_id: PUBMED:24792315
Patterns of pulmonary vascularization on plain-film chest X-rays. Plain chest films are a fundamental tool in the practice of medicine. The apparent simplicity of plain chest films sometimes leads us to forget that interpreting them correctly can provide very valuable information, especially if the interpretation is grounded in key clinical information. To interpret a plain chest film, it is important to pay attention to the pulmonary vascularization. This article reviews the normal shape and distribution of the pulmonary vessels on plain chest films and the most common pathologic vascular patterns, including those seen in pulmonary hypertension, hyperemia, hypovascularization, and alternative perfusion.
abstract_id: PUBMED:37876410
Comparison of Sacroiliitis Grade Readings on the Same Plain Radiographs by the Same Observer at Different Periods. Background: This study aimed to investigate whether there is a difference between the readings of plain sacroiliac radiographs of patients with sacroiliitis by the same observer.
Materials And Methods: In the study, we included patients diagnosed with sacroiliitis through sacroiliac MRI who had undergone plain radiographs at our center between 2015 and 2022. The radiographic grading of patients was conducted by transferring their demographic and clinical information into a computerized environment so that these details would not be identifiable. The plain radiographs were numbered, and the responses were graded as grade 0, 1, 2, 3, or 4 for the right and left sacroiliac joints. The next day, using the same procedure, the same clinician re-evaluated the same plain radiographs in a different order without viewing the previous responses. This method was employed to prevent bias. The results (kappa value) were evaluated (0.00-0.20: slight agreement, 0.21-0.40: fair agreement, 0.41-0.60: moderate agreement, 0.61-0.80: substantial agreement, 0.81-1.00: perfect agreement).
Results: The study population included 478 patients and 956 sacroiliac joints from plain radiographs, both on the right and left. After the observer classified the sacroiliac joints into grades 0, 1, 2, 3, and 4, a moderate level of agreement was found between this reading and the same observer's second evaluation one day later (p<0.001, kappa: 0.576). When categorized as grade 0-1 and grade 2-4, there was moderate agreement (p<0.001, kappa: 0.519), and categorization into grades 0-2 and 3-4 showed substantial agreement (p<0.001, kappa: 0.715). Analyzing the categorization into grades 0-3 and grade 4 revealed a higher kappa value, indicating substantial agreement (p<0.001, kappa: 0.766).
Conclusion: Intraobserver interpretation of radiographs may be more accurate than the interpretation of different specialists. While interpreting plain radiographs, we observed variability between adjacent grades but less variability between distant grades. However, these results need to be validated.
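Both this abstract and the acute chest syndrome study above quantify reader agreement with Cohen's kappa. The minimal sketch below computes an unweighted kappa for two invented grading sequences and applies the interpretation bands listed in the Materials and Methods; a weighted kappa (for example with quadratic weights) would arguably suit ordinal grades better, but the unweighted statistic matches the bands quoted above.

from sklearn.metrics import cohen_kappa_score

first_read  = [0, 1, 2, 2, 3, 4, 1, 0, 2, 3]   # invented grades, first session
second_read = [0, 1, 2, 3, 3, 4, 2, 0, 2, 3]   # invented grades, next day

kappa = cohen_kappa_score(first_read, second_read)

def interpret(k):
    # Bands quoted in the abstract above
    if k <= 0.20:
        return "slight agreement"
    if k <= 0.40:
        return "fair agreement"
    if k <= 0.60:
        return "moderate agreement"
    if k <= 0.80:
        return "substantial agreement"
    return "perfect agreement"

print(round(kappa, 3), interpret(kappa))  # ~0.744, "substantial agreement" for these invented grades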
abstract_id: PUBMED:32125641
Assessment of the young adult hip joint using plain radiographs. Radiographic examination remains the mainstay of the initial assessment of the young adult hip; however, common parameters are required to assist in the formation of accurate diagnoses and appropriate management plans. This paper aims to summarise the most important aspects of the assessment of plain radiographs performed on the young adult hip joint.
abstract_id: PUBMED:22514483
Survey of physicians concerning the use of chest radiography in the diagnosis of pneumonia in out-patients. Objective: To determine how physicians use chest radiography in the diagnosis of pneumonia in ambulatory patients.
Study Population: A convenience sample of 176 Nova Scotia family physicians and internists selected to represent all geographic areas of the province proportional to population. Study Instrument: A 35-item questionnaire covering demographics, experience with out-patients with pneumonia, use of chest radiographs to make this diagnosis and factors that were considered important in the decision to perform initial and follow-up chest radiographs. Two skill-testing questions were also included.
Results: One hundred and fourteen of 176 (64.7%) responded; 88% had treated out-patients with pneumonia in the previous three months. Fifty-seven per cent of physicians requested chest radiographs on 90% to 100% of out-patients in whom they had made a clinical diagnosis of pneumonia. These physicians were more likely to be internists and to have graduated before 1970. Factors that ranked most important in the decision to request the initial chest radiograph were clinical appearance, respiratory distress and physical findings, while age and smoking history contributed most to the decision to perform a follow-up chest radiograph.
Conclusions: There is considerable variability among physicians in requesting chest radiographs on out-patients with a clinical diagnosis of pneumonia. Physician and patient factors contribute to this variability.
abstract_id: PUBMED:24119013
Evaluation and outcomes of patients admitted to a tertiary medical assessment unit with acute chest pain of possible coronary origin. Objectives: The study aims to (i) profile clinical characteristics, risk estimates of acute coronary syndrome (ACS), use and yield of non-invasive cardiac testing, discharge diagnosis and 30-day outcomes among patients admitted with acute chest pain of possible coronary origin; and (ii) construct a risk stratification algorithm that informs management decisions.
Methods: This is a retrospective cohort study of 130 consecutive patients admitted to a tertiary hospital medical assessment unit between 24 January and 22 March 2012. Estimates of ACS risk were based on Australian guidelines and Thrombolysis in Myocardial Infarction (TIMI) scores.
Results: Patients were of mean age 61 years, 45% had known coronary artery disease (CAD), 58% presented with typical ischaemic pain, 82% had intermediate to high ACS risk and 61% underwent testing. Myocardial ischaemia was cardiologist-confirmed discharge diagnosis in 29% of patients, and was associated with known CAD, typical pain, multiple risk factors and high TIMI risk scores (P < 0.001 for all associations). Of 98 non-invasive investigations, 9% (95% CI, 5-17%) were positive for myocardial ischaemia. Major adverse event rate at 30 days was 0.8% (95% CI, <0.1-6%). An algorithm was constructed that integrates known CAD, ACS risk and TIMI scores in identifying low-risk patients capable of rapid discharge from EDs without further investigation, and classifying the remainder into risk groups that informs choice of investigations and need for telemetry.
Conclusions: In patients with indeterminate chest pain, clinical features and risk scores identify most with myocardial ischaemia. An algorithm is presented that might inform triaging, early discharge, choice of testing and need for telemetry.
Answer: Yes, reporting of plain chest radiographs does affect the immediate management of patients admitted to a medical assessment unit. A study found that the radiologist's report led to a direct change in the immediate management of 22 patients (11%) out of 200 evaluated, indicating that radiology reporting can cause a direct change in patient management (PUBMED:12943646). |
Instruction: Are advanced three-dimensional imaging studies always needed to measure the coronal knee alignment of the lower extremity?
Abstracts:
abstract_id: PUBMED:27844117
Are advanced three-dimensional imaging studies always needed to measure the coronal knee alignment of the lower extremity? Background: Coronal malalignment of the lower extremity is closely related to the onset and progression of osteoarthritis. Restoring satisfactory alignment after tibial osteotomy improves the long-term success of this conservative surgery. The purpose of our study was to determine (1) if there is a difference between two-dimensional (2D) and 3D measurements of the hip-knee-ankle (HKA) angle between the mechanical axes of the femur and the tibia, (2) which parameter most affects 2D-3D HKA measurement, and (3) the percentage of patients who are at risk of error in HKA measurement.
Methods: We reviewed imaging studies of the consecutive patients referred to us for hip or knee pain between June and October 2013. Patients with previous pelvis or lower extremity surgery were excluded.
Results: In 51% (95/186) of lower extremities examined, the 3D method showed more valgus than the 2D method, and in 49% (91/186), the 3D method showed more varus. In 12% of extremities (23/186), the knee varus or valgus alignment was completely opposite in 3D images compared to 2D images. Having more than 7° of flexum/recurvatum alignment increased the error in 2D HKA measurement by 5.7°. The measurement error was also estimated to increase by 0.15° per 1° increase in femoral torsion and by 0.05° per 1° increase in tibial torsion. Approximately 20% of patients might be at risk of error in HKA angle measurement in 2D imaging studies.
Conclusions: Orthopaedic surgeons should assess lower extremity alignment in standing position, with enough exposure of the extremity to find severe alignment or rotational deformities, and consider advanced 3D images of those patients who have them. Otherwise, HKA angle can be measured with good accuracy with 2D techniques.
Level Of Evidence: Level-III diagnostic.
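Purely as an illustration, the per-degree error contributions reported in this abstract can be read as a rough additive model. The study does not present its findings as the formula below, so this is a back-of-the-envelope approximation of when a 2D HKA measurement might be unreliable, with all combination rules assumed rather than taken from the paper.

def estimated_2d_hka_error_deg(femoral_torsion_deg, tibial_torsion_deg, sagittal_deformity_deg):
    # Assumed additive combination of the contributions reported above:
    # 0.15 deg per 1 deg femoral torsion, 0.05 deg per 1 deg tibial torsion,
    # plus 5.7 deg when flexum/recurvatum exceeds 7 deg
    error = 0.15 * femoral_torsion_deg + 0.05 * tibial_torsion_deg
    if abs(sagittal_deformity_deg) > 7:
        error += 5.7
    return error

# A knee with 20 deg femoral torsion, 30 deg tibial torsion and 10 deg of flexum
print(estimated_2d_hka_error_deg(20, 30, 10))  # 0.15*20 + 0.05*30 + 5.7 = 10.2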
abstract_id: PUBMED:30126713
Relationship Between Coronal Alignment and Rotational Profile of Lower Extremity in Patients With Knee Osteoarthritis. Background: We aimed at determining whether the coronal alignment of lower extremity was related to rotational geometry of distal femur, femoral anteversion, and tibial torsion in patients with knee osteoarthritis.
Methods: A total of 422 lower extremities were divided into 3 groups according to the coronal alignment: valgus (n = 31), neutral (n = 78), and varus group (n = 313). Condylar twisting angle was measured to determine rotational geometry of distal femur as the angle between the clinical transepicondylar axis and the posterior condylar line. Femoral anteversion was assessed using the angle between a line intersecting the femoral neck and the posterior condylar line (pFeAV) and the angle between the same line and transepicondylar axis that is not affected by posterior condylar variations (tFeAV). Tibial torsion was evaluated by measuring the angle between the posterior condyles of the proximal tibia and the transmalleolar axis.
Results: As the coronal alignment changed from varus to valgus, the condylar twisting angle increased (r = 0.253, P < .001; 6.6° in varus, 7.4° in neutral, and 10.2° in valgus group). Although the pFeAV also increased (r = 0.145, P = .003), the tFeAV did not change significantly (P = .218). Mean tFeAV was 4.3° in varus, 4.7° in neutral, and 6.5° in valgus group. In contrast, as the coronal alignment changed from varus to valgus, the external tibial torsion increased (r = 0.374, P < .001; 22.6° in varus, 26.3° in neutral, and 32.6° in valgus group).
Conclusion: The change patterns of the rotational profiles of the lower extremity according to the coronal alignment should be considered in order to obtain satisfactory rotational alignment after TKA.
abstract_id: PUBMED:29150745
Alignment in the transverse plane, but not sagittal or coronal plane, affects the risk of recurrent patella dislocation. Purpose: Abnormalities of lower extremity alignment (LEA) in recurrent patella dislocation (RPD) have been studied mostly by two-dimensional (2D) procedures leaving three-dimensional (3D) factors unknown. This study aimed to three-dimensionally examine risk factors for RPD in lower extremity alignment under the weight-bearing conditions.
Methods: The alignment of 21 limbs in 15 RPD subjects was compared to the alignment of 24 limbs of 12 healthy young control subjects by an our previously reported 2D-3D image-matching technique. The sagittal, coronal, and transverse alignment in full extension as well as the torsional position of the femur (anteversion) and tibia (tibial torsion) under weight-bearing standing conditions were assessed by our previously reported 3D technique. The correlations between lower extremity alignment and RPD were assessed using multiple logistic regression analysis. The difference of lower extremity alignment in RPD between under the weight-bearing conditions and under the non-weight-bearing conditions was assessed.
Results: In the sagittal and coronal planes, there was no relationship (statistically or by clinically important difference) between lower extremity alignment angle and RPD. However, in the transverse plane, increased external tibial rotation [odds ratio (OR) 1.819; 95% confidence interval (CI) 1.282-2.581], increased femoral anteversion (OR 1.183; 95% CI 1.029-1.360), and increased external tibial torsion (OR 0.880; 95% CI 0.782-0.991) were all correlated with RPD. The tibia was more rotated relative to femur at the knee joint in the RPD group under the weight-bearing conditions compared to under the non-weight-bearing conditions (p < 0.05).
Conclusions: This study showed that during weight-bearing, alignment parameters in the transverse plane related to the risk of RPD, while in the sagittal and coronal plane alignment parameters did not correlate with RPD. The clinical importance of this study is that the 3D measurements more directly, precisely, and sensitively detect rotational parameters associated with RPD and hence predict risk of RPD.
Level Of Evidence: III.
abstract_id: PUBMED:32680484
The effect of knee joint rotation in the sagittal and axial plane on the measurement accuracy of coronal alignment of the lower limb. Background: Although the measurement of coronal alignment of the lower limb on conventional full-length weight-bearing anteroposterior (FLWAP) radiographs was reported to be influenced by the knee joint rotation, no comparative analysis was performed considering the effects of knee joint rotation on the sagittal and axial planes simultaneously using the three-dimensional images while taking into account the actual weight-bearing conditions. The aim of this study was to investigate the effect of knee joint rotation on the measurement accuracy of coronal alignment of the lower limb on the FLWAP radiograph.
Methods: Radiographic images of 90 consecutive patients (180 lower limbs) who took both the FLWAP radiograph and the EOS image were retrospectively reviewed. The relationship among delta values of mechanical tibiofemoral angle (mTFA) between the FLWAP radiographs and the EOS images (ΔmTFA), knee flexion/extension angle (sagittal plane rotation) on the EOS images, and patellar rotation (axial plane rotation) on the FLWAP radiographs were analyzed. Further, subgroup analysis according to each direction of knee joint rotation was performed.
Results: There was a significant correlation between ΔmTFA and sagittal plane rotation (r = 0.368, P < 0.001), whereas axial plane rotation was not correlated. In the analysis according to the direction, statistically significant correlation was observed only in the knee flexion group (r = 0.399, P < 0.001). The regression analysis showed a significant linear relationship between ΔmTFA and sagittal plane rotation (r2 = 0.136, P < 0.001). Additional subgroup analysis in patients with the patellar rotation greater than 3% showed a similar result of a linear relationship between ΔmTFA and sagittal plane rotation (r2 = 0.257, P < 0.001), whereas no statistically significant relationship was found in patients with the patellar rotation less than 3%.
Conclusion: The measurement accuracy of coronal alignment of the lower limb on the FLWAP radiographs would be influenced by knee flexion, specifically when there is any subtle rotation of the knee joint in the axial plane. A strict patellar forward position without axial plane rotation of the knee could provide accurate results of the measurement even if there is a fixed flexion contracture of the knee.
abstract_id: PUBMED:36577972
Factor affecting the discrepancy in the coronal alignment of the lower limb between the standing and supine radiographs. Background: Conflicting results have been reported regarding the factors that can predict the discrepancy in the coronal alignment of the lower limb between radiographs taken in the standing and supine positions. Therefore, this study aimed to investigate factors that can predict discrepancies in the coronal alignment of the lower limb between radiographs taken in the standing and supine positions.
Methods: We retrospectively evaluated the medical records of patients who underwent full-length anteroposterior radiographs of the lower limb in both standing and supine positions between January 2019 and September 2021. The discrepancy in the coronal alignment of the lower limb between the standing and supine radiographs was defined as the absolute value of the difference in the hip-knee-ankle (HKA) angle between the two radiographs, which is presented as the ΔHKA angle. Correlation and regression analyses were performed to analyse the relationship among ΔHKA angle, demographic data, and several radiographic parameters.
Results: In total, 147 limbs (94 patients) were included in this study. The mean ΔHKA angle was 1.3 ± 1.1° (range, 0-6.5°). The ΔHKA angle was significantly correlated with body mass index and several radiographic parameters, including the HKA angle, joint line convergence angle, and osteoarthritis grade. Subsequent multiple linear regression analysis was performed using the radiographic parameters measured on the supine radiographs with the two separate models from the two observers, which revealed that body mass index and advanced osteoarthritis (Kellgren-Lawrence grades 3 and 4) had a positive correlation with the ΔHKA angle.
Conclusions: Body mass index and advanced osteoarthritis affected the discrepancy in the coronal alignment of the lower limb between standing and supine radiographs. A discrepancy in the coronal alignment of the lower limb could be more prominent in patients with an increased body mass index and advanced osteoarthritis, corresponding to Kellgren-Lawrence grades 3 and 4.
abstract_id: PUBMED:35794578
Using short knee radiographs to predict the coronal alignment after TKA: Is it an accurate proxy for HKA on full-length images? Background: The postoperative clinical outcomes has been extensively demonstrated to correlate with the coronal alignment after total knee arthroplasty (TKA). However, in different studies, either the hip-knee-ankle angle (HKA) on a full-length radiograph or the femorotibial angle (FTA) on a short knee film was used to categorize the postoperative coronal alignment. Meanwhile, several different FTA ranges were regarded as neutral alignment in different studies. As a result, it is still unknown that how FTA on short knee films and HKA related to each other. The FTA may be able to become an accurate proxy of HKA to predict the coronal alignment. The purpose of this study was to explore the correlation between the FTA and the HKA after TKA and to find the most accurate FTA range.
Methods: About 223 patients were included in this study and standard weight-bearing short knee films as well as full-length radiographs were acquired. The pre- and postoperative FTA, as well as the postoperative anatomical lateral distal femoral angle (aLDFA) and anatomical medial proximal tibial angle (aMPTA) were measured on short knee films by two orthopedic surgeons independently. On full-length films, the pre- and postoperative FTA, the pre- and postoperative HKA, as well as the postoperative mechanical lateral distal femoral angle (mLDFA) and mechanical medial proximal tibial angle (mMPTA) were also recorded by two other surgeons independently. Pearson correlation analysis was performed to compare FTA and HKA, aMTPA and mMTPA, aLDFA and mLDFA, respectively.
Results: The postoperative FTA and HKA had a good correlation (r = 0.86). Agreement was reached in 82.7%, 71.0%, and 68.2% of all patients using the three previously reported FTA ranges. When the alignment of the tibial tray and the femoral component was analyzed independently, agreement on the classification was reached in 84.1% and 57.9% of all patients, respectively.
Conclusions: On most occasions, the consistency between the FTA and the HKA in assessing the coronal limb alignment of the lower extremity and the tibial component is satisfactory. However, a postoperative full-length film is still needed to accurately evaluate the coronal alignment of the femoral component.
abstract_id: PUBMED:36435375
Three-dimensional biometrics using weight-bearing imaging shows relationship between knee and hindfoot axial alignment. Background: Existence of a relationship between knee and hindfoot alignments is commonly accepted, but not clearly proven. While studied in the coronal plane using 2D imaging, axial alignment has not been studied yet, likely requiring 3D measurements. We aimed to investigate how knee and hindfoot rotational alignments are related using 3D biometrics and modern 3D weight-bearing technologies.
Hypothesis: Hindfoot alignment is correlated with femoral and tibial torsions.
Patients And Methods: All patients who underwent both weight-bearing CT (WBCT) and low dose biplanar radiographs (LDBR) were selected in this retrospective observational study, resulting in a cohort of 157 lower limbs from 99 patients. Patients' pathologies were stratified in subgroups and those with a history of trauma or surgery affecting lower limb alignment were excluded. Foot Ankle Offset was calculated from WBCT; femoral and tibial torsions and coronal alignment were calculated from LDBR, respectively.
Results: Overall, mean Foot Ankle Offset was 1.56% (SD 7.4), mean femoral anteversion was 15.6° (SD 9.5), and mean external tibial torsion was 32.6° (SD 7.6). Moderate negative correlation between Tibial Torsion and Foot Ankle Offset was found in the whole series (rho=-0.23, p=0.003) and for non-pathologic patients (rho=-0.27, p=0.01). Linear models to estimate Tibial Torsion with Foot Ankle Offset and conversely were found, with a low adjusted R2 (3%<R2<7%). No relationship was found between FAO and femoral torsion.
Discussion: External tibial rotation was associated with varus hindfoot configuration in the group without pathologies, suggesting that compensatory mechanisms may occur between knee and hindfoot alignments. In pathological cases, however, the same relationship wasn't found, raising concerns about compensatory failure in spite of the numbers available. We didn't find similar correlations with the femur possibly because the hip has a degree of liberty in the axial plane.
Level Of Evidence: III, retrospective comparative study.
abstract_id: PUBMED:38425076
Research progress of lower extremity alignment in total knee arthroplasty. Knee osteoarthritis has become one of the most common diseases of the elderly, and total knee arthroplasty (TKA) is currently the most effective treatment for end-stage knee osteoarthritis. In TKA, effective restoration of lower extremity alignment is one of the key factors for the success of the operation and greatly affects patients' postoperative clinical outcomes and prosthesis survival. Mechanical alignment was the first alignment method to be proposed, recognized, and widely used in TKA. In recent years, with in-depth research on lower limb alignment and the rapid development of computer technology, alignment techniques in TKA have shifted from "unified" to "individualized" and from two-dimensional to three-dimensional. New alignment methods, such as adjusted mechanical alignment, anatomic alignment, kinematic alignment, inverse kinematic alignment, restricted kinematic alignment and functional alignment, have been proposed to provide surgeons with more choices. However, there is no consensus on which alignment method is the best choice. This paper summarizes the current research status and the advantages and disadvantages of the various alignment methods in TKA, aiming to provide a reference for the selection of alignment methods in TKA.
abstract_id: PUBMED:36646170
Changes in coronal knee-alignment parameters during the osteoarthritis process in the varus knee. Objectives: The idea to aim for an "individualized" alignment, whereby the constitutional alignment is restored, has gained much interest among knee surgeons. This requires insight into the prediseased, natural alignment of our patients' knees. The aim of this study is (1) to determine how the hip-knee-ankle (HKA) angle is influenced during the arthritic process and (2) to investigate the correlation between joint line changes and the progression of osteoarthritis (OA). It is our hypothesis that the most pronounced coronal parameter changes appear at the proximal tibia and at the joint line.
Methods: One hundred sequential full-length X-rays with a minimum follow-up of 1 year were retrospectively reviewed from a radiographic joint database. Patients had to be at least 50 years of age and have an HKA angle of more than 1.3° varus to be included. Patients with ipsilateral total hip arthroplasty, femoral or tibial fracture, osteotomy, or ligamentous repair were excluded. Fifteen alignment parameters were investigated on the sequential full-length X-rays. Moreover, the relationship between the alignment parameters and the Kellgren-Lawrence grade (KL grade) was determined by using linear mixed models.
Results: A progressive KL grade is associated with an increase of the HKA angle (p < 0.001). The change in HKA is mostly due to a decrease of the medial proximal tibial angle (MPTA) (0.93°) and an increase of the joint line convergence angle (JLCA) (0.86°). The mLDFA demonstrated the most pronounced changes at the beginning of OA (KL grade 1-2) (p = 0.049). In particular, the MPTA becomes considerably smaller (p = 0.004) in the later stage of OA (KL grade 3). Also, a progressive increase of the JLCA (p < 0.001) is observed upwards of KL grade 3.
Conclusion: By comparing consecutive full-length X-rays in the same patients, it is possible to define the coronal alignment changes during the arthritic process. The HKA angle increases as the arthritis progresses, whereby the most pronounced changes appear at the proximal tibia (MPTA) and at the joint line (JLCA). The alignment changes in varus OA knees can be divided into three stages: (1) erosion of the distal medial femoral condyle, (2) erosion of the medial tibial plateau, and (3) a progressive increase of the joint line angle.
Level Of Evidence: Therapeutic Study, Level III.
abstract_id: PUBMED:38293728
Definition of normal, neutral, deviant and aberrant coronal knee alignment for total knee arthroplasty. Purpose: One of the most pertinent questions in total knee arthroplasty (TKA) is: what could be considered normal coronal alignment? This study aims to define normal, neutral, deviant and aberrant coronal alignment using large data from a computed tomography (CT)-scan database and previously published phenotypes.
Methods: Coronal alignment parameters from 11,191 knee osteoarthritis (OA) patients were measured based on three dimensional reconstructed CT data using a validated planning software. Based on these measurements, patients' coronal alignment was phenotyped according to the functional knee phenotype concept. These phenotypes represent an alignment variation of the overall hip knee ankle angle (HKA), femoral mechanical angle (FMA) and tibial mechanical angle (TMA). Each phenotype is defined by a specific mean and covers a range of ±1.5° from this mean. Coronal alignment is classified as normal, neutral, deviant and aberrant based on distribution frequency. Mean values and distribution among the phenotypes are presented and compared between two populations (OA patients in this study and non-OA patients from a previously published study).
Results: The arithmetic HKA (aHKA), which combines the normalised FMA and TMA data, showed that 36.0% of knees were neutral within ±1 SD from the mean in both angles, 44.3% had either a TMA or an FMA within ±1-2 SD (normally aligned), 15.3% of the patients were deviant within ±2-3 SD and only 4.4% had an aberrant alignment (±3-4 SD in 3.4% and >4 SD in 1.0% of the patients, respectively). However, combining the normalised data of HKA, FMA and TMA, 15.4% of patients were neutral in all three angles, 39.7% were at least normal, 27.7% had at least one deviant angle and 17.2% had at least one aberrant angle. For HKA, the males exhibited 1° varus and the females were neutral. For FMA, the females exhibited on average 0.7° more valgus than the males, with each category growing by 1.8° (females) and 2.1° (males). For TMA, the males exhibited 1.3° more varus than the females, with each category growing by 2.3° (males) and 2.4° (females). Normal coronal alignment was 179.2° ± 2.8-5.6° (males) and 180.5° ± 2.8-5.6° (females) for HKA, 93.1° ± 2.1-4.2° (males) and 93.8° ± 1.8-3.6° (females) for FMA, and 86.7° ± 2.3-4.6° (males) and 88° ± 2.4-4.8° (females) for TMA. This means an HKA between 6.4° varus and 4.8° valgus (males) or between 5.1° varus and 6.1° valgus (females) was considered normal; an FMA between 1.1° varus and 7.3° valgus (males) or between 0.2° valgus and 7.4° valgus (females) was considered normal; and a TMA between 7.9° varus and 1.3° valgus (males) or between 6.8° varus and 2.8° valgus (females) was considered normal. Aberrant coronal alignment started from 179.2° ± 8.4° (males) and 180.5° ± 8.4° (females) for HKA, 93.1° ± 6.3° (males) and 93.8° ± 5.4° (females) for FMA, and 86.7° ± 6.9° (males) and 88° ± 7.2° (females) for TMA. This means an HKA beyond 9.2° varus or 7.6° valgus (males), or beyond 7.9° varus or 8.9° valgus (females), was considered aberrant.
Conclusion: Definitions of neutrality, normality, deviance as well as aberrance for coronal alignment in TKA were proposed in this study according to their distribution frequencies. This can be seen as an important first step towards a safe transition from the conventional one-size-fits-all to a more personalised coronal alignment target. There should be further definitions combining bony alignment, joint surfaces' morphology, soft tissue laxities and joint kinematics.
Level Of Evidence: III.
Answer: Advanced three-dimensional (3D) imaging studies are not always needed to measure the coronal knee alignment of the lower extremity. Two-dimensional (2D) techniques can measure the hip-knee-ankle (HKA) angle with good accuracy in most cases. However, there are certain circumstances where 3D imaging may be more appropriate. For instance, a study found that in 12% of extremities examined, the knee varus or valgus alignment was completely opposite in 3D images compared to 2D images, and approximately 20% of patients might be at risk of error in HKA angle measurement in 2D imaging studies (PUBMED:27844117). This suggests that for patients with more than 7° of flexum/recurvatum alignment, which increases error in 2D HKA measurement, or those with severe alignment or rotational deformities, 3D imaging should be considered.
Additionally, the relationship between coronal alignment and rotational profile of the lower extremity in patients with knee osteoarthritis indicates that changes in the rotational profiles according to coronal alignment should be considered to obtain satisfactory rotational alignment after total knee arthroplasty (TKA) (PUBMED:30126713). Furthermore, alignment in the transverse plane, but not sagittal or coronal plane, affects the risk of recurrent patella dislocation, highlighting the importance of 3D measurements for detecting rotational parameters associated with this condition (PUBMED:29150745).
The effect of knee joint rotation in the sagittal and axial plane on the measurement accuracy of coronal alignment also suggests that knee flexion can influence the accuracy of 2D measurements, and a strict patellar forward position without axial plane rotation of the knee could provide accurate results even if there is a fixed flexion contracture of the knee (PUBMED:32680484).
In summary, while 2D imaging is generally accurate for measuring coronal knee alignment, 3D imaging may be necessary in cases with significant rotational deformities, flexum/recurvatum alignment, or when precise rotational alignment is critical, such as in TKA or when assessing the risk of recurrent patella dislocation.
Instruction: Screening of illegal intracorporeal containers ("body packing"): is abdominal radiography sufficiently accurate?
Abstracts:
abstract_id: PUBMED:22952384
Screening of illegal intracorporeal containers ("body packing"): is abdominal radiography sufficiently accurate? A comparative study with low-dose CT. Purpose: To evaluate the diagnostic performance of abdominal radiography in the detection of illegal intracorporeal containers (hereafter, packets), with low-dose computed tomography (CT) as the reference standard.
Materials And Methods: This study was approved by the institutional ethical review board, with written informed consent. From July 2007 to July 2010, 330 people (296 men, 34 women; mean age, 32 years [range, 18-55 years]) suspected of having ingested drug packets underwent supine abdominal radiography and low-dose CT. The presence or absence of packets at abdominal radiography was reported, with low-dose CT as the reference standard. The density and number of packets (≤ 12 or >12) at low-dose CT were recorded and analyzed to determine whether those variables influence interpretation of results at abdominal radiography.
Results: Packets were detected at low-dose CT in 53 (16%) suspects. Sensitivity of abdominal radiography for depiction of packets was 0.77 (41 of 53), and specificity was 0.96 (267 of 277). The packets appeared isoattenuated to the bowel contents at low-dose CT in 16 (30%) of the 53 suspects with positive results. Nineteen (36%) of the 53 suspects with positive low-dose CT results had fewer than 12 packets. Packets that were isoattenuated at low-dose CT and a low number of packets (≤12) were both significantly associated with false-negative results at abdominal radiography (P = .004 and P = .016, respectively).
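As a quick check of the arithmetic behind the figures above, sensitivity and specificity follow directly from the reported 2x2 counts (41 of 53 packet carriers detected; 267 of 277 non-carriers correctly negative). The short sketch below is generic and only reuses those counts.

```python
def diagnostic_accuracy(tp, fn, tn, fp):
    """Return (sensitivity, specificity) from confusion-table counts."""
    return tp / (tp + fn), tn / (tn + fp)


# Abdominal radiography vs. low-dose CT as the reference standard (PUBMED:22952384).
sens, spec = diagnostic_accuracy(tp=41, fn=53 - 41, tn=267, fp=277 - 267)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")  # 0.77, 0.96
```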
Conclusion: Abdominal radiography is mainly limited by low sensitivity when compared with low-dose CT in the screening of people suspected of carrying drug packets. Low-dose CT is an effective imaging alternative to abdominal radiography.
abstract_id: PUBMED:28441549
Plain radiography may underestimate the burden of body packer ingestion: A case report. Body packing refers to the intracorporeal concealment of illicit drugs. Here we report the case of a 55-year-old body packer who presented with palpitations, visual hallucinations, and a sense of impending death. Abdominal radiography demonstrated five ovoid foreign bodies overlying the rectum. At subsequent gastrotomy and cecotomy, thirty-eight cocaine-containing packets were retrieved from the stomach and ascending colon as well as from the rectum. As the contraband market evolves new techniques to evade detection, evaluation of the burden of body packer ingestion has become increasingly challenging. As demonstrated in this case, plain radiography can grossly underestimate the burden of ingestion.
abstract_id: PUBMED:28528792
Commentary on false negative findings of plain radiographs in body packing. The terms "body packing" and "body pushing", both encompassed by the broader expression "body packing", are still misunderstood by clinicians. "Body packing" is a general term used to indicate the internal transportation of drug packages within the gastrointestinal tract, while "body pushing" refers to the insertion of drugs into anatomical cavities or body orifices, such as the anus, the vagina, and the ears. In the present paper, we analyze and clarify some issues concerning these confusing definitions and the main reasons why some drug packages may go undetected on plain abdominal radiography, yielding important false negative findings, as in the case commented on here.
abstract_id: PUBMED:3219012
Body packing: the value of modern imaging procedures in the detection of intracorporeal transport media. The need for rapid, objective assessment of suspected intracorporeal drug smuggling (body packing) by effective methods of investigation during the initial criminal investigation measures raises the question of which methods are suitable and admissible. According to data from the Federal Criminal Investigation Department, the proportion of undetected intracorporeal narcotic smuggling must be rated as very high. In the present paper, the suitability of various imaging techniques for drug detection is assessed in terms of risk, practicability and cost. For this purpose, the value of digital radiography, two-spectra radiography and X-ray computed tomography was compared with conventional X-ray examination in human experiments. A realistic narcotics dummy (glucose pressed hard into a condom) was developed and administered per os to nine volunteers with varying initial alimentary status. Four radiograms were taken with each imaging technique at fixed times up to 12 hours after administration. The highest detection rate was attained with computed tomography: in contrast to the other methods, more than 90% of the body packs could be identified. Nevertheless, its application in criminal investigation practice cannot be recommended owing to various disadvantages. The detection rate of the remaining methods was between 20% and 25%.
abstract_id: PUBMED:32908836
Body-Packing: A Rare Diagnosis to Keep in Mind. Body packing was first described in 1973 and refers to the intracorporeal concealment of illegal drugs, which are swallowed or placed in anatomical cavities and/or body orifices. The body packer can be asymptomatic or can have signs of systemic drug toxicity (neurological, cardiac, abdominal, renal and cutaneous) due to rupture of the packet(s) or symptoms of gastrointestinal obstruction or perforation. The diagnosis is established based on a suggestive history, findings on physical examination and laboratory findings and/or imaging. The vast majority of patients are asymptomatic and are treated conservatively. However, complex situations may require surgical intervention. We present a case of a 50-year-old man who was admitted in the emergency department with a generalized tonic-clonic seizure and vomiting with plastic film, which raised the suspicion of foreign body ingestion, confirmed by imaging and laboratory tests. He underwent exploratory laparotomy to remove the packages.
Learning Points: Body packing is a potentially lethal activity. Body packers can be asymptomatic or have signs/symptoms of systemic drug toxicity or gastrointestinal obstruction or perforation. It is essential to recognize this condition so that the correct clinical approach, diagnosis and management can be established.
abstract_id: PUBMED:26932867
Systematic review of the toxicological and radiological features of body packing. Body packing is the term used for the intracorporeal concealment of illicit drugs, mainly cocaine, heroin, methamphetamine, and cannabinoids. These drugs are produced in the form of packages and are swallowed or placed in various anatomical cavities and body orifices. Based on these two modes of transportation, a distinction can be made between body stuffers and body pushers, with the former described as drug users or street dealers who usually carry small amounts of drugs and the latter as professional drug couriers who carry greater amounts. A review of the literature on body packing is presented, with the aim of highlighting the toxicological and radiological features related to this illegal practice. Raising awareness of the mean body levels of the drugs encountered and the typical imaging signs of the incorporated packages could help clinicians and forensic pathologists to (a) identify possible unrecognized cases of body packing and (b) prevent the serious health consequences and deaths that frequently occur after package leakage or rupture, or when a mass of packages obstructs the gastrointestinal lumen.
abstract_id: PUBMED:31476242
Outpatient management of individuals transporting drugs by body packing is feasible. Introduction: Drug traffickers are increasingly making use of the human body for illegal drug transport. Three ways of intracorporeal drug transport are practiced, namely "body packing", "body stuffing" and "body pushing". Since police and border guards cannot accurately detect intracorporeal drug transport, authorities require medical professionals for examination and radiological imaging. The aim of this retrospective study was to assess outcomes in all presentations of suspected intracorporeal drug transport referred to the Emergency Department (ED) of the University Hospital of Basel.
Methods: We screened the electronic health records (EHRs) of all presentations to the ED between 1 January 2008 and 31 December 2017 for combinations of keywords "body", "pack", "stuff" and "push" in the diagnosis and history of presenting complaints. All presentations with suspicion of intracorporeal drug transport were included. Patient characteristics, imaging modality and the results of imaging were assessed. Outcomes were length of stay, hospitalisation, admission to the intensive care unit, surgical intervention and mortality. The main outcome was the rate of surgical interventions during follow-up in hospital and in prison.
Results: We included 363 presentations in 347 patients. The median age was 35 years and 46 (12.7%) were female. Positive results of imaging were found in 81 of 353 (22.9%) presentations assessed by imaging. In four presentations (1.1%), the result of imaging was indeterminate; in 10 presentations, no imaging was obtained owing to lack of consent or pregnancy. We observed 36 instances of body packing, 10 of body stuffing and 15 of body pushing, and 20 mixed or indeterminate presentations. The number of suspected presentations has risen over the last decade, and the relative number of positive results has almost remained stable over the last six years. No severe or life-threatening complications, interventions, or deaths were observed. Among the presentations with positive imaging results, ten (12.3%) were observed in hospital, as compared with four (1.5%) of those with negative results.
Conclusions: Presentations have increased over the last decade while no severe complications or deaths were observed. The consistently low complication rate supports outpatient observation. Considering the ongoing discussion in media and politics, we suggest validation of medical, legal, and ethical guidelines.
abstract_id: PUBMED:26048697
Low-tube voltage 100 kVp MDCT in screening of cocaine body packing: image quality and radiation dose compared to 120 kVp MDCT. Purpose: The aim of this study was to evaluate the impact of a reduced tube potential (100 kVp) for non-enhanced abdominal low-dose CT on radiation dose and image quality (IQ) in the detection of body packing.
Methods: This retrospective study was approved by the local research ethics committee of our clinic. From March 2012 to July 2014, 99 subjects were referred to our institute with suspected body packing. 50 CT scans were performed using a 120 kVp protocol (group A), and 49 CTs were performed using a low-dose protocol with a tube voltage of 100 kVp (group B). Subjective and objective IQ were assessed. DLP and CTDIvol were analyzed.
Results: All examinations were of diagnostic IQ. Objective IQ was not significantly different between the 120 kVp and 100 kVp protocol. Mean density of solid and liquid body packets was 210 ± 60.2 HU at 120 kVp and 250.6 ± 29.7 HU at 100 kVp. Radiation dose was significantly lower in group B as compared to group A (p < 0.05). In group A, body packs were detected in 16 (32%) of the 50 patients. In group B, packets were observed in 15 (31%) of 49 patients. Laboratory analysis detected cocaine in all smuggled body packs.
Conclusions: Low-tube voltage 100 kVp MDCT with automated tube current modulation in screening of illegal drugs leads to a diagnostic IQ and significant dose reduction compared to 120 kVp low-tube voltage protocols. Despite lower radiation dose, liquid and solid cocaine containers retain high attenuation and are easily detected.
abstract_id: PUBMED:32769041
A fatal case of cannabis body-packing: About an autopsy case report. The body packing represents an illegal drug trafficking practice across the borders of certain countries. It has experienced enormous growth in recent years. The medical literature is rich in publications interested in body packaging of cocaine or heroin with sometimes lethal consequences. However, reported cases of cannabis body packing appear to be rare, sometimes underestimated, despite the notoriety of cannabis in the illegal drug market and its wide consumption around the world. We report in this work a forensic case of a cannabis body packer deceased due toa stercoral peritonitis secondary to a double perforation of thesigmoid and rectal colon.
abstract_id: PUBMED:26063072
Imaging of body packing: errors and medico-legal issues. Body packing is the ingestion or insertion into the human body of packed illicit substances. Over the last 20 years, drug smuggling has increased globally and new means of transporting narcotics have emerged. Among these, the most frequent is the gastrointestinal tract, from mouth to anus; other sites include the vagina and the ears. Cocaine is one of the most traded drugs, followed by heroin. Condoms, latex gloves, and balloons are typically used as drug packets for retention in the body. Different radiologic modalities can detect illicit drugs in body packing: plain radiography, computed tomography (CT), ultrasound, and magnetic resonance imaging. Current protocols recommend the use of radiography to confirm packet retention and, in case of doubt, the use of abdominal CT with reduced mAs. In case of packet rupture, catastrophic effects can occur. Management of patients carrying packets of drugs is a recurrent medico-legal problem. To improve diagnostic accuracy and prevent hazardous complications, radiologists and emergency physicians should be familiar with the radiologic features of body packing. The radiologist plays both a social and a medico-legal role in the assessment of these patients, and this role should not be limited to identifying the packages: accurate information about their number and exact location must also be provided. In this review, we focus on diagnostic errors and medico-legal issues related to the radiological assessment of body packers.
Answer: The screening of illegal intracorporeal containers, commonly referred to as "body packing," presents challenges in terms of detection accuracy when using abdominal radiography. Studies have shown that abdominal radiography has limitations in sensitivity when compared to low-dose computed tomography (CT). One comparative study with low-dose CT as the reference standard found that the sensitivity of abdominal radiography for the depiction of drug packets was 0.77, and specificity was 0.96. Factors such as packets being isoattenuated to the bowel contents at low-dose CT and a low number of packets (≤12) were significantly associated with false-negative results at abdominal radiography (PUBMED:22952384).
Case reports have also indicated that plain radiography can grossly underestimate the burden of body packer ingestion, with instances where a significantly higher number of packets were retrieved than initially detected by radiography (PUBMED:28441549). The use of terms "body packing" and "body pushing" is often misunderstood, and the detection of drug packages may be challenging due to various reasons, including the possibility of false-negative findings on plain abdominal radiographs (PUBMED:28528792).
Modern imaging procedures have been evaluated for their effectiveness in detecting intracorporeal transport media. Computed tomography (CT) has shown the highest rate of detection, identifying more than 90% of body packs, but its application in criminal investigation practice is not widely recommended due to various disadvantages (PUBMED:3219012).
In summary, while abdominal radiography is a commonly used method for screening suspected body packers, it is not sufficiently accurate when compared to low-dose CT, which is considered a more effective imaging alternative. However, the choice of imaging technique may also depend on the clinical context, availability of technology, and the need to balance accuracy with radiation exposure and other practical considerations (PUBMED:26932867, PUBMED:31476242, PUBMED:26048697, PUBMED:32769041, PUBMED:26063072).
Instruction: Aspirin use in middle-aged men with cardiovascular disease: are opportunities being missed?
Abstracts:
abstract_id: PUBMED:9281867
Aspirin use in middle-aged men with cardiovascular disease: are opportunities being missed? Background: Since the 1980s, clinical trial evidence has supported aspirin use in the secondary prevention of cardiovascular disease (CVD).
Aim: To explore aspirin use among British men with known CVD in a population-based study.
Method: Longitudinal study (British Regional Heart Study), in which subjects have been followed up for cardiovascular morbidity and mortality since 1978-1980. Aspirin use was assessed by questionnaires to study participants in November 1992 (Q92); cardiovascular diagnoses are based on general practice notifications to October 1992. A total of 5751 men aged 52-73 years (87% of survivors) completed questions on aspirin use.
Results: Overall, 547 men (9.5%) were taking aspirin daily, of whom 321 (59%) had documented CVD. Among men with pre-existing disease, 153 out of 345 (44%) men with myocardial infarction, 42 out of 109 (39%) with stroke, and 75 out of 247 (29%) with angina were taking aspirin daily. Among men with angina (54% versus 26%) or myocardial infarction (59% versus 42%), those who had undergone coronary artery bypass surgery (CABG) or angioplasty were more likely to be receiving aspirin. Higher rates of aspirin use were also found in those whose last major event occurred after January 1990 (47% versus 34%). There was no association between aspirin use and social class or region of residence.
Conclusion: Despite strong evidence of its effectiveness, many patients with established CVD were not receiving aspirin. Daily aspirin treatment was less likely in men with less recent major CVD events and in those who had not received invasive treatment.
abstract_id: PUBMED:18504339
Long-term cardiovascular mortality among middle-aged men with gout. Background: There are limited data available on the association of gouty arthritis (gout) in middle age with long-term cardiovascular disease (CVD) mortality.
Methods: We performed a 17-year follow-up study of 9105 men, aged 41 to 63 years and at above-average risk for coronary heart disease, who were randomized to the Multiple Risk Factor Intervention Trial and who did not die or have clinical or electrocardiographic evidence of coronary artery disease during the 6-year trial. Risk of CVD death and other causes subsequent to the sixth annual examination associated with gout was assessed by means of Cox proportional hazards regressions.
Results: The unadjusted mortality rates from CVD among those with and without gout were 10.3 per 1000 person-years and 8.0 per 1000 person-years, respectively, representing an approximately 30% greater risk. After adjustment for traditional risk factors, use of diuretics and aspirin, and serum creatinine level, the hazard ratio (gout vs no gout) for coronary heart disease mortality was 1.35 (95% confidence interval [CI], 1.06-1.72). The hazard ratio for death from myocardial infarction was 1.35 (95% CI, 0.94-1.93); for death from CVD overall, 1.21 (95% CI, 0.99-1.49); and for death from any cause, 1.09 (95% CI, 1.00-1.19) (P = .04). The association between hyperuricemia and CVD was weak and did not persist when analysis was limited to men with hyperuricemia without a diagnosis of gout.
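The "approximately 30% greater risk" quoted above is simply the ratio of the two crude mortality rates, before any adjustment (the adjusted hazard ratios reported later come from the Cox models described in the Methods). A minimal check using only the numbers in this paragraph:

```python
# Crude CVD mortality rates, per 1000 person-years, from the abstract.
rate_gout = 10.3
rate_no_gout = 8.0

rate_ratio = rate_gout / rate_no_gout
print(f"crude rate ratio = {rate_ratio:.2f} "
      f"(~{(rate_ratio - 1) * 100:.0f}% greater unadjusted risk)")  # ~1.29, ~29%
```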
Conclusion: Among middle-aged men, a diagnosis of gout accompanied by an elevated uric acid level imparts significant independent CVD mortality risk.
Trial Registration: clinicaltrials.gov Identifier: NCT00000487.
abstract_id: PUBMED:23435157
All-cause and cardiovascular mortality in middle-aged people with type 2 diabetes compared with people without diabetes in a large U.K. primary care database. Objective: Middle-aged people with diabetes have been reported to have significantly higher risks of cardiovascular events than people without diabetes. However, recent falls in cardiovascular disease rates and more active management of risk factors may have abolished the increased risk. We aimed to provide an up-to-date assessment of the relative risks associated with type 2 diabetes of all-cause and cardiovascular mortality in middle-aged people in the U.K.
Research Design And Methods: Using data from the General Practice Research Database, from 2004 to 2010, we conducted a cohort study of 87,098 people, 40-65 years of age at baseline, comparing 21,798 with type 2 diabetes and 65,300 without diabetes, matched on age, sex, and general practice. We produced hazard ratios (HRs) for mortality and compared rates of blood pressure testing, cholesterol monitoring, and use of aspirin, statins, and antihypertensive drugs.
Results: People with type 2 diabetes, compared with people without diabetes, had a twofold increased risk of all-cause mortality (HR 2.07 [95% CI 1.95-2.20], adjusted for smoking) and a threefold increased risk of cardiovascular mortality (3.25 [2.87-3.68], adjusted for smoking). Women had a higher relative risk than men, and people <55 years of age had a higher relative risk than those >55 years of age. Monitoring and medication rates were higher in those with diabetes (all P < 0.001).
Conclusions: Despite efforts to manage risk factors, administer effective treatments, and develop new therapies, middle-aged people with type 2 diabetes remain at significantly increased risk of death.
abstract_id: PUBMED:32275320
Association of Glomerular Hyperfiltration and Cardiovascular Risk in Middle-Aged Healthy Individuals. Importance: Glomerular hyperfiltration is associated with increased risk of cardiovascular disease in high-risk conditions, but its significance in low-risk individuals is uncertain.
Objective: To determine whether glomerular hyperfiltration is associated with increased cardiovascular risk in healthy individuals.
Design, Setting, And Participants: This was a prospective population-based cohort study, for which enrollment took place from August 2009 to October 2010, with follow-up available through March 31, 2016. Analysis of the data took place in October 2019. The cohort was composed of 9515 healthy individuals, defined as individuals without hypertension, diabetes, cardiovascular disease, estimated glomerular filtration rate (eGFR) less than 60 mL/min/1.73 m2, or statin and/or aspirin use, identified among 20 004 patients aged 40 to 69 years with health information accessed through the CARTaGENE research platform.
Exposures: Individuals with glomerular hyperfiltration (eGFR >95th percentile after stratification for sex and age) were compared with individuals with normal filtration rate (eGFR 25th-75th percentiles).
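Hyperfiltration here is defined relative to sex- and age-stratified percentiles rather than a fixed eGFR cut-off. The sketch below shows one generic way such a flag could be derived from tabular data; the column names, 5-year age bands and example values are assumptions for illustration, not details taken from the study.

```python
import pandas as pd


def flag_hyperfiltration(df, egfr_col="egfr", sex_col="sex", age_col="age",
                         band_years=5, percentile=0.95):
    """Flag rows whose eGFR exceeds the chosen percentile within sex/age-band strata."""
    out = df.copy()
    out["age_band"] = (out[age_col] // band_years) * band_years
    threshold = out.groupby([sex_col, "age_band"])[egfr_col].transform(
        lambda s: s.quantile(percentile))
    out["hyperfiltration"] = out[egfr_col] > threshold
    return out


# Tiny made-up example (real thresholds would come from the full cohort).
demo = pd.DataFrame({"sex": ["F", "F", "M", "M"],
                     "age": [45, 47, 52, 54],
                     "egfr": [92, 118, 88, 110]})
print(flag_hyperfiltration(demo))
```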
Main Outcomes And Measures: Adverse cardiovascular events were defined as a composite of cardiovascular mortality, myocardial infarction, unstable angina, heart failure, stroke, and transient ischemic attack. Risk of adverse cardiovascular events was assessed using Cox and fractional polynomial regressions and propensity score matching.
Results: From the 20 004 CARTaGENE participants, 9515 healthy participants (4050 [42.6%] male; median [interquartile range] age, 50.4 [45.9-55.6] years) were identified. Among these, 473 had glomerular hyperfiltration (median [interquartile range] eGFR, 112 [107-115] mL/min/1.73 m2) and 4761 had a normal filtration rate (median [interquartile range] eGFR, 92 [87-97] mL/min/1.73 m2). Compared with the normal filtration rate, glomerular hyperfiltration was associated with an increased cardiovascular risk (hazard ratio, 1.88; 95% CI, 1.30-2.74; P = .001). Findings were similar with propensity score matching. The fractional polynomial regression showed that only the highest eGFR percentiles were associated with increased cardiovascular risk. The cardiovascular risk of individuals with glomerular hyperfiltration was similar to that of the 597 participants with an eGFR between 45 and 60 mL/min/1.73 m2 (hazard ratio, 0.90; 95% CI, 0.56-1.42; P = .64).
Conclusions And Relevance: These findings suggest that glomerular hyperfiltration is independently associated with increased cardiovascular risk in middle-aged healthy individuals. This risk profile appears to be similar to stage 3a chronic kidney disease.
abstract_id: PUBMED:23478556
Cardiovascular considerations in middle-aged athletes at risk for coronary artery disease. Cardiovascular disease remains the leading cause of death in the United States despite a 50% decrease in deaths from myocardial infarction and stroke in the past 30 years associated with improvements in blood pressure and lipid control. The National Health and Nutrition Evaluation Survey found that the least prevalent metrics of cardiovascular health in adults were healthy diets, normal weights, and optimal levels of exercise. A further reduction in rates of cardiovascular disease will require an increase in exercise. Clinicians who encourage exercise in middle-aged patients face several dilemmas. This article reviews exercise-related risks for sudden death and the performance of a global cardiovascular risk assessment. The need for additional preexercise risk stratification with electrocardiogram, graded exercise testing, or echocardiography is outlined. In addition, the optimum choice of medications for hypertension or dyslipidemia treatment and the effects of these medications and aspirin on endurance exercise are reviewed.
abstract_id: PUBMED:18779292
Associations of plasma carotenoids with risk factors and biomarkers related to cardiovascular disease in middle-aged and older women. Background: Cardiovascular disease (CVD) risk factors may potentially influence plasma concentrations of carotenoids. However, data on the association of plasma carotenoids with CVD related biomarkers are only limited.
Objective: We examined the cross-sectional association of plasma carotenoids with blood lipids, glycated hemoglobin (Hb A(1c)), and C-reactive protein (CRP) in middle-aged and older women initially free of CVD and cancer.
Design: Participants from 3 nested case-control studies in the Women's Health Study were pooled. Baseline plasma carotenoids, including alpha-carotene, beta-carotene, beta-cryptoxanthin, lycopene, and lutein-zeaxanthin, blood lipids, Hb A(1c), and CRP were available for 2895 women.
Results: Women who were current smokers or obese had lower plasma concentrations of most carotenoids except for lycopene. After adjusting for age, race, lifestyle factors, clinical factors, plasma total cholesterol, and dietary carotenoids, an increase of 30 mg/dL in LDL cholesterol was associated with a 17% increase in alpha-carotene, a 16% increase in beta-carotene, and an 8.5% increase in lycopene; an increase of 10 mg/dL in HDL cholesterol was associated with a 5.3% decrease in lycopene; an increase of 0.3% in Hb A(1c) was associated with a 1.4% increase in lycopene; and an increase of 2 mg/L in CRP was associated with a 1.3% decrease in beta-carotene (all P < 0.01).
Conclusions: In middle-aged and older women free of CVD and cancer, plasma carotenoids were associated with smoking, obesity, LDL cholesterol, HDL cholesterol, Hb A(1c), and CRP. The associations differ among individual carotenoids, possibly reflecting metabolic effects of lifestyle and physiologic factors on plasma carotenoids, and may partially explain the inverse association of plasma carotenoids with CVD outcomes observed in epidemiologic studies.
abstract_id: PUBMED:37606674
Aspirin for Secondary Prevention of Cardiovascular Disease in 51 Low-, Middle-, and High-Income Countries. Importance: Aspirin is an effective and low-cost option for reducing atherosclerotic cardiovascular disease (CVD) events and improving mortality rates among individuals with established CVD. To guide efforts to mitigate the global CVD burden, there is a need to understand current levels of aspirin use for secondary prevention of CVD.
Objective: To report and evaluate aspirin use for secondary prevention of CVD across low-, middle-, and high-income countries.
Design, Setting, And Participants: Cross-sectional analysis using pooled, individual participant data from nationally representative health surveys conducted between 2013 and 2020 in 51 low-, middle-, and high-income countries. Included surveys contained data on self-reported history of CVD and aspirin use. The sample of participants included nonpregnant adults aged 40 to 69 years.
Exposures: Countries' per capita income levels and world region; individuals' socioeconomic demographics.
Main Outcomes And Measures: Self-reported use of aspirin for secondary prevention of CVD.
Results: The overall pooled sample included 124 505 individuals. The median age was 52 (IQR, 45-59) years, and 50.5% (95% CI, 49.9%-51.1%) were women. A total of 10 589 individuals had a self-reported history of CVD (8.1% [95% CI, 7.6%-8.6%]). Among individuals with a history of CVD, aspirin use for secondary prevention in the overall pooled sample was 40.3% (95% CI, 37.6%-43.0%). By income group, estimates were 16.6% (95% CI, 12.4%-21.9%) in low-income countries, 24.5% (95% CI, 20.8%-28.6%) in lower-middle-income countries, 51.1% (95% CI, 48.2%-54.0%) in upper-middle-income countries, and 65.0% (95% CI, 59.1%-70.4%) in high-income countries.
Conclusion And Relevance: Worldwide, aspirin is underused in secondary prevention, particularly in low-income countries. National health policies and health systems must develop, implement, and evaluate strategies to promote aspirin therapy.
abstract_id: PUBMED:33667470
Aspirin prescribing for cardiovascular disease in middle-aged and older adults in Ireland: Findings from The Irish Longitudinal Study on Ageing. Aspirin use for cardiovascular indications is widespread despite evidence not supporting use in patients without cardiovascular disease (CVD). This study characterises aspirin prescribing among people aged ≥50 years in Ireland for primary and secondary prevention, and factors associated with prescription. This cross-sectional study includes participants from wave 3 (2014-2015) of The Irish Longitudinal Study on Ageing. We identified participants reporting use of prescribed aspirin, other antiplatelets/anticoagulants, and doctor-diagnosed CVD (MI, angina, stroke, TIA) and other cardiovascular conditions. We examined factors associated with aspirin use for primary and secondary prevention in multivariate regression. For a subset, we also examined 10-year cardiovascular risk (using the Framingham general risk score) as a predictor of aspirin use. Among 6618 participants, the mean age was 66.9 years (SD 9.4) and 55.6% (3679) were female. Prescribed aspirin was reported by 1432 participants (21.6%), and 77.6% of aspirin users had no previous CVD. Among participants with previous CVD, 16.5% were not prescribed aspirin/another antithrombotic. This equates to 201,000 older adults nationally using aspirin for primary prevention, and 16,000 with previous CVD not prescribed an antithrombotic. Among those without CVD, older age, male sex, free health care, and more GP visits were associated with aspirin prescribing. Cardiovascular risk was significantly associated with aspirin use (adjusted relative risk 1.15, 95%CI 1.08-1.23, per 1% increase in cardiovascular risk). Almost four-fifths of people aged ≥50 years on aspirin have no previous CVD, equivalent to 201,000 adults nationally, however prescribing appears to target higher cardiovascular risk patients.
abstract_id: PUBMED:33598936
Age-Related Trajectories of Cardiovascular Risk and Use of Aspirin and Statin Among U.S. Adults Aged 50 or Older, 2011-2018. Objectives: To examine age-related trajectories of cardiovascular risk and use of aspirin and statin among U.S. adults aged 50 or older.
Design: Repeated cross-sectional study using data from 2011 to 2018 National Health and Nutrition Examination Surveys.
Setting: Nationally representative health interview survey in the United States.
Participants: Non-institutionalized adults aged 50 years and older (n = 11,392 unweighted).
Measurements: Primary prevention was defined as the prevention of a first cardiovascular event including coronary heart disease, angina/angina pectoris, heart attack, or stroke, whereas secondary prevention was defined as those with a history of these clinical conditions. Medication use was determined by self-report; aspirin use included dose and frequency, and statin use included generic names, days of prescription fills, and indications. We examined linear trends between age and each medication use, after controlling for period, sex, and race/ethnicity.
Results: Prevalence of those eligible for primary prevention treatment increased with age from 31.8% in ages 50-54 to 52.0% in ages ≥75 (p < 0.001). Similarly, those eligible for secondary prevention treatment increased with age from 2.7% in ages 50-54 to 21.1% in ages ≥75 (p < 0.001). Low-dose daily aspirin use increased with age (p < 0.001), and 45.3% of adults aged ≥75 took low-dose aspirin daily for primary prevention. Statin use also increased with age (p < 0.001), and 56.4% of adults aged ≥75 had long-term statin use for secondary prevention.
Conclusion: While adults aged ≥75 do not benefit from the use of aspirin to prevent a first CVD event, many continue to take aspirin on a regular basis. In spite of the clear benefit of statin use to prevent a subsequent CVD event, many older adults in this risk category are not taking a statin. Further education and guidance for both healthcare providers and older adults regarding the appropriate use of aspirin and statins to prevent CVD are needed.
abstract_id: PUBMED:17478266
Aspirin use among adults aged 40 and older in the United States: results of a national survey. Background: Aspirin is effective for the primary and secondary prevention of cardiovascular events, but its use has been suboptimal.
Methods: Investigators performed a nationally representative Internet-based survey of U.S. consumers aged 40 and older using online databases maintained by Harris Interactive® to measure use of aspirin for cardiovascular disease (CVD) prevention and factors associated with its use. Respondents reported whether they used aspirin therapy regularly for cardiovascular prevention and provided information about their cardiovascular risk factors, discussions with their healthcare provider about aspirin therapy, and their perceptions about risks and benefits of aspirin. Objective risk of cardiovascular events was estimated using counts of self-reported risk factors. Survey results were weighted to be representative of the general U.S. population. Researchers performed bivariate and multivariate analyses to understand factors associated with aspirin use.
Results: A total of 1299 adults aged 40 or older completed the survey. Mean age was 55.9, 53% were women, 79% self-identified as white, 10% African American, and 9% Latino. Current regular aspirin use for CVD prevention was reported by 41% of respondents. The factor most strongly associated with aspirin use was reporting a previous conversation with a healthcare provider about aspirin (88% aspirin use among respondents reporting such discussion versus 17% who did not report discussion; odds ratio 36.6, 95% confidence interval 25.9-51.7).
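The very large odds ratio reported above can be approximated from the two aspirin-use percentages alone (88% among respondents recalling a provider discussion vs. 17% among those who did not). The sketch below does exactly that; it is only an approximation, since the published estimate of 36.6 was derived from the underlying counts rather than rounded proportions.

```python
def odds_ratio_from_proportions(p_exposed, p_unexposed):
    """Approximate odds ratio comparing two groups from their event proportions."""
    return (p_exposed / (1 - p_exposed)) / (p_unexposed / (1 - p_unexposed))


# Regular aspirin use: 88% with a provider discussion vs. 17% without (reported OR 36.6).
print(f"approximate OR = {odds_ratio_from_proportions(0.88, 0.17):.1f}")  # ~35.8
```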
Conclusions: Aspirin use is low, even among patients at increased risk. Better provider-patient communication about aspirin prevention is associated with greater use, and should be a target for future interventions.
Answer: The use of aspirin in middle-aged men with cardiovascular disease (CVD) appears to be suboptimal, with opportunities for its beneficial use being missed. Despite clinical trial evidence supporting aspirin use in the secondary prevention of CVD since the 1980s, many patients with established CVD were not receiving aspirin. A study among British men with known CVD found that only 9.5% were taking aspirin daily, and among those with pre-existing disease, the percentages were 44% for myocardial infarction, 39% for stroke, and 29% for angina. Men who had undergone coronary artery bypass surgery (CABG) or angioplasty were more likely to be receiving aspirin, and higher rates of aspirin use were found in those whose last major event occurred after January 1990. There was no association between aspirin use and social class or region of residence (PUBMED:9281867).
Furthermore, aspirin use for secondary prevention of CVD is underused worldwide, particularly in low-income countries. A cross-sectional analysis using pooled data from nationally representative health surveys in 51 countries found that among individuals with a history of CVD, aspirin use for secondary prevention was 40.3% in the overall pooled sample, with lower rates in low-income countries (16.6%) and higher rates in high-income countries (65.0%) (PUBMED:37606674).
In Ireland, among people aged ≥50 years, 21.6% reported using prescribed aspirin, and 77.6% of aspirin users had no previous CVD. Among participants with previous CVD, 16.5% were not prescribed aspirin or another antithrombotic, indicating a gap in secondary prevention (PUBMED:33667470).
In the United States, a survey revealed that aspirin use for CVD prevention was reported by 41% of respondents aged 40 or older. The most significant factor associated with aspirin use was having a previous conversation with a healthcare provider about aspirin (PUBMED:17478266).
These findings suggest that despite the known benefits of aspirin for secondary prevention of CVD, there is a significant gap in its use among middle-aged men with CVD, indicating that opportunities for its use are indeed being missed.
Instruction: Can Iron Treatments Aggravate Epistaxis in Some Patients With Hereditary Hemorrhagic Telangiectasia?
Abstracts:
abstract_id: PUBMED:27107394
Can Iron Treatments Aggravate Epistaxis in Some Patients With Hereditary Hemorrhagic Telangiectasia? Objectives/hypothesis: To examine whether there is a rationale for iron treatments precipitating nosebleeds (epistaxis) in a subgroup of patients with hereditary hemorrhagic telangiectasia (HHT).
Study Design: Survey evaluation of HHT patients, and a randomized control trial in healthy volunteers.
Methods: Nosebleed severity in response to iron treatments and standard investigations were evaluated by unbiased surveys in patients with HHT. Serial blood samples from a randomized controlled trial of 18 healthy volunteers were used to examine responses to a single iron tablet (ferrous sulfate, 200 mg).
Results: Iron tablet users were more likely to have daily nosebleeds than non-iron-users as adults, but there was no difference in the proportions reporting childhood or trauma-induced nosebleeds. Although iron and blood transfusions were commonly reported to improve nosebleeds, 35 of 732 (4.8%) iron tablet users, in addition to 17 of 261 (6.5%) iron infusion users, reported that their nosebleeds were exacerbated by the respective treatments. These rates were significantly higher than those reported for control investigations. Serum iron rose sharply in four of the volunteers ingesting ferrous sulfate (by 19.3-33.1 μmol/L in 2 hours), but not in 12 dietary controls (2-hour iron increment ranged from -2.2 to +5.0 μmol/L). High iron absorbers demonstrated greater increments in serum ferritin at 48 hours, but transient rises in circulating endothelial cells, an accepted marker of endothelial damage.
Conclusions: Iron supplementation is essential to treat or prevent iron deficiency, particularly in patients with pathological hemorrhagic iron losses. However, in a small subgroup of individuals, rapid changes in serum iron may provoke endothelial changes and hemorrhage.
Level Of Evidence: 4. Laryngoscope, 126:2468-2474, 2016.
abstract_id: PUBMED:33081831
Dietary iron intake and anemia: food frequency questionnaire in patients with hereditary hemorrhagic telangiectasia. Background: Hereditary hemorrhagic telangiectasia (HHT) is a multisystemic inherited vascular disease characterized by a heterogeneous clinical presentation and prognosis. Dietary evaluation is relevant in HHT patients to provide adequate iron and nutrient intake. Additionally, different dietary items have been reported to precipitate epistaxis in this setting. Our primary aim was to investigate the dietary habits of HHT patients through a food-frequency questionnaire (FFQ) to evaluate the presence of precipitants and/or protective factors for epistaxis and the occurrence of possible dietary modifications. The secondary aims were to evaluate the nutritional intake of iron in HHT patients and the self-reported effect of iron treatments on epistaxis. From April 2018 to October 2018, a 138-item FFQ was provided to HHT patients followed up at the HHT Referral Center of Crema Maggiore Hospital. The relationship between food items and epistaxis was ascertained on a separate form. Daily iron intake was calculated to establish the mean iron content of food items reported in the FFQ.
Results: One hundred forty-nine questionnaires were evaluated [72 females, median age 54 years (12-76)]. Overall, 26 (18%) patients reported dietary items that improved epistaxis (mostly blueberries and red fruits, green vegetables and legumes), while 38 (26%) reported some dietary items that exacerbated epistaxis (spices, chocolate, alcohol, strawberries and ginger). Dietary modifications were reported in up to 58% of cases. In HHT patients, the mean daily iron intake was 8.46 ± 2.78 mg, and no differences in iron intake were observed between patients reporting a diet modification and those who did not.
Conclusions: In the comprehensive management of HHT a healthy and balanced diet, with increased consumption of dietary items with a high iron content, should be encouraged.
abstract_id: PUBMED:37746404
Multifocal Abscesses, Necrotizing Fasciitis, Iron Deficiency Anemia, and Hypophosphatemia Induced by Ferric Carboxymaltose Infusions: Report of a Case of Hereditary Hemorrhagic Telangiectasia. Hereditary hemorrhagic telangiectasia (HHT) is a rare autosomal dominant vascular dysplasia in which disrupted angiogenesis leads to increased formation of mucocutaneous telangiectasias or major vascular malformations. Iron deficiency anemia and recurrent abscesses are commonly reported in these patients, reinforcing screening and targeted therapies for these conditions. We report a 50-year-old man with HHT affected by repeated episodes of iron deficiency anemia secondary to recurrent epistaxis requiring frequent intravenous iron infusions. He eventually developed hypophosphatemia and hyperphosphaturia secondary to ferric carboxymaltose. He also had a history of recurrent multifocal abscesses, including a severe presentation of necrotizing fasciitis, requiring multiple surgical interventions. Despite the identification of hypogammaglobulinemia, only after consistent dental treatment and antibiotic prophylaxis did the abscesses stop recurring. We highlight the need for careful consideration of all possible complications inherent to the disease itself but also those related to comorbidities or existing treatments.
abstract_id: PUBMED:24146883
Hemorrhage-adjusted iron requirements, hematinics and hepcidin define hereditary hemorrhagic telangiectasia as a model of hemorrhagic iron deficiency. Background: Iron deficiency anemia remains a major global health problem. Higher iron demands provide the potential for a targeted preventative approach before anemia develops. The primary study objective was to develop and validate a metric that stratifies recommended dietary iron intake to compensate for patient-specific non-menstrual hemorrhagic losses. The secondary objective was to examine whether iron deficiency can be attributed to under-replacement of epistaxis (nosebleed) hemorrhagic iron losses in hereditary hemorrhagic telangiectasia (HHT).
Methodology/principal Findings: The hemorrhage adjusted iron requirement (HAIR) sums the recommended dietary allowance and the iron required to replace additional quantified hemorrhagic losses, based on the pre-menopausal increment to compensate for menstrual losses (formula provided). In a study population of 50 HHT patients completing concurrent dietary and nosebleed questionnaires, 43/50 (86%) met their recommended dietary allowance, but only 10/50 (20%) met their HAIR. Higher HAIR was a powerful predictor of lower hemoglobin (p = 0.009), lower mean corpuscular hemoglobin content (p<0.001), lower log-transformed serum iron (p = 0.009), and higher log-transformed red cell distribution width (p<0.001). There was no evidence of generalised abnormalities in iron handling: ferritin and ferritin² explained 60% of the hepcidin variance (p<0.001), and the mean hepcidin:ferritin ratio was similar to reported controls. Iron supplement use increased the proportion of individuals meeting their HAIR and blunted associations between HAIR and hematinic indices. Once adjusted for supplement use, however, reciprocal relationships between HAIR and hemoglobin/serum iron persisted. Of 568 individuals using iron tablets, most reported problems completing the course. For patients with hereditary hemorrhagic telangiectasia, persistent anemia was reported three times more frequently if iron tablets caused diarrhea or needed to be stopped.
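The published HAIR formula itself is not reproduced in this abstract; it is only described as the recommended dietary allowance plus the iron needed to replace quantified non-menstrual hemorrhagic losses, scaled from the pre-menopausal increment. The sketch below is therefore just one illustrative reading of that description: the 10 mg/day increment (the gap between typical adult RDAs of 18 and 8 mg/day) and the 30 mL/month reference menstrual loss are assumptions for the example, not figures from the paper.

```python
def hair_estimate_mg_per_day(base_rda_mg,
                             monthly_blood_loss_ml,
                             menstrual_increment_mg=10.0,
                             reference_menstrual_loss_ml=30.0):
    """Illustrative HAIR-style estimate (NOT the published formula).

    Scales the dietary increment that normally compensates for menstrual losses
    by the patient's quantified non-menstrual blood loss.
    """
    scaling = monthly_blood_loss_ml / reference_menstrual_loss_ml
    return base_rda_mg + menstrual_increment_mg * scaling


# Hypothetical example: adult male RDA of 8 mg/day, 60 mL/month lost to nosebleeds.
print(f"{hair_estimate_mg_per_day(8.0, 60.0):.1f} mg/day")  # 28.0
```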
Conclusions/significance: HAIR values, providing an indication of individuals' iron requirements, may be a useful tool in prevention, assessment and management of iron deficiency. Iron deficiency in HHT can be explained by under-replacement of nosebleed hemorrhagic iron losses.
abstract_id: PUBMED:25188751
Combination treatment with an erythropoiesis-stimulating agent and intravenous iron alleviates anaemia in patients with hereditary haemorrhagic telangiectasia. Background: Patients with hereditary haemorrhagic telangiectasia (HHT) suffer from recurrent epistaxis and bleeding from gastrointestinal telangiectasias that occur despite otherwise normal haemostasis and result in iron deficiency anaemia with increasing severity. In advanced disease, anaemia may be severe, be irresponsive to iron supplementation, and may lead to red blood cell transfusion dependency.
Methods: We conducted a retrospective study at our Centre for Osler's Disease to evaluate the effectiveness of adding an erythropoiesis-stimulating agent (ESA) to intravenous iron supplementation in the management of anaemic HHT patients. Blood values and treatment parameters were collected for nine months before combination therapy (iron supplementation only) and 12 months during combination therapy (iron supplementation plus ESA).
Results: Four patients received intravenous iron and an ESA with mean weekly doses of 126 mg and 17,300 units (U), respectively. Mean haemoglobin improved significantly during combination therapy, from 106 g/L to 119 g/L (p < 0.001).
Conclusion: Anaemia can be alleviated in patients with HHT who are irresponsive to intravenous iron supplementation, by addition of an ESA. The proposed mechanism behind the iron irresponsiveness is that the anaemia is caused by a combination of recurrent haemorrhage and anaemia of chronic disease.
abstract_id: PUBMED:24273414
Woman presenting with chronic iron deficiency anemia associated with hereditary hemorrhagic telangiectasia: a case report. Background: Hereditary hemorrhagic telangiectasia is an autosomal dominant disorder associated with frequent nose bleeds that can be troublesome and difficult to contain. A further manifestation is telangiectasia, which may develop in the upper and lower gastrointestinal tract. The associated blood loss can be chronic, resulting in iron deficiency anemia which, when severe, has historically been treated by blood transfusions. Further pulmonary, neurologic, and hepatic complications may appear in later life, and are well documented. Administering blood transfusions requires provision, storage, and serological testing to select suitable units. Recognition of the inherent potential risks of donated blood, the expense, and the concerns regarding blood supply, has resulted in a national policy for conservation and appropriate use of blood. For an individual patient, there may be development of alloantibodies which complicates future cross-matching for transfusions.
Case Report: SG is a 66-year-old Caucasian woman who first presented to our hematology department in 2003, having just moved to the area. She had suffered with nose bleeds since her teenage years and presented with a low hemoglobin level and symptoms of iron deficiency anemia. Medical and nonmedical interventions failed to arrest the blood loss, which had not been massive or associated with hypovolemic shock. Pursuant to conserving blood supplies, and based on experience of patients with other causes of iron deficiency anemia, a regimen of high-dose iron supplementation was adopted. The aim was to sustain iron stores as a substrate for erythropoiesis and thereby achieve adequate hemoglobin levels whilst minimizing the need for blood transfusion.
Discussion: This approach has maintained the patient's hemoglobin levels at 6.4-11.6 g/dL over a period of 9 years. Until the time of writing in 2011, the maximum number of blood transfusions she has received in a year has been six, albeit there has been a steady slow increase since 2006. Her quality of life has been good throughout, with good levels of activity, a normal lifestyle, and no pain. The high-dose iron regimen is estimated to have avoided administration of up to 90 units of blood in 2011, at a saving to the National Health Service of at least £7000.
abstract_id: PUBMED:27116331
Reported cardiac phenotypes in hereditary hemorrhagic telangiectasia emphasize burdens from arrhythmias, anemia and its treatments, but suggest reduced rates of myocardial infarction. Introduction: Cardiac phenotypes should be pronounced in hereditary hemorrhagic telangiectasia (HHT) due to frequent systemic arteriovenous malformations (AVMs), iron deficiency anemia, hypoxemia, hyperdynamic circulations, venous thromboemboli, and paradoxical emboli through pulmonary AVMs.
Methods/results: In an international survey, 1025 respondents (median age 55 years) met HHT diagnostic criteria: 942 (91.9%) reported nosebleeds, 452 (44.1%) at least daily. AVMs were commonly reported in pulmonary (544, 53%), hepatic (194, 18.9%) and/or cerebral (92, 9.0%) circulations. 770/1025 (75%) had used iron tablets, 256 (25.0%) intravenous iron, and 374 (36.5%) received blood transfusions. Arrhythmias were reported by 113/1025 (11%, including 44 (4.3%) with atrial fibrillation), angina by 36 (3.5%), and cardiac failure by 26 (2.5%). In multivariate logistic regression, these phenotypes were associated with hepatic AVMs/pulmonary hypertension (relatively interchangeable variables), blood transfusions, and intravenous iron. Cardiac insufficiency/failure often provokes intensive anemia treatments, but associations with arrhythmias, particularly with a greater transfusion burden, were less easy to explain. Myocardial infarction (23/1025; 2.2%) and abnormal coronary angiogram (≤31/76, ≤54%) rates appeared low. Provocative preliminary data were obtained including HHT-affected respondents' parents and grandparents in whom HHT could be confidently assigned, or excluded based on autosomal dominant inheritance patterns: in crude and survival analyses, myocardial infarctions were reported less frequently for individuals with HHT, particularly for males (p=0.001).
Conclusion: Arrhythmias are the most common cardiac phenotype in HHT, and likely to be aggravated by iron deficiency anemia, its treatments, and/or high output states due to AVMs. Myocardial infarction rates may be reduced in this apparently high risk population.
abstract_id: PUBMED:22227516
Iron deficiency anemia related to hereditary hemorrhagic telangiectasia: response to treatment with bevacizumab. Hereditary hemorrhagic telangiectasia (HHT) is a rare autosomal dominant condition associated with arteriovenous malformations (AVMs) or telangiectasias of the pulmonary, gastrointestinal or hepatic circulations. The authors present a case of a 52-year-old woman with a known diagnosis of HHT who presented for evaluation of anemia. She had an extensive history of iron sucrose infusions, frequent blood transfusions and hospitalizations for anemia related to gastrointestinal bleeding and epistaxis. The patient was treated with bevacizumab at a dose of 5 mg/kg infusion every 2 weeks for 4 cycles. In the next 4 months, her hemoglobin improved to 13.7 g/dL and she did not require iron or packed red blood cell transfusions for the next 8 months. Abnormal angiogenesis primarily due to mutations in the transforming growth factor β receptor endoglin and the activin receptor-like kinases is a central contributor to the formation of AVMs in HHT. Bevacizumab is a monoclonal antibody against vascular endothelial growth factor and therefore may be a useful treatment against AVM formation in patients with HHT. The authors do caution that therapy has to be individualized as there are no randomized trials regarding its usage in patients with HHT.
abstract_id: PUBMED:28347346
7-day weighed food diaries suggest patients with hereditary hemorrhagic telangiectasia may spontaneously modify their diet to avoid nosebleed precipitants. Hereditary hemorrhagic telangiectasia (HHT) poses substantial burdens due to nosebleeds and iron deficiency resulting from recurrent hemorrhagic iron losses. Recent studies by our group found surprising links between HHT nosebleeds and certain food groups. In this letter, we report 7-day weighed food diary assessments of an unselected group of 25 UK patients with HHT whose nosebleeds ranged from mild to severe (median epistaxis severity score 4.66, range 0.89-9.11). The diaries provide evidence that food items most commonly reported to provoke nosebleeds were ingested by fewer HHT patients, compared to food items less commonly reported to provoke nosebleeds (chi-squared p < 0.001).
abstract_id: PUBMED:34160081
Sclerotherapy Versus Cautery/Laser Treatment for Epistaxis in Hereditary Hemorrhagic Telangiectasia. Objectives/hypothesis: Surgical interventions for epistaxis management in hereditary hemorrhagic telangiectasia (HHT) demonstrate short-term success and require repeated procedures for disease control. Although electrocautery and/or laser photocoagulation (C ± L) are most frequently performed, sodium tetradecyl sclerotherapy (STS) is emerging as a promising newer treatment. We hypothesized that in a 24-month time period, STS would require fewer treatments than C ± L to maintain epistaxis severity within the mild range.
Study Design: Retrospective study.
Methods: We retrospectively assessed 67 patients with HHT with moderate and severe epistaxis that were treated periodically with C ± L (34 patients) versus STS (33 patients). The primary outcome was the number of procedures needed to maintain the epistaxis severity score (ESS) as mild. Secondary outcomes assessed for differences in postoperative complications, hemoglobin levels, iron stores, hematologic support, and quality-of-life (QoL) scores.
Results: To maintain ESS in the mild range, 1.6 STS procedures (range, 1-4) were performed versus 3.6 C ± L procedures (range, 1-8) (P = .003). Significant postoperative differences included reduction in nasal crusting (3% vs. 32%, P = .001), foul odor (3% vs. 35%, P < .001), and septal perforation (3% vs. 29%, P = .006) after STS. There were no significant differences between the two treatments in hemoglobin levels, iron stores, hematologic support, or QoL scores.
Conclusion: STS is able to attain satisfactory epistaxis control with significantly fewer procedures and lower postoperative complications than C ± L. STS should be considered as the initial surgical intervention for epistaxis in patients with HHT.
Level Of Evidence: 4 (Laryngoscope, 132:920-925, 2022).
Answer: Yes, iron treatments can aggravate epistaxis in some patients with Hereditary Hemorrhagic Telangiectasia (HHT). A survey evaluation of HHT patients indicated that while iron and blood transfusions were commonly reported to improve nosebleeds, a subset of patients reported that their nosebleeds were exacerbated by iron treatments. Specifically, 4.8% of iron tablet users and 6.5% of iron infusion users reported worsening of their nosebleeds following these treatments. These rates were significantly higher than those reported for control investigations. The study suggested that in a small subgroup of individuals, rapid changes in serum iron might provoke endothelial changes and hemorrhage (PUBMED:27107394).
Furthermore, a food frequency questionnaire study in HHT patients found that 26% of respondents reported certain dietary items that exacerbated epistaxis, although it did not specifically link this to iron intake (PUBMED:33081831). Another case report highlighted the potential complications related to iron treatments, including a patient with HHT who developed hypophosphatemia and hyperphosphaturia secondary to ferric carboxymaltose infusions, as well as a severe presentation of necrotizing fasciitis (PUBMED:37746404).
These findings suggest that while iron supplementation is essential for treating or preventing iron deficiency, particularly in patients with pathological hemorrhagic iron losses, healthcare providers should be aware of the potential for iron treatments to worsen epistaxis in a subset of HHT patients. It is important to monitor these patients closely and consider individual responses to iron therapy when managing their condition. |
Instruction: Microlumbar discectomy. Is it safe as an outpatient procedure?
Abstracts:
abstract_id: PUBMED:8029744
Microlumbar discectomy. Is it safe as an outpatient procedure? Study Design: This study examines the long-term follow-up and satisfaction level of patients who underwent microlumbar discectomy as an outpatient procedure.
Objective: To confirm that microdiscectomy on an outpatient basis is safe, cost effective, and allows for rapid recovery.
Summary Of Background Data: In 1985, the author performed the first outpatient microlumbar discectomy, the first to be reported in the literature.
Methods: From 1985 to 1989, 103 patients underwent microlumbar discectomy. General anesthesia was used, and the microlumbar discectomy technique was mostly consistent with the William and Casper principles. The microlumbar discectomy was performed through a 1-inch incision with minimal or no bone removal. Patients were discharged a few hours after the procedure.
Results: All 103 patients were given questionnaires pertaining to their recovery; mean follow-up was 34.6 months. Eighty-three responded. Of those, 73 (88%) reported excellent or good results. All but four patients (4.8%) were satisfied.
Conclusions: Microlumbar discectomy is safe and cost effective when performed as an outpatient procedure. It produces immediate improvement in all patients, and allows early return to jobs and regular activities.
abstract_id: PUBMED:16826004
Success and safety in outpatient microlumbar discectomy. Currently, many spine surgeons perform microlumbar discectomies on an outpatient basis. Yet, it is often customary for patients to have a 1-night stay in the hospital. Many studies have shown the efficacy of microlumbar discectomy (MLD) and its preference among surgeons for the treatment of lumbar disc herniation. It has also been shown to be safe, successful, and cost-effective. However, a large comprehensive study of this magnitude, gauging safety, success, and patient satisfaction for these procedures on an outpatient basis, has not yet been done. One thousand three hundred seventy-seven MLD procedures were performed by one surgeon from 1992 to 2001. A retrospective chart review was done on all procedures. Patients were then contacted by either telephone or mail to complete an outcome questionnaire. Seven hundred thirteen patients (53.9%) completed the questionnaire. Follow-up questionnaires were not completed due to deaths, incorrect contact information, and refused responses. Of all MLD procedures, 55 (4.0%) involved a hospital stay; only 24 of these (1.7%) were originally intended as outpatient procedures. Of those that were done on an outpatient basis, 8.6% had a complication, including 6.4% who had a recurrent disc herniation. When asked, 81.6% said they would undergo the procedure again as an outpatient. In 82.1%, the surgery's outcome was good, very good, or excellent. MLD is a routine procedure that can be performed on an outpatient basis safely, successfully, and with high patient satisfaction.
abstract_id: PUBMED:663770
Microlumbar discectomy: followup of 147 patients. Microlumbar discectomy is a new surgical technique for the treatment of herniated lumbar disc. The operating microscope and special instruments enable the surgeon to remove the herniated portion of the disc without laminectomy through a 1-inch skin incision. A transfusion was never necessary. In a followup study of 147 patients operated on over a 2 1/4-year period, the surgical cure rate was 96%, and the postoperative hospital stay was reduced to less than 3 days. One year after surgery, all noncompensation cases were working as were 80% of the compensation cases. Microlumbar discectomy is safe, effective, and economic for the patient.
abstract_id: PUBMED:37834767
Magnetic Resonance Imaging Evaluation of Multifidus Muscle in Patients with Low Back Pain after Microlumbar Discectomy Surgery. Cross-sectional area (CSA) and signal intensity ratio (SIR) of the multifidus muscle (MFM) on magnetic resonance imaging (MRI) was used to evaluate the extent of injury and atrophy of the MFM in patients with negative treatment outcomes following microlumbar discectomy (MLD). Negative treatment outcome was determined by pain score improvement of <50% compared to baseline. Patients in groups 1, 2, and 3 were evaluated at <4 weeks, 4-24 weeks, and >24 weeks postoperatively, respectively. The associations between the follow-up, surgery time and the changes in the MFM were evaluated. A total of 79 patients were included, with 22, 27, and 30 subjects in groups 1, 2, and 3, respectively. The MFM SIR of the ipsilateral side had significantly decreased in groups 2 (p = 0.001) and 3 (p < 0.001). The ipsilateral MFM CSA significantly decreased postoperatively in groups 2 (p = 0.04) and 3 (p = 0.006). The postoperative MRI scans found significant MFM changes on the ipsilateral side in patients with negative treatment outcomes regarding pain intensity following MLD. As the interval to the postoperative MRI scan increased, the changes in CSA of the MFM and change in T2 SIR of the MFM showed a tendency to increase.
abstract_id: PUBMED:3388121
Microlumbar discectomy (MLD). This is a retrospective study of microlumbar discectomy (MLD), performed between 1983 and 1987. During that period, 60 patients underwent the procedure. At follow-up, after an average of 33.3 months, MLD provided excellent and good results in 93.3% of cases, fair in 3.3%, and failure in 3.3%.
abstract_id: PUBMED:7946021
Long-term results of microlumbar discectomy. A long-term prospective study was carried out of 100 consecutive patients undergoing microlumbar discectomy (MLD) and fulfilling stringent selection criteria. A 95% long-term follow-up result was obtained at a mean duration of 8.6 years. At the 7-11-year assessment, 88% of patients had an excellent result, 5% a good result and 7% had either a poor result or new symptoms. Ten patients (10.5%) underwent repeat MLD during the course of the study; nine of the ten reoperations were performed at the same level as the original surgery. The percentage with an excellent result remained relatively constant (88-89%) throughout the study. No reliable predictors of long-term outcome were identified. The results suggest that microlumbar discectomy compares favourably with other surgical techniques with regard to long-term outcome.
abstract_id: PUBMED:15069257
Double-hook retractor for microlumbar discectomy and foraminotomy. Aiming to achieve better results in microlumbar discectomy and foraminotomy, a double-hook retractor has been designed to retract lumbar paraspinal muscles away from the spinous process. A double-hook retractor obviates the limitations of single-hook systems.
abstract_id: PUBMED:31788165
A Survey on the Short-term Outcome of Microlumbar Discectomy with General versus Spinal Anesthesia. Background: Surgery on the lower thoracic and lumbosacral spine is possible with both general and spinal anesthesia, but most spine surgeons are reluctant to perform the surgery with spinal anesthesia. We aimed to conduct a survey on the short-term outcome of microlumbar discectomy in the patients who had been treated under general or spinal anesthesia.
Methods: In this prospective study, we performed a survey on 72 patients who underwent microlumbar discectomy under general anesthesia (group A) or spinal anesthesia (group B). Demographic characteristics, American Society of Anesthesiologists physical status, duration of operation, blood loss, and complications were all documented. Preoperative and early postoperative (at the time of discharge) disability and pain were assessed by using Japanese Orthopedic Association (JOA) scoring system and a visual analog scale questionnaire.
Results: The two groups were homogeneous preoperatively. The mean intraoperative blood loss was less and the mean operating time was shorter in group A than in group B, but there was no statistically significant difference between groups. The rates of postoperative improvement in JOA score and in pain were similar between groups. Anesthetic complications were unremarkable.
Conclusions: Simple lumbar disc operations in otherwise healthy patients can be safely performed under either spinal or general anesthesia. Both anesthetic methods led to comparable outcomes with minimal complications.
abstract_id: PUBMED:29542626
Microlumbar discectomy state of art treatment for prolapsed lumbar intervertebral disc. Microlumbar discectomy is the latest state-of-the-art treatment for prolapsed lumbar intervertebral disc. A series of 250 consecutive cases operated on over a period of 4 1/2 years was reviewed. There were no significant complications. L4/5 was the most common level, followed by L5/S1. All six cases operated on bilaterally were at the L4/5 level. Relief of sciatic pain was seen in 98 percent of the patients, and six patients (2.4 percent) required reoperation over a period of three and a half years. The indication for microlumbar discectomy drops sharply after the age of fifty years. Patients also develop other pathologies with advancing age, such as lateral recess stenosis and facet hypertrophy, requiring alternative treatment. The overall results of microlumbar discectomy are superior in comparison to standard laminectomy.
abstract_id: PUBMED:27475323
Outpatient anterior cervical discectomy and fusion: A meta-analysis. Anterior cervical discectomy and fusion (ACDF) performed as an outpatient has become increasingly common for treating cervical spine pathology, largely due to its cost savings compared with inpatient ACDF. Nearly all outpatient ACDF patient reports have originated from single-center studies, with the procedure yet to be addressed via a meta-analysis of the peer-reviewed literature. The Entrez gateway of the PubMed database was used to conduct a comprehensive literature search for articles published in English up to 3/9/16. Data from studies meeting inclusion criteria (minimum of 25 patients, control group of inpatient ACDF patients, non-duplicative data source) was then categorized and assimilated for analysis. Seven studies met inclusion criteria, encompassing a 21-year timespan. Each provided Oxford Center for Evidence-Based Medicine Level 3 evidence. The studies yielded a total of 2448 outpatient ACDF patients; only 125 (5.1%) originated from studies published prior to 2011. Single-level surgery occurred in 63.8% of patients, with 0.5% extending beyond two-level fusions. The overall complication rate was 1.8% (mean follow-up of 141.2days); only 2% of patients required readmission. In conclusion, outpatient ACDF has become increasingly popular, with more than 95% of patients represented by studies published since 2011. Nearly two-thirds of outpatient ACDFs underwent single-level fusion, with virtually none undergoing 3+ level ACDF. Outpatient ACDF is safe, with a low readmission rate and complication rates comparable to those (2-5%) associated with inpatient ACDF. These findings support an argument for increasing ACDFs performed on an outpatient basis in appropriately selected patients.
Answer: Microlumbar discectomy (MLD) has been reported to be safe and effective as an outpatient procedure. The first outpatient MLD was performed in 1985, and a study examining patients who underwent the procedure between 1985 and 1989 found that 88% of respondents reported excellent and good results, with a high satisfaction rate and only 4.8% of patients being dissatisfied. The study concluded that MLD is safe, cost-effective, and allows for rapid recovery, with patients experiencing immediate improvement and an early return to jobs and regular activities (PUBMED:8029744).
Further supporting the safety and success of outpatient MLD, a large retrospective study of 1,377 MLD procedures performed between 1992 and 2001 reported that only 4.0% required a hospital stay, with 8.6% experiencing complications and 6.4% having a recurrent disc herniation. Despite this, 81.6% of patients would undergo the procedure again as an outpatient, and 82.1% rated the surgery's outcome as good to excellent. This study concluded that MLD can be performed on an outpatient basis safely, successfully, and with high patient satisfaction (PUBMED:16826004).
Additional studies have consistently shown high success rates and patient satisfaction with outpatient MLD. One study reported a surgical cure rate of 96% with a reduced postoperative hospital stay (PUBMED:663770), while another retrospective study found excellent and good results in 93.3% of cases (PUBMED:3388121). A long-term prospective study with a follow-up of 7-11 years found that 88% of patients had an excellent result, further suggesting that MLD compares favorably with other surgical techniques regarding long-term outcomes (PUBMED:7946021).
In summary, the evidence from multiple studies indicates that microlumbar discectomy is a safe and effective outpatient procedure with high rates of patient satisfaction and success, minimal complications, and a low rate of reoperation or hospital stay. |
Instruction: Is %ΔSUVmax a useful indicator of survival in patients with advanced nonsmall-cell lung cancer?
Abstracts:
abstract_id: PUBMED:24228017
Is %ΔSUVmax a useful indicator of survival in patients with advanced nonsmall-cell lung cancer? Purpose: To investigate the impact of the maximum standardized uptake value (SUVmax), size of primary lung lesion, and %ΔSUVmax on outcome (overall survival (OS) and 2-year disease-free survival (2-year DFS)) of patients with advanced nonsmall-cell lung cancer (NSCLC).
Materials And Methods: 86 stage III-IV NSCLC patients underwent 18F-FDG PET/CT before and after chemotherapy and were classified into subgroups according to the response criteria of the European Organization for Research and Treatment of Cancer. The SUVmax values and tumor sizes with the best prognostic significance were sought. The correlation between the SUVmax value and the initial response to therapy (best response) and the relationship between %ΔSUVmax and OS were assessed.
Results: In patients in PD (20/86), the average pretreatment SUVmax was 11.8 ± 5.23, and the mean size of the primary lesion was 43.35 mm ± 16.63. In SD, PR, and CR patients (66/86), the average pretreatment SUVmax was 12.7 ± 8.05, and the mean size of the primary lesion was 41.6 mm ± 21.15. Correlation was identified only for %ΔSUVmax; patients with PD (ΔSUVmax > +25%) showed a worse OS than patients with ΔSUVmax < +25% (CR, PR, and SD) (P = 0.0235).
Conclusions: In stage III-IV NSCLC, among the assessed factors, only %ΔSUVmax may be considered as a useful prognostic factor.
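As a concrete illustration of the metric discussed in this abstract, the short Python sketch below computes %ΔSUVmax from pre- and post-treatment values and applies the +/-25% cut-offs described above. The SUVmax figures are hypothetical, and the classification is a simplified EORTC-style scheme rather than the full response criteria.

    def percent_delta_suvmax(suv_pre, suv_post):
        """Percent change in SUVmax between baseline and post-treatment scans."""
        return 100.0 * (suv_post - suv_pre) / suv_pre

    def classify_response(pct_change, progression_cutoff=25.0, response_cutoff=-25.0):
        # Simplified EORTC-style thresholds as described in the abstract:
        # an increase of more than +25% is treated as progressive disease (PD),
        # a decrease of more than 25% as a metabolic response, anything else as stable.
        if pct_change > progression_cutoff:
            return "PD"
        if pct_change < response_cutoff:
            return "PR/CR"
        return "SD"

    # Hypothetical example: SUVmax 12.7 at baseline, 16.5 after chemotherapy
    change = percent_delta_suvmax(12.7, 16.5)
    print(f"%dSUVmax = {change:+.1f}% -> {classify_response(change)}")   # about +29.9% -> PD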
abstract_id: PUBMED:27284249
Prognostic value of the standardized uptake value maximum change calculated by dual-time-point (18)F-fluorodeoxyglucose positron emission tomography imaging in patients with advanced non-small-cell lung cancer. Purpose: The purpose of this study was to investigate the prognostic value of the standardized uptake value maximum (SUVmax) change calculated by dual-time-point (18)F-fluorodeoxyglucose positron emission tomography (PET) imaging in patients with advanced non-small-cell lung cancer (NSCLC).
Patients And Methods: We conducted a retrospective review of 115 patients with advanced NSCLC who underwent pretreatment dual-time-point (18)F-fluorodeoxyglucose PET acquired at 1 and 2 hours after injection. The SUVmax from early images (SUVmax1) and SUVmax from delayed images (SUVmax2) were recorded and used to calculate the SUVmax changes, including the SUVmax increment (ΔSUVmax) and percent change of the SUVmax (%ΔSUVmax). Progression-free survival (PFS) and overall survival (OS) were determined by the Kaplan-Meier method and compared against the studied PET parameters and clinicopathological prognostic factors in univariate analyses; multivariate analyses were constructed using Cox proportional hazards regression.
Results: One hundred and fifteen consecutive patients were reviewed, and the median follow-up time was 12.5 months. The estimated median PFS and OS were 3.8 and 9.6 months, respectively. In univariate analysis, SUVmax1, SUVmax2, ΔSUVmax, %ΔSUVmax, clinical stage, and Eastern Cooperative Oncology Group (ECOG) scores were significant prognostic factors for PFS. Similar associations were observed for OS, with the exception of %ΔSUVmax. In multivariate analysis, ΔSUVmax and %ΔSUVmax were significant factors for PFS. On the other hand, ECOG scores were the only independent predictors of OS.
Conclusion: Our results demonstrated the prognostic value of the SUVmax change in predicting the PFS of patients with advanced NSCLC. However, SUVmax change could not predict OS.
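A minimal sketch of the survival analysis workflow this abstract describes (Kaplan-Meier estimation plus a Cox proportional hazards model) is shown below. It assumes the third-party lifelines package and uses an invented toy dataset; it does not reproduce the study's data or results.

    # Illustrative only: synthetic data standing in for the kind of analysis described
    # above (Kaplan-Meier survival estimation plus a Cox proportional hazards model
    # relating PFS to the SUVmax change). Requires the third-party 'lifelines' package.
    import pandas as pd
    from lifelines import KaplanMeierFitter, CoxPHFitter

    df = pd.DataFrame({
        "pfs_months":   [3.1, 5.0, 2.2, 8.4, 12.0, 4.5, 6.7, 1.9, 9.3, 7.5, 3.8, 10.6],
        "progressed":   [1,   1,   1,   0,   0,    1,   1,   1,   1,   0,   1,   0],
        "delta_suvmax": [2.4, 1.1, 3.0, 0.4, 0.2,  1.8, 1.5, 3.3, 0.9, 1.6, 0.5, 0.3],
        "ecog":         [1,   0,   2,   1,   0,    1,   2,   2,   0,   1,   2,   0],
    })
    # pfs_months = time to progression or censoring; progressed = 1 if the event was observed

    km = KaplanMeierFitter()
    km.fit(durations=df["pfs_months"], event_observed=df["progressed"])
    print(km.median_survival_time_)          # Kaplan-Meier median PFS estimate

    cph = CoxPHFitter()
    cph.fit(df, duration_col="pfs_months", event_col="progressed")
    cph.print_summary()                      # hazard ratios for delta_suvmax and ecog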
abstract_id: PUBMED:31839507
The effect of different treatment modalities on survival in elderly patients with locally advanced non-small cell lung cancer. Purpose: The aim of this study is to investigate the effect of treatment modalities on survival among unoperated, locally advanced non-small cell lung cancer (NSCLC) patients aged 70 years and older, representing real-life data.
Methods: From 2005 through 2017, medical records of 2259 patients with lung cancer from Okmeydani Training and Research Hospital-Istanbul/Turkey were reviewed retrospectively. Patients with locally advanced NSCLC ≥ 70 years of age who did not undergo surgery for lung cancer were reviewed. In total, 130 patients were eligible for the final analysis. Patients were stratified into four groups as: chemotherapy (CT), concurrent chemoradiotherapy (cCRT), sequential chemoradiotherapy (sCRT), and radiotherapy (RT) only.
Results: Of the 130 patients included in the analysis, CT, cCRT, sCRT, and RT only were applied to 25 (19.2%), 30 (23.1%), 31 (23.8%), and 44 (33.8%) patients, respectively. Twelve (9.2%) patients were female. Median age was 72 years (range, 70-88). Sixty (46.2%) patients had stage IIIA disease and 70 (53.8%) had stage IIIB disease. Median progression-free survival (mPFS) in patients treated with CT, cCRT, sCRT, and RT was 8.0, 15, 10, and 9.0 months, respectively (p = 0.07). The corresponding median overall survival (mOS) was 10, 33, 20, and 15 months (p = 0.04). In multivariate analysis, stage IIIB disease [hazard ratio (HR), 2.8], ECOG-PS 2 (HR, 2.10), and ECOG-PS 3-4 (HR, 5.13) were negative factors affecting survival, while cCRT (HR, 0.45) and sCRT (HR, 0.50) were independent factors associated with better survival.
Conclusion: This study showed that the use of combined treatment modality was associated with better survival in elderly patients with locally advanced NSCLC, with the greatest survival observed in patients treated with cCRT. We therefore suggest that cCRT, when feasible, should be strongly considered in locally advanced NSCLC patients 70 years and over.
abstract_id: PUBMED:33124197
Predictors of survival among Japanese patients receiving first-line chemoimmunotherapy for advanced non-small cell lung cancer. Background: First-line chemoimmunotherapy (CIT) has improved overall survival (OS) and progression-free survival (PFS) outcomes among patients with non-small cell lung cancer (NSCLC). The immunological and nutritional statuses of patients fluctuate during treatment using immune checkpoint inhibitors, and are closely related to treatment outcomes. However, it is unclear whether these markers are significant in patients who are receiving CIT.
Methods: This retrospective single-center study evaluated 34 consecutive Japanese patients with NSCLC who were treated using first-line CIT. Previously reported markers that reflect immunological and nutritional statuses were evaluated at three time points: at the start of CIT, after three weeks, and at the end of induction therapy.
Results: The median PFS was 7.2 months (95% confidence interval: 6.3 months-not reached) and the median OS was not reached (95% confidence interval: 9.6 months-not reached). The PFS duration was significantly associated with the baseline neutrophil-to-lymphocyte ratio and the three-week values for the modified Glasgow prognostic score, C-reactive protein-albumin ratio, prognostic nutrition index, and advanced lung cancer inflammation index. The OS duration was significantly associated with the pre-treatment values for the neutrophil-to-lymphocyte ratio and advanced lung cancer inflammation index, as well as the prognostic nutrition index at the end of induction therapy.
Conclusions: Immunological and nutritional markers could be useful for predicting the outcomes of CIT for Japanese patients with advanced non-small cell lung cancer. The timing of their evaluation may also be important.
Key Points: Significant findings of the study: Overall survival in patients receiving first-line chemoimmunotherapy for advanced lung cancer was associated with the pretreatment values of the neutrophil-to-lymphocyte ratio and the advanced lung cancer inflammation index, and with the prognostic nutrition index at the end of induction therapy.
What This Study Adds: Repetitive evaluation of immunological and nutritional markers may be useful for guiding prognostication and treatment selection for Japanese patients with advanced lung cancer.
abstract_id: PUBMED:35165922
The association between proton pump inhibitor use and systemic anti-tumour therapy on survival outcomes in patients with advanced non-small cell lung cancer: A systematic review and meta-analysis. Aims: Proton pump inhibitors (PPIs) are often prescribed to prevent or treat gastrointestinal disease. Whether the combination of systemic anti-tumour therapy and PPIs leads to poor outcomes in patients with advanced non-small cell lung cancer (NSCLC) is unclear. This systematic review explored the relationship between PPIs and survival outcomes of patients with advanced NSCLC who are receiving systemic anti-tumour therapy.
Methods: We searched studies reporting the overall survival (OS) and/or progression-free survival (PFS) of advanced NSCLC patients who are receiving systemic anti-tumour therapy with or without PPIs on PubMed, EMBASE and the Cochrane Library for literature published prior to 31 August 2021. The meta-analysis used a random effects model to estimate the hazard ratio (HR) with 95% confidence intervals (CI) and I2 to assess statistical heterogeneity. Publication bias and sensitivity analysis were performed.
Results: Fourteen retrospective studies comprising 13 709 advanced NSCLC patients were identified. Subgroup analyses showed that the use of PPI was correlated with the OS or PFS of patients receiving chemotherapy, targeted therapy, and immunotherapy (PPI users' group vs non-users' group: HR for OS = 1.35, 95% CI = 1.21-1.51, P < .00001; HR for PFS = 1.50, 95% CI = 1.25-1.80, P < .0001). Publication bias and sensitivity analyses confirmed that the results were robust.
Conclusion: Meta-analysis demonstrated that PPI use in advanced NSCLC patients who were undergoing systemic anti-tumour therapy was correlated with increased mortality risk. Until results are further confirmed, caution should be applied when administering PPIs and systemic anti-tumour therapy to advanced NSCLC patients.
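The pooled hazard ratios above come from a random-effects meta-analysis. The following sketch shows how such a pooled estimate, its confidence interval, and the I-squared heterogeneity statistic can be computed on the log-HR scale with a DerSimonian-Laird estimator; the per-study values are invented and do not reproduce this meta-analysis.

    import numpy as np
    from scipy import stats

    # Hypothetical per-study hazard ratios and 95% CIs (PPI users vs non-users)
    hr      = np.array([1.28, 1.45, 1.10, 1.60, 1.33])
    ci_low  = np.array([1.05, 1.12, 0.85, 1.20, 1.01])
    ci_high = np.array([1.56, 1.88, 1.42, 2.13, 1.75])

    y  = np.log(hr)                                     # work on the log scale
    se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)
    v  = se ** 2

    # Fixed-effect step, then DerSimonian-Laird between-study variance (tau^2)
    w_fe = 1 / v
    y_fe = np.sum(w_fe * y) / np.sum(w_fe)
    Q    = np.sum(w_fe * (y - y_fe) ** 2)
    k    = len(y)
    tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w_fe) - np.sum(w_fe ** 2) / np.sum(w_fe)))
    i2   = max(0.0, (Q - (k - 1)) / Q) * 100 if Q > 0 else 0.0

    # Random-effects pooled estimate
    w_re   = 1 / (v + tau2)
    y_re   = np.sum(w_re * y) / np.sum(w_re)
    se_re  = np.sqrt(1 / np.sum(w_re))
    pooled = np.exp(y_re)
    ci     = np.exp([y_re - 1.96 * se_re, y_re + 1.96 * se_re])
    p      = 2 * (1 - stats.norm.cdf(abs(y_re / se_re)))

    print(f"Pooled HR = {pooled:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f}), I^2 = {i2:.0f}%, p = {p:.4f}")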
abstract_id: PUBMED:35218691
Clinical impact of post-progression survival in patients with locally advanced non-small cell lung cancer after chemoradiotherapy. Background: The efficacy of first-line chemoradiotherapy for overall survival (OS) might be confounded by the subsequent treatments in patients with locally advanced non-small cell lung cancer (NSCLC). In this study, we assessed the associations of progression-free survival (PFS) and post-progression survival (PPS) with OS after chemoradiotherapy for locally advanced NSCLC using patient-level data.
Patients And Methods: Between January 2011 and December 2018, 45 patients with locally advanced NSCLC who had received first-line chemoradiotherapy and in whom recurrence occurred were analysed. The associations of PFS and PPS with OS were analysed at the individual level.
Results: Linear regression and Spearman rank correlation analyses revealed that PPS was strongly correlated with OS (r = 0.72, p < 0.05, R2 = 0.54), whereas PFS was moderately correlated with OS (r = 0.58, p < 0.05, R2 = 0.34). The Glasgow prognostic score and liver metastases at recurrence were significantly associated with PPS (p < 0.001).
Conclusions: The current analysis of individual-level data of patients treated with first-line chemoradiotherapy implied that PPS had a higher impact on OS than PFS in patients with locally advanced NSCLC. Additionally, current perceptions indicate that treatment beyond progression after first-line chemoradiotherapy might strongly affect OS.
abstract_id: PUBMED:35628059
Associations between Measured and Patient-Reported Physical Function and Survival in Advanced NSCLC. Background: There is a lack of tools for selecting patients with advanced lung cancer who benefit the most from systemic treatment. Patient-reported physical function (PRPF) has been identified as a prognostic factor in this setting, but little is known about the prognostic value in advanced non-small-cell lung cancer (NSCLC). The aim of this study was to investigate if measured physical performance was an independent or stronger prognostic factor than PRPF in patients with advanced NSCLC receiving platinum-doublet chemotherapy. Methods: We analyzed patients from a randomized trial comparing immediate and delayed pemetrexed therapy in stage III/IV NSCLC (n = 232) who performed timed up and go (TUG) and 5 m walk test (5 mWT) and reported physical function on the EORTC QLQ-C30 before chemotherapy commenced. Results: Overall, 208 patients performed TUG and 5 mWT and were included in the present study. Poor physical function was significantly associated with poor survival (TUG: HR 1.05, p < 0.01, 5 mWT: HR 1.05, p = 0.03, PRPF: 1.01, p < 0.01), but only PRPF remained an independent prognostic factor in multivariable analyses adjusting for baseline characteristics (HR 1.01, p = 0.03). Conclusions: Patient-reported, but not measured, physical performance was an independent prognostic factor for survival in patients with advanced NSCLC receiving platinum-doublet chemotherapy.
abstract_id: PUBMED:38076123
Comparison of survival outcomes between clinical trial participants and non-participants of patients with advanced non-small cell lung cancer: A retrospective cohort study. Background: Clinical trials for advanced non-small cell lung cancer (NSCLC) have been conducted extensively. However, the effect of participation in clinical trials on survival outcomes remains unclear. This study aimed to assess whether participation in clinical trials was an independent prognostic factor for survival in patients with advanced NSCLC.
Methods: We analyzed the medical records of patients aged ≥18 years who were newly diagnosed with stage IIIB or IV NSCLC and received chemotherapy or immunotherapy from September 2016 to June 2020 in this retrospective cohort study. To reduce the impact of confounding factors, propensity score matching (PSM) was performed. The Kaplan-Meier method and log-rank test were used to calculate and compare the overall survival (OS) and progression-free survival (PFS) of the patients. Finally, Cox proportional hazards regression was employed to examine the correlation between clinical trial participation and survival outcomes.
Results: The study enrolled 155 patients in total, of whom 62 (40.0%) participated in NSCLC clinical trials. PSM identified 50 pairs of patients in total. The median PFS and OS of clinical trial participants and non-participants were 17.2 vs. 13.9 months (p = 0.554) and 32.4 vs. 36.5 months (p = 0.968), respectively. According to the results of multivariate Cox proportional hazards regression analysis, clinical trial participation was not an independent prognostic factor for advanced NSCLC patients (HR: 0.89, 95% CI: 0.50-1.61; p = 0.701).
Conclusions: The clinical trial participants with advanced NSCLC displayed similar survival outcomes compared with the non-participating patients in this cohort.
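The abstract above relies on propensity score matching (PSM) to balance participants and non-participants. A simplified sketch of 1:1 nearest-neighbour matching on a logistic-regression propensity score is given below; the variable names and data are hypothetical, and a real analysis would include all relevant confounders and check covariate balance after matching.

    # Illustrative sketch of 1:1 propensity score matching of the sort described above,
    # using scikit-learn for the propensity model and greedy nearest-neighbour matching.
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 155
    df = pd.DataFrame({
        "age":      rng.normal(63, 9, n).round(),
        "stage_iv": rng.integers(0, 2, n),
        "ecog":     rng.integers(0, 3, n),
        "trial":    rng.integers(0, 2, n),   # 1 = clinical trial participant
    })

    X = df[["age", "stage_iv", "ecog"]]
    ps_model = LogisticRegression(max_iter=1000).fit(X, df["trial"])
    df["ps"] = ps_model.predict_proba(X)[:, 1]

    treated = df[df["trial"] == 1]
    control = df[df["trial"] == 0].copy()
    pairs = []
    for idx, row in treated.iterrows():
        if control.empty:
            break
        j = (control["ps"] - row["ps"]).abs().idxmin()   # closest available control
        pairs.append((idx, j))
        control = control.drop(index=j)                  # match without replacement

    print(f"{len(pairs)} matched pairs formed")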
abstract_id: PUBMED:35991233
Prognostic factors of 112 elderly patients with advanced non-small cell lung cancer. Objectives: In this study we retrospectively analyzed the prognostic factors of patients with advanced non-small cell lung cancer (NSCLC).
Methods: Clinical data of 112 patients with advanced NSCLC treated in the tumor center of our hospital from January 2016 to December 2017 were analyzed retrospectively. Patient survival was followed up, and the effects of gender, age, tumor stage, pathological type, performance status (PS) score, smoking history, and treatment on the survival of elderly patients with advanced NSCLC were analyzed. Results: The median survival time was 12.0 months, and the median age was 74 years. The 3-year survival rate after confirmation of advanced lung cancer was 6.25%. Kaplan-Meier univariate analysis showed that age, PS score, smoking status, and treatment correlated with prognosis (P < 0.05). Cox multivariate analysis showed that age > 70 years, PS score > 2, smoking, and no targeted therapy were independent adverse prognostic factors for elderly patients with advanced NSCLC (P < 0.05).
Conclusions: Age, PS score, smoking and treatment mode affect the prognosis and survival of elderly patients with advanced NSCLC. Effective treatment should be given according to the principle of evidence-based medicine.
abstract_id: PUBMED:27386285
Quality of Life in Spanish advanced non-small-cell lung cancer patients: determinants of global QL and survival analyses. Purpose: This paper studies the Quality of Life (QL) of Spanish advanced non-small-cell lung cancer (NSCLC) patients receiving platinum-doublet chemotherapy, compares our results with those from studies from other cultural areas, and identifies factors associated with global QL and survival prognostic variables.
Methods: EORTC QLQ-C30 and QLQ-LC13 questionnaires were completed three times by 39 patients along treatment and follow-up. Univariate and multivariate logistic regression analyses were performed to study global QL determinants (≤50 points considered low global-QL score). Analyses of prognostic variables for death were performed (Cox proportional hazards models).
Results: QL mean scores in the whole sample were moderately high, with limitations (>30) in physical, role, social functioning, emotional areas, fatigue, pain, neuropathy and global QL. Differences with studies from other cultural areas were mainly found in the lower score for dyspnoea (≥15 points). There were no significant differences in QL scores between the first and second assessments. In six areas, the third assessment was lower than the first and second: fatigue, hair loss (>20 points); physical, social functioning, neuropathy (10-20 points); emotional functioning (5-10 points). The best model to explain the chances of low QL includes, as explanatory variables, high emotional functioning as protective factor and fatigue as risk factor (R(2) = 0.70). Eight QL areas (four pain-related) and performance status showed a statistically significant association with survival.
Conclusion: Patients adapted well to their disease and treatments. Platinum-doublet can be administered in advanced NSCLC patients. Our QL data are in line with those from other cultural areas.
Answer: Yes, %ΔSUVmax appears to be a useful indicator of survival in patients with advanced non-small cell lung cancer (NSCLC). According to the study with PUBMED:24228017, %ΔSUVmax was the only factor among those assessed (including SUVmax and size of the primary lung lesion) that could be considered a useful prognostic factor in stage III-IV NSCLC. The study found that patients with progressive disease (PD) who had a %ΔSUVmax greater than +25% showed worse overall survival (OS) than patients with a %ΔSUVmax less than +25% (complete response (CR), partial response (PR), and stable disease (SD)).
However, another study with PUBMED:27284249 indicated that while the SUVmax change, including %ΔSUVmax, was a significant factor for predicting progression-free survival (PFS), it could not predict OS in patients with advanced NSCLC. This suggests that while %ΔSUVmax may have some prognostic value, it may not be a comprehensive indicator of overall survival.
These findings suggest that %ΔSUVmax could be considered as part of a multifactorial approach to prognostication in advanced NSCLC, but it may not be sufficient as a standalone predictor of survival outcomes. Clinicians may need to consider other factors, such as clinical stage, performance status, and treatment modalities, to accurately assess prognosis in advanced NSCLC patients. |
Instruction: Media coverage of celebrity DUIs: teachable moments or problematic social modeling?
Abstracts:
abstract_id: PUBMED:19221172
Media coverage of celebrity DUIs: teachable moments or problematic social modeling? Aim: Alcohol in the media influences norms around use, particularly for young people. A recent spate of celebrity arrests for drinking and driving (DUI) has received considerable media attention. We asked whether these newsworthy events serve as teachable moments or problematic social modeling for young women.
Method: Qualitative analysis of US media coverage of four female celebrities (Michelle Rodriguez, Paris Hilton, Nicole Richie and Lindsay Lohan) was conducted over the year following their DUI arrest (December 2005 through June 2008). The media sample included five television and three print sources and resulted in 150 print and 16 television stories.
Results: Stories were brief, episodic and focused around glamorous celebrity images. They included routine discussion of the consequences of the DUI for the individual celebrities without much evidence of a consideration of the public health dimensions of drinking and driving or possible prevention measures.
Conclusions: Our analysis found little material in the media coverage that dealt with preventing injury or promoting individual and collective responsibility for ensuring such protection. Media attention to such newsworthy events is a missed opportunity that can and should be addressed through media advocacy efforts.
abstract_id: PUBMED:30838670
Modeling of variables related to problematic social media usage: Social desirability tendency example. Social media usage has been popular for the last decade. Individuals use their social media environments for various reasons such as to socialize, play games, have fun and share posts. Overuse of these environments may lead to negative psychological and behavioral consequences for individuals. Additionally, it raises concerns about potentially addictive/problematic use of social media. In this study, we aimed to determine the level of problematic social media usage among participants who are active social media users and to analyze the relationships between problematic social media usage and various personal characteristics and social variables. The study, designed with a relational screening model, was carried out with the participation of 580 volunteers. Partial least squares (PLS) structural equation modeling was used to analyze the data obtained through various scales according to the research model. The structural equation modeling analysis shows that there is a significant relationship between problematic social media usage and the daily time of social media usage; the frequency of social media use for recognition, publicity, communication/interaction, and education; loneliness; and social anxiety. The variable showing the strongest correlation with problematic social media usage is social anxiety.
abstract_id: PUBMED:37840788
Do media coverage of suicides and search frequency on suicides predict the number of tweets seeking others for a suicide pact? We examined whether media coverage of suicides and frequencies of searching for suicide methods or suicide pacts predicted the number of users posting tweets seeking others for a suicide pact. Analyses of 6,119 tweets containing "suicide pact" posted on Twitter during a 6-month period revealed that the number of users posting tweets seeking others for a suicide pact had a positive association with media coverage of celebrity suicides, but not with that of suicide pact victims, and a greater positive association with the search frequency for suicide methods than for suicide pacts. We found that the search frequency on suicide methods was positively associated with media coverage of celebrity suicides, while that on suicide pacts was more strongly related to media coverage of suicide pacts.
abstract_id: PUBMED:33241738
Celebrity Suicide. Background: Increased suicides following media coverage of celebrities' suicide deaths have been documented in several countries. Recommendations for Reporting on Suicide were published to provide guidance for media professionals when covering suicide. Research indicates guidelines have been poorly followed. Aim: We aimed to determine whether the recommendations were similarly observed when studying two online news organizations' coverage of a celebrity's suicide. Method: In the 3 days following a high-profile celebrity's death, two US cable networks' news websites were studied to compare how the death was reported. Online articles were reviewed using a coding rubric organized by six themes and 21 coding categories. Results: Between the two organizations, 34 articles were published. Regarding the recommendations, neither source followed all of the recommendations, as measured in this study. Source A fared better in providing help-seeking information. Limitations: Only two news organizations were studied for a 3-day period. Online videos, print articles, and social media were excluded. Conclusion: The suicide of a celebrity received repetitive media coverage with little emphasis on prevention or help-seeking. The recommendations were not consistently followed by the two news websites included in this review.
abstract_id: PUBMED:35189764
Media Coverage of Senior and Celebrity Suicides and Its Effects on Copycat Suicides among Seniors. This study examined whether suicide rates in the elderly population are associated with media coverage of senior or celebrity suicides. Analyzing data from 2012 to 2015, we found that seniors were likely to be more influenced by media coverage of senior suicides than by celebrity suicides. Furthermore, the effects of media coverage of senior suicides were more significant when the reported reason was either health (mental or physical problems) or financial issues such as poverty, rather than other reasons.
abstract_id: PUBMED:38028729
Social Media Use Motives as Mediators of the Link Between Covert Narcissism and Problematic Social Media Use. Background: The present study was carried out to investigate the mediating effect of the social media use motives between covert narcissism and problematic social media use in the Korean population.
Methods: College students using social networking service (SNS) (n = 603, 43.6% male) filled out self-report questionnaires of covert narcissism, social media use motives, and problematic social media use.
Results: Participants who reported more covert narcissism reported more problematic social media use. In addition, the relation between covert narcissism and problematic social media use was mediated by information, enhancement, coping, and conformity motives.
Conclusion: The findings of this study can help to establish an intervention program suitable for a specific population group and to identify high-risk groups for problematic social media use.
abstract_id: PUBMED:36820174
Problematic use of social media: The influence of social environmental forces and the mediating role of copresence. People's dependence on technology in the digital environment has increasingly become the focus of academic and social attention. Social media, in particular, with the functions of connecting with others and maintaining interactions, has become an inseparable part of people's lives. Although the formation of problematic use of social media has been extensively discussed by scholars, it is mainly confined to the individual level and lacks a macro perspective from the external environment. This study draws on the perspective of institutional theory and introduces copresence as a mediating role, aiming to investigate the influence mechanism of social environmental forces on individuals' problematic use of social media. An online survey (N = 462) was conducted to collect data and test the research model. Our data were analyzed using the structural equation modeling (SEM) approach. Results show that social environmental forces exert an impact on problematic use of social media through the sense of copresence, and only mimetic force can directly affect behavior outcomes while the other two forces can not. Besides, social environmental forces have a relationship with people's sense of copresence while using social media. Among them, mimetic force and normative force positively correlate with copresence while coercive force is negatively related to copresence. Furthermore, copresence is found to influence problematic use of social media positively. Practical and theoretical implications are discussed.
abstract_id: PUBMED:33391137
Exploring the Role of Social Media Use Motives, Psychological Well-Being, Self-Esteem, and Affect in Problematic Social Media Use. Given recent advances in technology, connectivity, and the popularity of social media platforms, recent literature has devoted great attention to problematic Facebook use. However, exploring the potential predictors of problematic social media use beyond Facebook use has become paramount given the increasing popularity of multiple alternative platforms. In this study, a sample of 584 social media users (Mage = 32.28 years; 67.81% female) was recruited to complete an online survey assessing sociodemographic characteristics, patterns, and preferences of social media use, problematic social media use (PSMU), social media use motives, psychological well-being, self-esteem, and positive and negative affect. Results indicated that 6.68% (n = 39) of all respondents could be potentially classed as problematic users. Moreover, further analysis indicated that intrapersonal motive (β = 0.38), negative affect (β = 0.22), daily social media use (β = 0.18), surveillance motive (β = 0.12), and positive affect (β = -0.09) each predicted PSMU. These variables accounted for about 37% of the total variance in PSMU, with intrapersonal motive driving the greatest predictive contribution, over and above the effects of patterns of social media use and sociodemographic variables. These findings contribute to the increasing literature on PSMU. The results of this study are discussed in light of the existing literature on PSMU.
abstract_id: PUBMED:32467839
Social norms and e-motions in problematic social media use among adolescents. Introduction: Being constantly connected on social media is a "way of being" among adolescents. However, social media use can become "problematic" for some users and only a few studies have explored the concurrent contribution of social context and emotion regulation to problematic social media use. The current study aimed to test: (i) the influence of friends (i.e., their social media use and group norms about social media use); and (ii) the effects of difficulties in emotion regulation and so-called "e-motions" on adolescents' problematic social media use.
Methods: A cross-sectional study was conducted in Italian secondary schools. An online questionnaire was administered to 761 adolescents (44.5% females; Mage = 15.49 years; SDage = 1.03).
Results: Path analysis showed that social norms were directly associated with problematic social media use and friends' social media use was associated with the frequency of social media use, which, in turn, was associated with problematic use. Difficulties in emotion regulation were directly and indirectly linked to problematic social media use via frequency of use and facilitating use of e-motions.
Conclusions: These findings provide support for the importance of both peer influence and emotion regulation in this context. Social norms and emotion regulation should be considered in prevention programs addressing problematic social media use in adolescents.
abstract_id: PUBMED:32467856
How maladaptive cognitions contribute to the development of problematic social media use. The current study investigates the effects of perfectionism discrepancies (PD) and social hopelessness (SH) on problematic social media use as conceptualized by the cognitive-behavioral model.
Methods: A sample of 400 university students (52.3% women; mean age = 22.01 ± 1.99) completed measures assessing PD, SH, and problematic social media use.
Results: Structural equation modeling showed that both social hopelessness and feeling discrepant from personal and prescribed standards predicted the preference for online social interactions (POSI). POSI predicted the motivation to use online social media as a means of alleviating distressing feelings, the inability to regulate social media use and the negative outcomes resulting from use of SNS.
Conclusions: In line with the cognitive-behavioral model of problematic Internet use, the present study suggests the primary importance of maladaptive cognitions about the self (i.e. perfectionism discrepancies) and the world (i.e. social hopelessness) for the development of a preference for online social interactions. In particular, the present study shows that individuals are likely to opt for online social interactions as a function of their pessimistic social expectancies and the sense of inadequacy that comes from perceptions of falling short of expectations.
Answer: The media coverage of celebrity DUIs tends to be problematic social modeling rather than teachable moments. The qualitative analysis of US media coverage of four female celebrities' DUI arrests found that the stories were brief, episodic, and focused on glamorous celebrity images. They included routine discussion of the consequences for the celebrities themselves without much consideration of the public health dimensions of drinking and driving or possible prevention measures. This suggests that media attention to such events is a missed opportunity that could be addressed through media advocacy efforts to promote individual and collective responsibility for preventing injury (PUBMED:19221172).
In general, media coverage can have a significant influence on social behavior and public perception. For instance, media coverage of suicides and search frequency on suicides can predict the number of tweets seeking others for a suicide pact, indicating that media portrayal can have a direct impact on individuals' actions in relation to sensitive issues (PUBMED:37840788). Similarly, the media's repetitive coverage of a celebrity suicide with little emphasis on prevention or help-seeking information demonstrates that media often does not follow recommendations for responsible reporting, which could otherwise serve as teachable moments (PUBMED:33241738).
Moreover, the effects of media coverage on behavior are not limited to immediate actions but can also influence long-term patterns, such as the association between media coverage of senior suicides and increased suicide rates among the elderly (PUBMED:35189764).
In the context of social media, the problematic use of these platforms is influenced by various personal characteristics and social variables, including social anxiety and the motives behind social media use, such as seeking recognition or coping with loneliness (PUBMED:30838670, PUBMED:38028729, PUBMED:36820174). These findings suggest that media portrayals, including those of celebrity DUIs, can contribute to problematic social modeling rather than serving as educational opportunities. |
Instruction: Margin Status in Shave Biopsies of Nonmelanoma Skin Cancers: Is It Worth Reporting?
Abstracts:
abstract_id: PUBMED:27116089
Margin Status in Shave Biopsies of Nonmelanoma Skin Cancers: Is It Worth Reporting? Context: -The practice of reporting margin status in biopsies is relatively unique to biopsies of the skin and highly variable among pathologists.
Objective: -To address the accuracy of margin evaluation in shave biopsies of nonmelanoma skin cancers.
Design: -We collected shave biopsies of squamous and basal cell carcinomas that appeared to have uninvolved margins on routine sign out. We obtained deeper levels on corresponding tissue blocks until blocks were exhausted and examined them for tumor at biopsy margins.
Results: -Forty-seven consecutive cases were collected, including 20 squamous cell (43%) and 27 basal cell (57%) carcinomas. Eleven of 47 cases (23%) with negative margins at initial diagnosis demonstrated positive margins upon deeper-level examination. Margins of 8 of 27 basal cell carcinomas (30%) and 3 of 20 squamous cell carcinomas (15%) were erroneously classified as "negative" on routine examination.
Conclusions: -No guidelines exist regarding the reporting of margins in nonmelanoma skin cancer biopsies, and reporting practices vary extensively among pathologists. We found that nearly one-quarter of positive margins in shave biopsies for cutaneous carcinomas are missed on standard histologic examination. Moreover, reporting of a positive margin may also be misleading if the clinician has definitively treated the skin cancer at the time of biopsy. For these reasons, and as routine exhaustion of all tissue blocks is impractical, the decision to include or exclude a comment regarding the margin status should be given conscious consideration, accounting for the clinical intent of the biopsy and any known information regarding postbiopsy treatment.
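As a side note on interpreting the headline proportion in this abstract (11 of 47 margins reclassified as positive), an exact binomial confidence interval can be attached to such an estimate as sketched below; this calculation is illustrative and not part of the original study.

    from scipy.stats import beta

    def clopper_pearson(k, n, alpha=0.05):
        """Exact (Clopper-Pearson) binomial confidence interval for k events in n trials."""
        lower = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
        upper = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
        return lower, upper

    lo, hi = clopper_pearson(11, 47)
    print(f"11/47 = {11/47:.1%} (95% exact CI {lo:.1%} to {hi:.1%})")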
abstract_id: PUBMED:32419172
Prospective study of pigmented lesions managed by shave excision with no deep margin transection of melanomas. Shave excision is a simple and cost-effective technique for the removal of suitable skin lesions. We performed a prospective study over six months, collecting data from pigmented lesions that were treated with shave excision by dermatologists. Only shave excisions with the intent to remove the lesion in toto were included. A total of 349 lesions were included in this study, 50 (14%) of these were melanomas and no melanoma diagnosed had deep margin involvement, while 13 (26%) had lateral margin involvement.
abstract_id: PUBMED:30005107
Margin Assessment for Punch and Shave Biopsies of Dysplastic Nevi. Introduction: Biopsies of atypical melanocytic nevi are among the most commonly performed procedures by dermatologists. Margin assessment is often used to guide re-excision, but can be a point of confusion as negative margins reported in the planes of sections examined do not always reflect complete removal of a lesion. This study investigates the rates of false negative margins after both punch and shave biopsies.
Methods: We performed a retrospective analysis of 50 consecutive punch and shave biopsy specimens (1) diagnosed as DN, and (2) reported as having clear margins in the planes of section examined. Identified specimen blocks were then sectioned through to examine true margin involvement.
Results: Of the 50 specimens identified, 20% (n = 10) were found to have positive margins upon additional sectioning. We found no difference between the groups with respect to biopsy technique, type of nevus, degree of atypia, or gender.
Conclusion: This study observed false negative peripheral margin status in a sizeable proportion of biopsy specimens, which did not vary significantly based on biopsy technique or pathologic characteristics. This finding reflects a limitation of standard tissue processing, in which a limited proportion of the true margin is evaluated, and may be of note to many dermatologists who base their decision to re-excise on the reporting of margin involvement. J Drugs Dermatol. 2018;17(7):810-812.
abstract_id: PUBMED:36739830
Clinical Impact and Accuracy of Shave Biopsy for Initial Diagnosis of Cutaneous Melanoma. Introduction: Effective treatment of malignant melanomas is dependent upon accurate histopathological staging of preoperative biopsy specimens. While narrow excision is the gold standard for melanoma diagnosis, superficial shave biopsies have become the preferred method by dermatologists but may transect the lesion and result in inaccurate Breslow thickness assessment. This is a retrospective cohort study evaluating an initial method of biopsy for diagnosis of cutaneous melanoma and indication for reoperation based on inaccurate initial T-staging.
Methods: We retrospectively analyzed consecutive patients referred to the Medical College of Wisconsin, a tertiary cancer center, with a diagnosis of primary cutaneous melanoma. Adult patients seen between 2015 and 2018 were included. Fisher's exact test was used to assess the association between method of initial biopsy and need for unplanned reoperation.
Results: Three hundred twenty three patients with cutaneous melanoma from the head and neck (H&N, n = 101, 31%), trunk (n = 90, 15%), upper extremity (n = 84, 26%), and lower extremity (n = 48, 28%) were analyzed. Median Breslow thickness was 0.54 mm (interquartile range = 0.65). Shave biopsy was the method of initial biopsy in 244 (76%), excision in 23 (7%), and punch biopsy in 56 (17%). Thirty nine (33%) shave biopsies had a positive deep margin, as did seven (23%) punch biopsies and 0 excisional biopsies. Residual melanoma at definitive excision was found in 131 (42.5%) of all surgical specimens: 95 (40.6%) shave biopsy patients, 32 (60.4%) punch biopsy patients, and four (19.0%) excision biopsy patients. Recommendations for excision margin or sentinel lymph node biopsy changed in 15 (6%) shave biopsy patients and five (9%) punch biopsy patients.
Conclusions: Shave biopsy is the most frequent method of diagnosis of cutaneous melanoma in the modern era. While shave and punch biopsies may underestimate true T-stage, there was no difference in need for reoperation due to T-upstaging based on initial biopsy type, supporting current diagnostic practices. Partial biopsies can thus be used to guide appropriate treatment and definitive wide local excision when adjusting for understaging.
abstract_id: PUBMED:30037763
Initial Misidentification of Thumb Poroma by Shave Biopsy. Poromas are benign adnexal neoplasms originating from the intraepidermal portion of sweat gland ducts. With the possibility of malignant transformation, accurate clinical diagnosis and treatment are crucial. Numerous reports of hand poroma lesions have been reported. We present an unusual case of a distal thumb poroma originally identified as a squamous cell lesion in a shave biopsy and eventually accurately identified after excisional biopsy. This report highlights the limitations of shave biopsy associated with soft tissue hand lesions and the need to consider poroma when evaluating a soft tissue lesion of the hand.
abstract_id: PUBMED:10997312
Shave excision of melanocytic nevi of the skin: indications, technique, results. Background And Objective: Shave excision of nevi is a technique still under debate. Speed, simplicity, and the fact that it provides excised material for histologic examination are contrasted with the lack of excision margins and a higher rate of nevus recurrences. In this study, the pros and cons of shave excision were evaluated.
Patients And Methods: Conventional excisions (268 nevi with intracutaneous butterfly sutures) and shave excisions (403 nevi) were compared using the patients' subjective assessments and objective parameters such as recurrence, color, depth, surface smoothness of the scars, and the healing process. The nevi, found on the entire integument, ranged in diameter from 2 to 15 mm, with an average of 5 mm. A second excision was performed only in cases in which an early malignant melanoma could not be excluded.
Results: Shave excisions were evaluated subjectively as being better. Shave excisions resulted in fewer complications (7.9% versus 15%), but recurrences were more frequent (18.1% versus 6.0%). There was no close relationship between histopathologic finding of complete excision and recurrences.
Conclusions: Small nevi without clinical suspicion of malignant melanoma can be removed with the shave excision technique with good results. Patients should be informed about the higher rate of recurrences. The application of the shave technique requires exact knowledge and experience, enabling good histopathologic examinations.
abstract_id: PUBMED:33782802
Impact of Shave Biopsy on Diagnosis and Management of Cutaneous Melanoma: A Systematic Review and Meta-Analysis. Background: Melanoma is the most lethal skin cancer. Excision biopsy is generally recommended for clinically suspicious pigmented lesions; however, a proportion of cutaneous melanomas are diagnosed by shave biopsy. A systematic review was undertaken to investigate the impact of shave biopsy on tumor staging, treatment recommendations, and prognosis.
Methodology: The MEDLINE, Embase, and Cochrane Library databases were searched for relevant articles. Data on deep margin status on shave biopsy, tumor upstaging, and additional treatments on wide local excision (WLE), disease recurrence, and survival effect were analyzed across studies.
Results: Fourteen articles from 2010 to 2020 were included. In total, 3713 patients had melanoma diagnosed on shave biopsy. Meta-analysis revealed a positive deep margin in 42.9% of shave biopsies. Following WLE, change in tumor stage was reported in 7.7% of patients. Additional treatment was recommended for 2.3% of patients in the form of either further WLE and/or sentinel lymph node biopsy. There was high heterogeneity across studies in all outcomes. Four studies reported survival, while no studies found any significant difference in disease-free or overall survival between shave biopsy and other biopsy modalities.
Conclusions: Just over 40% of melanomas diagnosed on shave biopsy report a positive deep margin; however, this translated into a change in tumor stage or treatment recommendations in relatively few patients (7.7% and 2.3%, respectively), with no impact on local recurrence or survival among the studies analyzed.
abstract_id: PUBMED:27436823
Dysplastic (or Atypical) Nevi Showing Moderate or Severe Atypia With Clear Margins on the Shave Removal Specimens Are Most Likely Completely Excised. Background: Dysplastic nevi (DN) are graded by their degree of atypia into 3 categories of mild, moderate, and severe. In many practices, DN with moderate or severe atypia are generally excised regardless of the status of the shave specimen margins.
Objective: With a new approach toward the margins on the shave removal specimens (SRS), the goal herein is to assess whether the shave removal procedure can sufficiently remove DN with moderate or severe atypia.
Methods: A total of 426 SRS diagnosed with DN showing moderate or severe atypia between January and December 2015, along with their post-shave excision specimens, were reviewed. Based on the author's experience, clear or negative margins on the SRS were defined as neoplastic melanocytes confined within the specimen and more than 0.2 mm from the lateral and deep specimen margins. The biopsy specimens were accompanied by Melan-A immunostaining highlighting the subtle neoplastic cells.
Results: With a negative predictive value (NPV) of 98.4% (confidence interval: 97.2% to 100%, P < .001), DN showing moderate or severe atypia with clear margins are most likely removed by the shave procedure.
Conclusion: Routine excision of DN showing moderate or severe atypia with clear margins on SRS is not necessary. Regular surveillance is sufficient.
abstract_id: PUBMED:21944522
The impact of shave biopsy on the management of patients with thin melanomas. Disagreement persists regarding the role that various biopsy methods should play in the diagnosis of primary cutaneous melanoma. We analyzed the indications for sentinel lymph node (SLN) biopsy and the rates of SLN involvement among biopsy techniques and deep margin status to attempt to determine the impact of shave biopsy on the surgical management of patients with thin melanoma. All patients who underwent SLN biopsy for melanoma with Breslow thickness less than 1 mm between 1998 and 2006 were identified. Patient and tumor characteristics were compared using χ² tests for categorical variables. Continuous variables were reported as a mean ± standard deviation and analyzed using the t test. Of the 260 patients diagnosed with thin melanomas, 159 (61.2%) were diagnosed by shave biopsy; 101 (38.8%) were diagnosed by other techniques. Of the 159 patients diagnosed by shave biopsy, 18.2% (n = 29) underwent SLN biopsy with the only indication being a positive deep margin. The frequency of SLN positivity did not differ between the biopsy groups (3.1% vs 4.0%, P = 0.726) or between groups that had positive or negative deep margins (3.0% vs 3.3%, P = 0.839, respectively). The increased rate of SLN biopsy resulting from shave biopsy should limit its use in patients unable to undergo general anesthesia. However, shave biopsy is a reasonable diagnostic method for patients at low risk for general anesthesia, particularly because it results in comparably low rates of positive SLN. Thus each patient's unique clinical situation should be considered when deciding which biopsy technique is appropriate.
abstract_id: PUBMED:25229784
To scoop or not to scoop: the diagnostic and therapeutic utility of the scoop-shave biopsy for pigmented lesions. Background: Concern over transection of melanomas has inhibited many practitioners from using the scoop-shave for removal of pigmented lesions.
Objective: To assess the safety and efficacy of the scoop-shave for pigmented lesions.
Materials And Methods: The practitioner's clinical diagnosis, intent (sample or completely remove), and removal technique (excision, punch, shave biopsy, or scoop-shave) were recorded. Pathology results including the status of the peripheral and deep margins were subsequently documented.
Results: Over an 8-month period, 333 procedures were performed. Of the 11 melanomas (6 in situ and 5 invasive) removed by the scoop-shave, none had positive deep margins and 6 (2 in situ and 4 invasive) were completely removed. One of the 50 dysplastic nevi removed by scoop-shave had a positive deep margin (moderately dysplastic). Forty-six dysplastic nevi were completely removed by the scoop-shave. When the practitioner's intent was "complete removal," the lesion was completely removed 73.1% of the time by scoop-shave, 91% by standard excision, 18.1% by shave biopsy, and 78.6% by punch excision (p < .0001).
Conclusion: The scoop-shave is a safe and effective technique for diagnosis and treatment of melanocytic lesions.
Answer: The practice of reporting margin status in shave biopsies of nonmelanoma skin cancers is variable and lacks standardized guidelines. A study found that nearly one-quarter of positive margins in shave biopsies for cutaneous carcinomas are missed on standard histologic examination, with 30% of basal cell carcinomas and 15% of squamous cell carcinomas having margins erroneously classified as "negative" on routine examination (PUBMED:27116089). This suggests that reporting margin status can be inaccurate and may not always reflect the true surgical margin status. Additionally, reporting a positive margin may be misleading if the clinician has already definitively treated the skin cancer at the time of biopsy. Given these findings, the decision to include or exclude a comment regarding the margin status should be carefully considered, taking into account the clinical intent of the biopsy and any known information regarding postbiopsy treatment (PUBMED:27116089).
In the context of melanoma, shave biopsy is the most frequent method of diagnosis, and while it may underestimate the true T-stage, there was no difference in the need for reoperation due to T-upstaging based on initial biopsy type (PUBMED:36739830). However, for nonmelanoma skin cancers, the clinical utility of reporting margin status in shave biopsies remains a subject of debate, and the decision to report should be individualized based on the specific clinical scenario and the potential impact on patient management. |
Instruction: Do neuropsychological tests have the same meaning in Spanish speakers as they do in English speakers?
Abstracts:
abstract_id: PUBMED:33356892
Native Spanish-speaker's test performance and the effects of Spanish-English bilingualism: results from the neuropsychological norms for the U.S.-Mexico Border Region in Spanish (NP-NUMBRS) project. Objective: We aimed to investigate whether or not demographically-corrected test scores derived from the Neuropsychological Norms for the U.S.-Mexico Border Region in Spanish (NP-NUMBRS) would be less accurate if applied to Spanish-speakers with various degrees of Spanish-English bilingualism. Method: One hundred and seventy primarily Spanish-speaking adults from the NP-NUMBRS project completed a comprehensive neuropsychological test battery. T-scores adjusted for age, education, and sex (but not degree of bilingualism) were derived for each test utilizing population-specific normative data. English fluency was assessed via the Controlled Oral Word Association Test in English (F-A-S), and Spanish fluency with "P-M-R," and degree of relative English fluency was calculated as the ratio of English language words over total words produced in both languages. Effects of degree of bilingualism on the NUMBRS battery test scores (raw scores and T-scores) were examined via Pearson's product-moment correlation coefficients, and language groups (Spanish dominant vs. relative bilingual) were compared on demographically adjusted T-scores via independent samples t-tests. Results: Higher Spanish-English bilingualism was associated with higher education and SES, and was significantly associated with higher raw scores on all tests, but only associated with higher T-scores on a limited number of tests (i.e., WAIS-III Digit Symbol, Symbol Search, Letter-Number Sequencing and Trails B). Conclusion: Degree of Spanish-English bilingualism generally did not account for significant variance in the normed tests beyond the standard demographic adjustments on most tests. Overall, the normative adjustments provided by the NP-NUMBRS project appear applicable to native Spanish speakers from the U.S.-Mexico border region with various degrees of Spanish-English bilingualism.
abstract_id: PUBMED:32141802
The state of neuropsychological test norms for Spanish-speaking adults in the United States. Objective: The present review paper aimed to identify published neuropsychological test norms developed for Spanish-speakers living in the United States (U.S.). Methods: We conducted a systematic review of the literature via an electronic search on PubMed using keywords "Normative data," "Neuropsychological test," "norms", "Hispanic/Latinos," "Spanish Speakers," and "United States." We added other studies and published manuals as identified by citations in papers from the original search. Results: Eighteen sources of normative data for Spanish-speakers in the U.S. were identified. Of the 18 citations identified, only four provide normative data on comprehensive batteries of tests for Spanish-Speakers. Two of these are based on persons living in the southwest of the U.S., who tend to be of Mexican origin. Overall, a number of the studies are focused on older persons and although the majority include participants with wide ranges of education, participants at the ends of the education distribution tend to be underrepresented. Conclusion: Here we provide a detailed description of the neuropsychological normative data currently available for Spanish-speakers living in the U.S. While there has been increased attention towards developing norms for neuropsychological batteries in Spanish-speaking countries (e.g., Latin America and Spain), there is still an urgent need to standardize neuropsychological tests among diverse groups of Spanish-speaking adults living in the U.S. The present review presents a list of norms for U.S.-dwelling Spanish-speakers, thus providing an important tool for clinicians and researchers.
abstract_id: PUBMED:32985352
Demographically adjusted norms for the Trail Making Test in native Spanish speakers: Results from the neuropsychological norms for the US-Mexico border region in Spanish (NP-NUMBRS) project. Objective: Despite the wide use of the Trail Making Test (TMT), there is a lack of normative data for Spanish speakers living in the USA. Here we describe the development of regional norms for the TMT for native Spanish speakers residing in the Southwest Mexico-Border Region of the USA.
Method: Participants were 252 healthy native Spanish speakers, 58% women, from ages 19 to 60, and ranging in education from 0 to 20 years, recruited in San Diego, CA and Tucson, AZ. All completed the TMT in Spanish along with a comprehensive neuropsychological test battery as part of their participation in the Neuropsychological Norms for the US-Mexico Border Region in Spanish (NP-NUMBRS) project. Univariable and interactive effects of demographics on test performance were examined. T-scores were calculated using fractional polynomial equations to account for linear and any non-linear effects of age, education, and sex.
Results: Older age and lower education were associated with worse scores on both TMT A and B. No sex differences were found. The newly derived T-scores showed no association with demographic variables and displayed the expected 16% rates of impairment using a -1 SD cut point based on a normal distribution. By comparison, published norms for English-speaking non-Hispanic Whites applied to the current data yielded significantly higher impairment for both TMT A and B, with more comparable rates using non-Hispanic African American norms.
Conclusions: Population-specific, demographically adjusted regional norms improve the utility and diagnostic accuracy of the TMT for use with native Spanish speakers in the US-Mexico Border region.
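For context on the "expected 16% rates of impairment" mentioned above: demographically corrected T-scores have a mean of 50 and a standard deviation of 10, so a -1 SD cut point corresponds to T < 40, below which roughly 16% of a normal distribution falls. The snippet below is a minimal illustrative calculation only, not part of the NP-NUMBRS analyses, and assumes scipy is available.

from scipy.stats import norm

# Proportion of a normal distribution falling more than 1 SD below the mean.
print(f"Below -1 SD: {norm.cdf(-1.0):.1%}")                      # ~15.9%, i.e. the expected ~16%

# The same cut point expressed on the T-score scale (mean 50, SD 10): T < 40.
print(f"Below T = 40: {norm.cdf(40, loc=50, scale=10):.1%}")     # ~15.9%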
abstract_id: PUBMED:20438217
Do neuropsychological tests have the same meaning in Spanish speakers as they do in English speakers? Objective: The purpose of this study was to examine whether neuropsychological tests translated into Spanish measure the same cognitive constructs as the original English versions.
Method: Older adult participants (N = 2,664), who did not exhibit dementia from the Washington Heights Inwood Columbia Aging Project (WHICAP), a community-based cohort from northern Manhattan, were evaluated with a comprehensive neuropsychological battery. The study cohort includes both English (n = 1,800) and Spanish speakers (n = 864) evaluated in their language of preference. Invariance analyses were conducted across language groups on a structural equation model comprising four neuropsychological factors (memory, language, visual-spatial ability, and processing speed).
Results: The results of the analyses indicated that the four-factor model exhibited partial measurement invariance, demonstrated by invariant factor structure and factor loadings but nonequivalent observed score intercepts.
Conclusion: The finding of invariant factor structure and factor loadings provides empirical evidence to support the implicit assumption that scores on neuropsychological tests are measuring equivalent psychological traits across these two language groups. At the structural level, the model exhibited invariant factor variances and covariances.
abstract_id: PUBMED:33868081
Perspective-Taking With Deictic Motion Verbs in Spanish: What We Learn About Semantics and the Lexicon From Heritage Child Speakers and Adults. In English, deictic verbs of motion, such as come can encode the perspective of the speaker, or another individual, such as the addressee or a narrative protagonist, at a salient reference time and location, in the form of an indexical presupposition. By contrast, Spanish has been claimed to have stricter requirements on licensing conditions for venir ("to come"), only allowing speaker perspective. An open question is how a bilingual learner acquiring both English and Spanish reconciles these diverging language-specific restrictions. We face this question head on by investigating narrative productions of young Spanish-English bilingual heritage speakers of Spanish, in comparison to English monolingual and Spanish dominant adults and children. We find that the young heritage speakers produce venir in linguistic contexts where most Spanish adult speakers do not, but where English monolingual speakers do, and also resemble those of young monolingual Spanish speakers of at least one other Spanish dialect, leading us to generate two mutually-exclusive hypotheses: (a) the encoding of speaker perspective in the young heritage children is cross-linguistically influenced by the more flexible and dominant language (English), resulting in a wider range of productions by these malleable young speakers than the Spanish grammar actually allows, or (b) the young Spanish speakers are exhibiting productions that are in fact licensed in the grammar, but which are pruned away in the adult productions, being supplanted by other forms as the lexicon is enriched. Given independent evidence of the heritage speakers' robust Spanish linguistic competence, we turn to systematically-collected acceptability judgments of three dialectal varieties of monolingual adult Spanish speakers of the distribution of perspective-taking verbs, to assess their competence and adjudicate between (a) and (b). We find that adults accept venir in contexts in which they do not produce it, leading us to argue that (a) venir is not obligatorily speaker-oriented in Spanish, as has been claimed, (b) adults may not produce venir in these contexts because they instead select more specific motion verbs, and (c) for heritage bilingual children, the more dominant language (English) may support the grammatically licensed but lexically-constrained productions in Spanish.
abstract_id: PUBMED:30886827
Facebook for recruiting Spanish- and English-speaking smokers. Background: Recruitment for research is usually expensive and time consuming. Facebook (FB) recruitment has become widely utilized in recent years. The main aim of this study was to assess FB as a recruitment tool in a study for Spanish- and English-speaking smokers. Additionally, the study set out to compare performance of ads by language (Spanish vs. English), location (U.S. vs. San Francisco) and content (self-efficacy ad vs. fear appeal ad).
Methods: Participants of a one-condition smoking cessation webapp study were recruited utilizing FB ads and posts through two phases: a recruitment-focused phase and an experimental phase comparing language, location and content.
Results: During the recruitment phase 581 participants in total (U.S. = 540, San Francisco = 41) provided consent. Of the U.S. participants 275 were Spanish-speakers and 265 English-speakers. The cost-per-consent was $25.81 for Spanish-speakers, and $15.49 for English-speakers. During the experimental phase U.S. users performed better (i.e. more clicks, engagement and social reach) than San Francisco users, Spanish-speakers engaged more than English-speakers, and the self-efficacy ad performed better than the fear appeal ad.
Conclusions: This study showed that although there were differences in cost-per-consent for Spanish- and English-speakers, recruitment of Spanish-speakers through Facebook is feasible. Furthermore, comparing performance of ads by location, language, and ad content may contribute to developing more efficient campaigns.
abstract_id: PUBMED:24418057
Performance of Spanish-speaking community-dwelling elders in the United States on the Uniform Data Set. Background: Spanish is the second-most common language spoken in the United States, and Spanish speakers represent one third of the aging population. The National Alzheimer's Coordinating Center's Uniform Data Set implemented a Spanish neuropsychological battery. Previous work described the neuropsychological performance for English speakers. Here we describe performance on the Spanish version.
Methods: Data from 276 Spanish speakers with normal cognition were summarized, with descriptive tables of performance on individual cognitive tests. Regression techniques were used to evaluate the effect of demographics on cognitive performance.
Results: Spanish speakers were younger (70.0 vs 74.0 years) and less educated (10.7 vs 15.7 years) with more females (76% vs 63% female) than the previously described English speakers. Higher education and lower age were associated with better performance.
Conclusion: This national cohort of well-characterized Spanish-speaking elders provides descriptive data on cognitive performance, an important tool for clinical and research efforts.
abstract_id: PUBMED:37325730
The use of mazes over time in Spanish heritage speakers in the US. Introduction: Mazes are linguistic disfluencies such as filled pauses, repetitions, or revisions of grammatical, phonological, or lexical aspects of words that do not contribute to the meaning of a sentence. Bilingual children are believed to increase the numbers of mazes in their native or heritage language, the minority language, as they become more proficient in the second language, the societal language. Mazes may increase over time in bilingual Spanish-speaking children as they become more proficient in English, the societal language in the United States. However, current studies have not been conducted longitudinally. Higher rates of mazes in the heritage language over time may be due to changes in language proficiency and differences in processing demands in the children as they use more complex language. Moreover, children with developmental language disorder (DLD) can also present higher rates of mazes than children with typical language. Heritage speakers, therefore, are at risk of being misdiagnosed with DLD due to high rates of mazes. Currently, we do not understand what the typical rates of mazes are as heritage speakers get older and become more proficient in the societal language. The current study examined the type and frequency of Spanish mazes longitudinally in a group of 22 Spanish heritage speakers with and without DLD and determined the changes over time.
Methods: A total of 11 children with typical language development (TLD) and 11 with DLD participated in this 5-year longitudinal study. Using a wordless picture book, children completed a retelling task in Spanish during the spring of each academic year (PK to 3rd grade) as part of a 5-h testing battery. Narratives were transcribed and coded for types of mazes (filled pauses, repetitions, grammatical revisions, phonological revisions, and lexical revisions).
Results And Conclusion: The results of the study indicate that TLD children increased their overall percentage of mazed words and utterances. The opposite pattern was observed in the DLD group, which decreased their percentage of mazed words and utterances. In contrast, both groups demonstrated a decrease in repetitions in first grade and an increase in third grade. Additionally, the TLD and DLD children decreased in the percentage of fillers in first grade and then increased in the third grade. Results suggest that maze use is quite variable in heritage speakers and does not necessarily differentiate groups. Clinicians should not rely solely on mazes to determine ability status. In fact, high use of mazes can reflect typical language development.
abstract_id: PUBMED:31898931
Chinese-English Speakers' Perception of Pitch in Their Non-Tonal Language: Reinterpreting English as a Tonal-Like Language. Changing the F0-contour of English words does not change their lexical meaning. However, it changes the meaning in tonal languages such as Mandarin. Given this important difference and knowing that words in the two languages of a bilingual lexicon interact, the question arises as to how Mandarin-English speakers process pitch in their bilingual lexicon. The few studies that addressed this question showed that Mandarin-English speakers did not perceive pitch in English words as native English speakers did. These studies, however, used English words as stimuli failing to examine nonwords and Mandarin words. Consequently, possible pre-lexical effects and L1 transfer were not ruled out. The present study fills this gap by examining pitch perception in Mandarin and English words and nonwords by Mandarin-English speakers and a group of native English controls. Results showed the tonal experience of Chinese-English speakers modulated their perception of pitch in their non-tonal language at both pre-lexical and lexical levels. In comparison to native English controls, tonal speakers were more sensitive to the acoustic salience of F0-contours in the pre-lexical processing due to top-down feedback. At the lexical level, Mandarin-English speakers organized words in their two languages according to similarity criteria based on both F0 and segmental information, whereas only the segmental information was relevant to the control group. These results in perception together with consistently reported production patterns in previous literature suggest that Mandarin-English speakers process pitch in English as if it was a one-tone language.
abstract_id: PUBMED:32077791
Demographically-adjusted norms for the Grooved Pegboard and Finger Tapping tests in Spanish-speaking adults: Results from the Neuropsychological Norms for the U.S.-Mexico Border Region in Spanish (NP-NUMBRS) Project. Objective: We developed demographically-corrected norms for Spanish-speakers from the U.S.-Mexico border regions of California and Arizona on two tests of motor skills - the Grooved Pegboard Test (Pegboard) and Finger Tapping Test (Tapping) - as part of a larger normative effort.
Method: Participants were native Spanish-speakers from the Neuropsychological Norms for the U.S.-Mexico Border Region in Spanish (NP-NUMBRS) Project (Pegboard: N = 254; Tapping: N = 183; age: 19-60 years; education: 0-20 years; 59% women). We examined the association of demographics (age, education and gender) with raw scores. Raw test scores were then converted to demographically-corrected T-scores via fractional polynomial equations. We also examined rates of impairment (T-score < 40) based on the current norms and on previously published norms for English-speaking non-Hispanic Whites and Blacks.
Results: Having more years of education was associated with better raw test score performance on both tests (p < .001), and increased age was associated with worse performance on Pegboard (p < .001). Men outperformed women on Tapping, and older age was associated with lower raw scores in men only on the Tapping non-dominant hand trial (p = .02). The normed T-scores were confirmed to be normally distributed and free from demographic influences, and resulted in expected rates of impairment. Applying existing norms for English-speaking non-Hispanic Whites and Blacks to the raw scores of Spanish-speakers generally yielded lower than expected impairment rates (2-13%), with one exception: non-dominant Pegboard, for which non-Hispanic White norms overestimated impairment (23%).
Conclusions: Present findings underscore the importance of appropriate, population-specific normative data, even for tests of motor ability.
Answer: The research suggests that neuropsychological tests translated into Spanish measure similar cognitive constructs as the original English versions, but there are nuances to consider. A study examining older adult participants from the Washington Heights Inwood Columbia Aging Project (WHICAP) found that a four-factor neuropsychological model (memory, language, visual-spatial ability, and processing speed) exhibited partial measurement invariance across English and Spanish speakers. This means that the factor structure and factor loadings were invariant, but observed score intercepts were not equivalent, indicating that the tests are measuring equivalent psychological traits across the two language groups (PUBMED:20438217).
However, the application of demographically-corrected test scores derived from the Neuropsychological Norms for the U.S.-Mexico Border Region in Spanish (NP-NUMBRS) to Spanish speakers with varying degrees of English fluency showed that the degree of Spanish-English bilingualism did not account for significant variance in the normed tests beyond standard demographic adjustments on most tests. This suggests that the normative adjustments provided by the NP-NUMBRS project are applicable to native Spanish speakers from the U.S.-Mexico border region with various degrees of Spanish-English bilingualism (PUBMED:33356892).
Moreover, the development of regional norms for the Trail Making Test (TMT) for native Spanish speakers residing in the Southwest Mexico-Border Region of the USA indicated that population-specific, demographically adjusted regional norms improve the utility and diagnostic accuracy of the TMT for use with native Spanish speakers in the US-Mexico Border region (PUBMED:32985352).
In summary, while neuropsychological tests translated into Spanish aim to measure the same cognitive constructs as their English counterparts, the meaning of test scores can be influenced by factors such as the test-taker's language proficiency, bilingualism, and cultural background. Therefore, it is essential to use population-specific and demographically adjusted norms to ensure accurate interpretation of test results for Spanish speakers (PUBMED:20438217; PUBMED:33356892; PUBMED:32985352). |
Instruction: Are patients with psoriasis susceptible to the classic risk factors for actinic keratoses?
Abstracts:
abstract_id: PUBMED:15262690
Are patients with psoriasis susceptible to the classic risk factors for actinic keratoses? Background: An increased prevalence of benign solar damage (eg, facial wrinkles) but not neoplastic lesions was observed among patients with psoriasis who were exposed to Dead Sea climatotherapy compared with controls.
Objectives: To compare the prevalence of actinic keratosis in psoriatic patients and controls and to assess whether known risk factors behave similarly in both groups.
Design: Multicenter cross-sectional study.
Setting: Dermatology clinics in 4 participating Israeli hospitals and at a Dead Sea clinic.
Participants: Adult subjects (n = 460) with plaque-type psoriasis were recruited from the Israel Psoriasis Association (volunteer sample) and from dermatology clinics (convenience sample). The control group (n = 738) consisted of nonimmunosuppressed patients attending these clinics for benign conditions unrelated to sun exposure, such as atopic or contact dermatitis.
Main Outcome Measures: Prevalence and distribution of actinic keratoses and odds ratios associated with skin, hair, and eye color and propensity or history of sunburn adjusted for age, ethnicity, and sun exposure.
Results: Actinic keratoses were observed in 200 controls (27%) and 51 subjects (11%) (P<.001). This increased prevalence occurred in both sexes, participants aged 35 years or older, all ethnic groups, smokers, and nonsmokers. The anatomical distribution of lesions did not substantially differ between subjects and controls. In multivariate analysis, psoriasis conferred a protective effect (odds ratio, <1), as did dark skin, dark eyes, and a history of severe sunburn in childhood. However, significant interactions were observed between psoriasis and hair color as well as psoriasis and propensity to sunburn, whereby a linear association was observed for controls but not for patients with psoriasis.
Conclusions: Psoriasis confers protection against actinic keratosis. Hair color and propensity to sunburn exert differential effects among psoriatic patients and controls.
abstract_id: PUBMED:30064923
Psoriasis and the risk of acute coronary syndrome in the elderly. Background: Psoriasis has been associated with a higher prevalence of cardiovascular disease risk factors. However, there is inadequate quantification on the association between psoriasis and acute coronary syndrome (ACS), particularly in the elderly. Therefore, the aim of the present study was to assess the risk of ACS according to history of psoriasis in subjects aged 75 years and older.
Methods: We carried out a case-control study based on 1455 cases and 1108 controls. Cases were all the patients admitted in the randomized Elderly ACS 2 trial. Controls were selected from subjects aged ≥75 years included in the Prevalence of Actinic Keratoses in the Italian Population Study (PraKtis), based on a representative sample of the general Italian population. Odds ratios (OR) of ACS according to history of psoriasis were obtained using a multiple logistic regression model including terms for age, sex and smoking.
Results: The prevalence of psoriasis was lower among cases (12/1455, 0.8%) than among controls (18/1108, 1.6%). The multivariate OR of ACS according to history of psoriasis was 0.51 (95% confidence interval: 0.23-1.09).
Conclusions: Our data does not support an association between psoriasis and risk of ACS in the elderly.
abstract_id: PUBMED:11924829
Human papillomavirus infection and skin cancer risk in organ transplant recipients. Warts and squamous cell carcinomas are important cutaneous complications in organ transplant recipients. The role of infection with human papillomaviruses (HPV) in the development of cutaneous squamous cell carcinoma is still unclear. An extremely diverse group of HPV types, mainly consisting of epidermodysplasia-verruciformis (EV)-associated HPV types, can be detected in benign, premalignant, and malignant skin lesions of organ transplant recipients. Frequently, there are multiple HPV types present in single skin biopsies. Typically, the prevalence of viral warts rises steadily after transplantation and a strong association exists between the number of HPV-induced warts and the development of skin cancer. The interval between the transplantation to the development of warts is clearly shorter than the interval from transplantation to the diagnosis of the first skin cancer. A comparison of transplant recipients with and without skin cancer, however, showed an equally high prevalence of EV-HPV DNA in keratotic skin lesions in both groups of patients and the detection rate and spectrum of HPV infection in hyperkeratotic papillomas, actinic keratoses, and squamous cell carcinomas was also similar. HPV DNA can frequently be detected in patients with hyperproliferative disorders like psoriasis and antibodies against HPV in patients with regenerating skin (e.g., after extensive second degree burns). Latent infection with EV-HPV seems to be widespread. The hair follicle region might be the reservoir of EV-HPV. The E6 protein from a range of cutaneous HPV types effectively inhibits apoptosis in response to UV-light induced damage. It is therefore conceivable that individuals who are infected by EV-HPV are at an increased risk of developing actinic keratoses and squamous cell carcinomas, possibly by chronically preventing UV-light induced apoptosis.
abstract_id: PUBMED:1552048
PUVA and skin cancer. A historical cohort study on 492 patients. Background: The safety of psoralen plus ultraviolet A (PUVA) light therapy has been an issue of debate. A few multiple-center cooperative studies have reported an increase of basal cell and squamous cell carcinomas among PUVA-treated patients. In our institute, more than 1000 patients have been treated with PUVA since 1975.
Objective: We investigated the incidence of skin cancer among patients who received high doses of PUVA to see whether such incidence increased.
Methods: This is a historical cohort study of two comparison groups of patients. Subjects under study were 492 psoriasis patients who received PUVA treatments between 1975 and 1989. One group of 103 patients, defined as the high-dose group, received an accumulated PUVA dose of 1000 joules/cm2 or more; another group of 389 patients, as the low-dose group, received 200 joules/cm2 or less. The occurrence of skin cancer in the two comparison groups is analyzed.
Results: In the high-dose group we observed an increased number of patients with squamous cell carcinoma, keratoacanthoma, and actinic keratosis. We did not see any patients with genital cancer, melanoma, or an increased number of patients with basal cell carcinoma.
Conclusion: The risk of squamous cell carcinoma developing in patients who received a high dose of PUVA is confirmed. We speculate a combination of factors, including PUVA, may contribute to this risk.
abstract_id: PUBMED:32367558
Dermatological diseases presented before COVID-19: Are patients with psoriasis and superficial fungal infections more vulnerable to the COVID-19? Recent studies have focused on the comorbid conditions of COVID-19. According to these studies, patients with numerous conditions, including lung disease, cardiovascular disease and immunosuppression, appear to be at higher risk for severe forms of COVID-19. To date, there are no data in the literature on comorbid dermatologic diseases and COVID-19. We analyzed the prior dermatological comorbidities of 93 patients with COVID-19 (51 males, 42 females) who had presented to dermatology outpatient clinics during the previous 3 years. Among these patients, the most common dermatologic diseases over that period were superficial fungal infections (24, 25.8%), seborrheic dermatitis (11, 11.8%), actinic keratosis (10, 10.8%), psoriasis (6, 6.5%), and eczema (6, 6.5%). In addition, the number of COVID-19 patients who had presented to dermatology in the last 3 months was 17 (11 men, 6 women). The median age of these patients was 58 (minimum 18, maximum 80) years, and their most common dermatologic diseases before the COVID-19 diagnosis were superficial fungal infections (5, 25%), psoriasis (4, 20%), and viral skin diseases (3, 15%). The possible similarity between cutaneous and mucosal immunity and immunosuppression suggests that patients with some dermatologic diseases, especially superficial fungal infections and psoriasis, may be more vulnerable to COVID-19.
abstract_id: PUBMED:3982991
Risk of cutaneous carcinoma in psoriatic patients treated with PUVA. A total of 1047 psoriatic patients treated with PUVA were examined for lentiginosis, actinic keratosis and skin cancer. Of these, 128 patients had earlier been treated with arsenic, 4 with ionizing radiation and 6 with methotrexate. Lentiginosis was found in 6.2% and actinic keratosis in 1.8% of the patients. Two patients had histologically verified basal cell carcinoma, 1 had squamous cell carcinoma and 1 had Bowen's disease. In patients followed for more than 4 years, those who had received a cumulative dose of UVA greater than or equal to 1000 J/cm2, as well as those who had received more than 150 irradiations, had a significantly greater frequency of lentiginosis and actinic keratosis. This seemed to be the case also in patients with earlier arsenic treatment. The number of skin malignancies detected did not significantly exceed the expected incidence in the normal Finnish population of the same age-group.
abstract_id: PUBMED:6192635
Skin carcinomas and treatment with photochemotherapy (PUVA). A 3 1/2-year follow-up study of 198 patients treated with photochemotherapy (PUVA) revealed a total of 18 carcinomas that developed in 11 patients. There were 12 basal cell carcinomas and 6 squamous cell carcinomas, localized mainly on non-sun-exposed areas. Furthermore, 9 actinic keratoses were diagnosed in 8 patients. All patients with carcinomas had been exposed to at least one of the following possible risk factors: ionizing radiation, methotrexate (MTX), arsenic, or topical nitrogen mustard, or had a history of skin carcinoma previous to PUVA therapy. No significant differences in accumulated UVA dose existed between patients with carcinoma or keratosis and patients without tumours. A subgroup of 38 psoriatics previously treated with MTX was compared with a control group of 101 psoriatics treated with MTX but never with PUVA. The control group was matched for sex, age and presence of the risk factors: ionizing radiation, arsenic and history of carcinoma. The carcinoma incidence in the PUVA-MTX group was 9% and in the control group 11%. The difference was not significant (p = 0.9).
abstract_id: PUBMED:30536640
Risk of skin cancer in psoriasis patients receiving long-term narrowband ultraviolet phototherapy: Results from a Taiwanese population-based cohort study. Background: Narrowband ultraviolet B (NB-UVB) phototherapy is a widely used treatment for various dermatoses. The risk of skin cancer following long-term NB-UVB phototherapy has rarely been explored in skin phototypes III-V.
Methods: We conducted a nationwide-matched cohort study and identified a total of 22 891 psoriasis patients starting NB-UVB phototherapy from the Taiwan National Health Insurance Database during the period 2000-2013. Cumulative incidences of skin cancers were compared between subjects receiving less than 90 UVB treatments (S-cohort, N = 13 260) and age- as well as propensity score-matched subjects receiving more than or equal to 90 UVB treatments (L-cohort, N = 3315).
Results: There were no significant differences in the overall cumulative incidences of skin cancers between the two cohorts (log-rank test, P = 0.691) during the follow-up periods. The S-cohort had a significantly lower prevalence of actinic keratosis when compared with the L-cohort (0.54% vs 1.00%, P = 0.005).
Conclusion: Long-term NB-UVB phototherapy does not increase skin cancer risk compared with short-term NB-UVB phototherapy in psoriasis patients with skin phototypes III-V.
abstract_id: PUBMED:35783563
Risk of Skin Cancer with Phototherapy in Moderate-to-Severe Psoriasis: An Updated Systematic Review. Phototherapy is a standard treatment for moderate-to-severe psoriasis. However, concern remains regarding the associated cutaneous carcinogenic risk. Our objective was to conduct a systematic review of skin cancer risk for psoriasis patients treated with phototherapy. To achieve our goal, we searched the Cochrane, PubMed, and Embase databases. We evaluated the existing literature (from July 1, 2010, to December 31, 2020) on phototherapy for all Fitzpatrick skin phototypes (FSP); the search yielded 71 articles, of which eight were categorized in this review. Five studies did not report an increased skin cancer risk with narrowband ultraviolet B (UVB) and unspecified UVB for FSP II through VI, with one study not reporting FSP. Three studies did report an increased risk of skin cancer with narrowband-UVB and broadband-UVB for FSP I-VI, with one study also not specifying skin phototypes or UVB phototherapy type. Additionally, a study with psoralen and ultraviolet A with and without narrowband-UVB demonstrated an increased risk of skin cancer in phototypes III and IV. The most commonly reported secondary outcomes with phototherapy were actinic keratosis (123) and solar lentigines (10). Numerous patients were also on additional therapies including methotrexate, acitretin, and biologics. Study limitations include publication bias due to the limited number of studies published on this topic in the last ten years, along with heterogeneity in reporting. The relationship between phototherapy, psoriasis, and cutaneous oncogenic risk remains contradictory. While phototherapy for psoriasis is an efficacious therapy, further studies are needed to understand the cutaneous oncogenic risk based on FSP to help clinicians tailor treatment recommendations based on skin phototypes.
abstract_id: PUBMED:7130492
Epidermal dystrophy and actinic keratoses in psoriasis patients following oral psoralen photochemotherapy (PUVA). Follow-up study. Focal dystrophy of epidermal cells, which was initially reported in 19 of 37 patients with psoriasis who had been treated with psoralen and ultraviolet A (PUVA), has now been observed in more than half of 70 patients 1 year or more following onset of PUVA therapy. These dystrophic changes, which are similar to those found in actinic keratoses, were present in clinically uninvolved skin of sunlight-protected and sunlight-exposed areas. Control biopsies obtained prior to PUVA therapy of non-sun-exposed skin in 62 patients and sun-exposed skin of 22 of these revealed no such changes. The presence of epidermal dystrophic changes in 9 patients who had a several-month interruption in PUVA therapy indicates that these changes cannot all be attributed to acute effects of PUVA. In 104 PUVA-treated patients returning for dermatologic follow-up after the first year of therapy, 17 (16.3%) developed actinic keratoses during the course of, or following the cessation of, treatment with PUVA. Careful examination of these patients prior to PUVA therapy had revealed no such lesions. In the majority of patients, keratoses were multiple; they occurred in sun-exposed areas, suggesting a possible acceleration or promotion by PUVA of actinically induced lesions.
Answer: Patients with psoriasis do not appear to be more susceptible to the classic risk factors for actinic keratoses. In fact, psoriasis may confer a protective effect against actinic keratosis. A multicenter cross-sectional study comparing the prevalence of actinic keratosis in psoriatic patients and controls found that actinic keratoses were less prevalent in subjects with psoriasis (11%) compared to controls (27%). The study also observed that psoriasis conferred a protective effect, and there were significant interactions between psoriasis and hair color as well as psoriasis and propensity to sunburn, indicating that the linear association observed for controls was not present for patients with psoriasis (PUBMED:15262690).
Additionally, a study on the risk of skin cancer in psoriasis patients receiving long-term narrowband ultraviolet phototherapy found no significant differences in the overall cumulative incidences of skin cancers between cohorts receiving short-term and long-term phototherapy. The study even reported a significantly lower prevalence of actinic keratosis in the short-term treatment cohort compared to the long-term treatment cohort (PUBMED:30536640).
However, it is important to note that psoriasis patients who have undergone certain treatments, such as PUVA (psoralen plus ultraviolet A light therapy), may have an increased risk of developing squamous cell carcinoma, keratoacanthoma, and actinic keratosis, particularly when high doses of PUVA are used (PUBMED:1552048).
Overall, while psoriasis itself may offer some protection against actinic keratosis, the risk can be influenced by the type of treatments patients with psoriasis undergo, such as PUVA therapy. It is also important to consider individual patient factors and treatment history when assessing the risk of actinic keratosis and other skin cancers in patients with psoriasis. |
Instruction: Do screening trial recruitment logs accurately reflect the eligibility criteria of a given clinical trial?
Abstracts:
abstract_id: PUBMED:24685345
Do screening trial recruitment logs accurately reflect the eligibility criteria of a given clinical trial? Early lessons from the RAVES 0803 trial. Aims: Maintaining clinical trial screening logs and reporting data from such logs are given importance due to the relevance of a trial's patient population to the generalisability of its findings. However, screening logs may not always reflect a clinical trial's true target population. The aim of the present study was to define and compare 'apparent recruitment' to a trial as captured in a clinical trial screening log with 'true recruitment', which considers all potentially eligible patients. The Trans Tasman Radiation Oncology Group (TROG) 0803 RAVES clinical trial was used to examine the above.
Materials And Methods: A prospective surgical database was interrogated over a 12-month period to identify patients potentially eligible for the TROG 0803 RAVES trial. Information on whether patients were referred to a RAVES trial recruitment site and reasons for non-referral were obtained.
Results: Of 92 men undergoing radical prostatectomy, 28 met the RAVES clinical trial eligibility criteria. Fifteen of the 28 eligible men were assessed at a RAVES trial site, with five being ultimately recruited to RAVES (33% 'apparent recruitment fraction' as captured by the site's trial screening log). The 'true recruitment fraction' was 5/28 (18%).
Conclusion: Screening logs at a recruiting trial site may underestimate the trial's target population and overestimate recruitment. Only a subpopulation of all eligible patients may be captured in trial screening logs and subsequently reported on. This may affect the generalisability of the trial's reported findings.
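To make the two denominators in this abstract explicit: the 'apparent' recruitment fraction divides the number recruited by the eligible patients captured on the site's screening log, while the 'true' fraction divides by all potentially eligible patients identified in the surgical database. The short sketch below simply reproduces that arithmetic with the counts reported above; the variable names are illustrative only.

# Counts reported in the RAVES example above.
eligible_in_database = 28    # all potentially eligible patients (surgical database)
assessed_at_trial_site = 15  # eligible patients captured on the site's screening log
recruited = 5

apparent_fraction = recruited / assessed_at_trial_site   # 5/15, about 33%
true_fraction = recruited / eligible_in_database          # 5/28, about 18%
print(f"Apparent recruitment fraction: {apparent_fraction:.0%}")
print(f"True recruitment fraction: {true_fraction:.0%}")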
abstract_id: PUBMED:32641097
The implementation and utility of patient screening logs in a multicentre randomised controlled oncology trial. Background: The utility of patient screening logs and their impact on improving trial recruitment rates are unclear. We conducted a retrospective exploratory analysis of screening data collected within a multicentre randomised controlled trial investigating chemotherapy for upper tract urothelial carcinoma.
Methods: Participating centres maintained a record of patients meeting basic screening criteria stipulated in the trial protocol, submitting logs regularly to the clinical trial coordinating centre (CTC). Sites recorded the number of patients ineligible, not approached, declined and randomised. The CTC monitored proportions of eligible patients, approach rate (proportion of eligible patients approached) and acceptance rate (proportion recruited of those approached). Data were retrospectively analysed to identify patterns of screening activity and correlation with recruitment.
Results: Data were collected between May 2012 and August 2016, during which time 71 sites were activated, giving a recruitment period of 2768 centre-months. A total of 1138 patients were reported on screening logs, with 2300 requests for logs sent by the CTC and 47% of expected logs received. A total of 758 patients were reported as ineligible, 36 eligible patients were not approached and 207 declined trial participation. The approach rate was 91% (344/380), and the acceptance rate was 40% (137/344); these rates remained consistent throughout the data collection. The main reason patients provided for declining (99/207, 48%) was not wanting to receive chemotherapy. There was a moderately strong correlation (r = 0.47) between the number reported on screening logs and the number recruited per site. Considerable variation in data between centres was observed, and 54/191 trial participants (28%) enrolled during this period were not reported on logs.
Conclusions: Central collection of screening logs can identify reasons for patients declining trial participation and help monitor trial activity at sites; however, obtaining complete data can be challenging. There was a correlation between the number of patients reported on logs and recruitment; however, this was likely confounded by sites' available patient population. The use of screening logs may not be appropriate for all trials, and their use should be carefully considered in relation to the associated workload. No evidence was found that central collection of screening logs improved recruitment rates in this study, and their continued use warrants further investigation.
Trial Registration: ISRCTN98387754 . Registered on 31 January 2012.
abstract_id: PUBMED:32609092
An Ensemble Learning Strategy for Eligibility Criteria Text Classification for Clinical Trial Recruitment: Algorithm Development and Validation. Background: Eligibility criteria are the main strategy for screening appropriate participants for clinical trials. Automatic analysis of clinical trial eligibility criteria by digital screening, leveraging natural language processing techniques, can improve recruitment efficiency and reduce the costs involved in promoting clinical research.
Objective: We aimed to create a natural language processing model to automatically classify clinical trial eligibility criteria.
Methods: We proposed a classifier for short text eligibility criteria based on ensemble learning, where a set of pretrained models was integrated. The pretrained models included state-of-the-art deep learning methods for training and classification, including Bidirectional Encoder Representations from Transformers (BERT), XLNet, and A Robustly Optimized BERT Pretraining Approach (RoBERTa). The classification results by the integrated models were combined as new features for training a Light Gradient Boosting Machine (LightGBM) model for eligibility criteria classification.
Results: Our proposed method obtained an accuracy of 0.846, a precision of 0.803, and a recall of 0.817 on a standard data set from a shared task of an international conference. The macro F1 value was 0.807, outperforming the state-of-the-art baseline methods on the shared task.
Conclusions: We designed a model for screening short text classification criteria for clinical trials based on multimodel ensemble learning. Through experiments, we concluded that performance was improved significantly with a model ensemble compared to a single model. The introduction of focal loss could reduce the impact of class imbalance to achieve better performance.
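The method described above is a stacking ensemble: class probabilities produced by several pretrained transformer classifiers (BERT, XLNet, RoBERTa) are concatenated and used as input features for a LightGBM meta-classifier. The sketch below illustrates only that stacking step, under the assumption that the base-model probabilities have already been computed and saved; the file names, hyperparameters, and availability of the lightgbm package are assumptions, and this is not the authors' implementation.

import numpy as np
from lightgbm import LGBMClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

# Hypothetical inputs: per-criterion class probabilities from each base model,
# each of shape (n_samples, n_classes), plus integer class labels.
bert_probs = np.load("bert_probs.npy")
xlnet_probs = np.load("xlnet_probs.npy")
roberta_probs = np.load("roberta_probs.npy")
labels = np.load("criteria_labels.npy")

# Concatenate base-model outputs into one feature matrix for the meta-classifier.
features = np.hstack([bert_probs, xlnet_probs, roberta_probs])
X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, random_state=42, stratify=labels)

meta_clf = LGBMClassifier(n_estimators=200, learning_rate=0.05)
meta_clf.fit(X_train, y_train)
predictions = meta_clf.predict(X_test)
print("Macro F1:", f1_score(y_test, predictions, average="macro"))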
abstract_id: PUBMED:37128426
Parsable Clinical Trial Eligibility Criteria Representation Using Natural Language Processing. Successful clinical trials offer better treatments to current or future patients and advance scientific research.1,2,3 Clinical trials define the target population using specific eligibility criteria to ensure an optimal enrollment sample.4 Clinical trial eligibility criteria are often described in unstructured free-text5 which makes automation of the recruitment process challenging. This contributes to the long-standing problem of insufficient enrollment of clinical trials.6,7 This study uses a machine learning approach to extract clinical trial eligibility criteria, and convert them into structured queryable formats using descriptive statistics based on medical entity frequency and binary entity relationships. We present a JSON-based structural representation of clinical trials eligibility criteria for clinical trials to follow.
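The abstract describes converting free-text eligibility criteria into a structured, queryable JSON representation built from extracted medical entities and binary relations, but the schema itself is not reproduced here. The snippet below is a hypothetical example of what such a parsed criterion could look like; all field names and values are invented for illustration.

import json

# Hypothetical structured form of the free-text inclusion criterion
# "Adults aged 18-75 with histologically confirmed urothelial carcinoma".
parsed_criterion = {
    "criterion_type": "inclusion",
    "source_text": "Adults aged 18-75 with histologically confirmed urothelial carcinoma",
    "entities": [
        {"entity": "age", "attribute": "range", "value": [18, 75], "unit": "years"},
        {"entity": "urothelial carcinoma", "attribute": "diagnosis",
         "modifier": "histologically confirmed"},
    ],
    "relations": [
        {"type": "AND", "arguments": ["age", "urothelial carcinoma"]},
    ],
}

print(json.dumps(parsed_criterion, indent=2))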
abstract_id: PUBMED:33393014
Real-World Data for Planning Eligibility Criteria and Enhancing Recruitment: Recommendations from the Clinical Trials Transformation Initiative. The growing availability of real-world data (RWD) creates opportunities for new evidence generation and improved efficiency across the research enterprise. To varying degrees, sponsors now regularly use RWD to make data-driven decisions about trial feasibility, based on assessment of eligibility criteria for planned clinical trials. Increasingly, RWD are being used to support targeted, timely, and personalized outreach to potential trial participants that may improve the efficiency and effectiveness of the recruitment process. This paper highlights recommendations and resources, including specific case studies, developed by the Clinical Trials Transformation Initiative (CTTI) for applying RWD to planning eligibility criteria and recruiting for clinical trials. Developed through a multi-stakeholder, consensus- and evidence-driven process, these actionable tools support researchers in (1) determining whether RWD are fit for purpose with respect to study planning and recruitment, (2) engaging cross-functional teams in the use of RWD for study planning and recruitment, and (3) understanding patient and site needs to develop successful and patient-centric approaches to RWD-supported recruitment. Future considerations for the use of RWD are explored, including ensuring full patient understanding of data use and developing global datasets.
abstract_id: PUBMED:33813032
A knowledge base of clinical trial eligibility criteria. Objective: We present the Clinical Trial Knowledge Base, a regularly updated knowledge base of discrete clinical trial eligibility criteria equipped with a web-based user interface for querying and aggregate analysis of common eligibility criteria.
Materials And Methods: We used a natural language processing (NLP) tool named Criteria2Query (Yuan et al., 2019) to transform free text clinical trial eligibility criteria from ClinicalTrials.gov into discrete criteria concepts and attributes encoded using the widely adopted Observational Medical Outcomes Partnership (OMOP) Common Data Model (CDM) and stored in a relational SQL database. A web application accessible via RESTful APIs was implemented to enable queries and visual aggregate analyses. We demonstrate CTKB's potential role in EHR phenotype knowledge engineering using ten validated phenotyping algorithms.
Results: At the time of writing, CTKB contained 87,504 distinctive OMOP CDM standard concepts, including Condition (47.82%), Drug (23.01%), Procedure (13.73%), Measurement (24.70%) and Observation (5.28%), with 34.78% for inclusion criteria and 65.22% for exclusion criteria, extracted from 352,110 clinical trials. The average hit rate of criteria concepts in eMERGE phenotype algorithms is 77.56%.
Conclusion: CTKB is a novel comprehensive knowledge base of discrete eligibility criteria concepts with the potential to enable knowledge engineering for clinical trial cohort definition, clinical trial population representativeness assessment, electronic phenotyping, and data gap analyses for using electronic health records to support clinical trial recruitment.
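A knowledge base of discrete criteria concepts lends itself to simple relational queries, for example counting how often a concept appears as an inclusion versus an exclusion criterion. The sketch below uses an in-memory SQLite table with an invented schema; CTKB's real database layout and APIs are not described in the abstract, so this is purely illustrative.

```python
# Illustrative aggregate query over a toy table of discrete criteria concepts.
# The schema is invented for this sketch and is not CTKB's actual data model.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE criteria_concepts (
        trial_id TEXT, concept_name TEXT, domain TEXT, criterion_type TEXT
    )
""")
conn.executemany(
    "INSERT INTO criteria_concepts VALUES (?, ?, ?, ?)",
    [
        ("NCT001", "Type 2 diabetes mellitus", "Condition", "inclusion"),
        ("NCT001", "Pregnancy", "Condition", "exclusion"),
        ("NCT002", "Type 2 diabetes mellitus", "Condition", "inclusion"),
        ("NCT002", "Metformin", "Drug", "exclusion"),
    ],
)

# How often does each concept occur as an inclusion vs. an exclusion criterion?
rows = conn.execute("""
    SELECT concept_name, criterion_type, COUNT(*) AS n
    FROM criteria_concepts
    GROUP BY concept_name, criterion_type
    ORDER BY n DESC
""").fetchall()
for name, ctype, n in rows:
    print(f"{name:<28} {ctype:<10} {n}")
```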
abstract_id: PUBMED:34042723
Potential Role of Clinical Trial Eligibility Criteria in Electronic Phenotyping. 2,719 distinctive phenotyping variables from 176 electronic phenotypes were compared with 57,150 distinctive clinical trial eligibility criteria concepts to assess the phenotype knowledge overlap between them. We observed a high percentage (69.5%) of eMERGE phenotype features and a lower percentage (47.6%) of OHDSI phenotype features matched to clinical trial eligibility criteria, possibly due to the relative emphasis on specificity for eMERGE phenotypes and the relative emphasis on sensitivity for OHDSI phenotypes. The study results show the potential of reusing clinical trial eligibility criteria for phenotyping feature selection and moderate benefits of using them for local cohort query implementation.
abstract_id: PUBMED:34414397
Transformer-Based Named Entity Recognition for Parsing Clinical Trial Eligibility Criteria. The rapid adoption of electronic health record (EHR) systems has made clinical data available in electronic format for research and for many downstream applications. Electronic screening of potentially eligible patients using these clinical databases for clinical trials is a critical need to improve trial recruitment efficiency. Nevertheless, manually translating free-text eligibility criteria into database queries is labor intensive and inefficient. To facilitate automated screening, free-text eligibility criteria must be structured and coded into a computable format using controlled vocabularies. Named entity recognition (NER) is thus an important first step. In this study, we evaluate 4 state-of-the-art transformer-based NER models on two publicly available annotated corpora of eligibility criteria released by Columbia University (i.e., the Chia data) and Facebook Research (i.e., the FRD data). Four transformer-based models (i.e., BERT, ALBERT, RoBERTa, and ELECTRA) pretrained with general English domain corpora vs. those pretrained with PubMed citations, clinical notes from the MIMIC-III dataset and eligibility criteria extracted from all the clinical trials on ClinicalTrials.gov were compared. Experimental results show that RoBERTa pretrained with MIMIC-III clinical notes and eligibility criteria yielded the highest strict and relaxed F-scores in both the Chia data (i.e., 0.658/0.798) and the FRD data (i.e., 0.785/0.916). With promising NER results, further investigations on building a reliable natural language processing (NLP)-assisted pipeline for automated electronic screening are needed.
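For readers unfamiliar with how such models are applied, the sketch below runs a Hugging Face token-classification pipeline over a criterion sentence. It uses a general-purpose English NER checkpoint as a stand-in; the models evaluated in the study were pretrained or fine-tuned on clinical and eligibility-criteria corpora, which is not reproduced here.

```python
# Minimal transformer-based NER over an eligibility criterion.
# "dslim/bert-base-NER" is a general-purpose English checkpoint used only as a
# stand-in; the study's models were adapted to clinical and criteria text.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="dslim/bert-base-NER",
    aggregation_strategy="simple",  # merge word pieces into whole-entity spans
)

criterion = "Patients with type 2 diabetes on metformin, age 18 to 75 years."
for entity in ner(criterion):
    print(entity["word"], entity["entity_group"], round(float(entity["score"]), 3))
```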
abstract_id: PUBMED:35612118
Implementation of a Clinical Trial Recruitment Support System Based on Fast Healthcare Interoperability Resources (FHIR) in a Cardiology Department. Clinical Trial Recruitment Support Systems can boost patient inclusion in clinical trials by automatically analyzing eligibility criteria based on electronic health records. However, a lack of interoperability has hindered the introduction of such systems on a broader scale. Therefore, our aim was to develop a recruitment support system based on FHIR R4 and evaluate its usage and features in a cardiology department. Clinical conditions, anamnesis, examinations, allergies, medication, laboratory data and echocardiography results were imported as FHIR resources. Clinical trial information, eligibility criteria and recruitment status were recorded using the appropriate FHIR resources without extensions. Eligibility criteria linked by the logical operation "OR" were represented by using multiple FHIR Group resources for enrollment. The system was able to identify 52 of 55 patients included in four clinical trials. In conclusion, use of FHIR for defining eligibility criteria of clinical trials may facilitate interoperability and allow automatic screening for eligible patients at multiple sites of different healthcare providers in the future. Upcoming changes in FHIR should allow easier description of "OR"-linked eligibility criteria.
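To make the FHIR-based approach concrete, the sketch below builds a Group resource whose characteristic elements encode one inclusion and one exclusion criterion, in line with the R4 specification. The SNOMED CT codes are only examples, the resource is merely printed here, and, as the abstract notes, OR-linked criteria would require additional Group resources.

```python
# Sketch of representing AND-linked eligibility criteria as a FHIR R4 Group resource.
# The codes are examples and the resource is only printed; in a real system it
# would be POSTed to {base}/Group on a FHIR R4 server.
import json

eligibility_group = {
    "resourceType": "Group",
    "type": "person",
    "actual": False,  # descriptive group: defines who would qualify, not enrolled people
    "name": "Hypothetical trial - cohort A eligibility",
    "characteristic": [
        {   # inclusion criterion: diagnosis of heart failure
            "code": {"coding": [{"system": "http://snomed.info/sct", "code": "84114007"}]},
            "valueBoolean": True,
            "exclude": False,
        },
        {   # exclusion criterion: pregnancy
            "code": {"coding": [{"system": "http://snomed.info/sct", "code": "77386006"}]},
            "valueBoolean": True,
            "exclude": True,
        },
    ],
}

print(json.dumps(eligibility_group, indent=2))
```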
abstract_id: PUBMED:32562334
Lenvatinib in patients with unresectable hepatocellular carcinoma who do not meet the REFLECT trial eligibility criteria. Aim: This study aimed to determine the efficacy and safety of lenvatinib for patients with unresectable hepatocellular carcinoma (HCC) who did not meet REFLECT eligibility criteria (phase 3 clinical trial).
Methods: In this multicenter retrospective study, patients with unresectable HCC who were treated with lenvatinib between 2018 and 2019 and had adequate clinical data were included. Objective response rate, progression-free survival (PFS) and safety were evaluated according to whether or not patients met the eligibility criteria of the REFLECT trial.
Results: Of the 105 patients included, 61% (64 of 105) did not meet the REFLECT eligibility criteria. Safety and median PFS of lenvatinib were similar between the patients who did and those who did not meet the criteria. Among the patients who did not meet the criteria, 28, 27, 14, 6, 7 and 5 had a history of tyrosine kinase inhibitor (TKI) treatment, Child-Pugh score B, HCC in ≥50% of the liver, reduced platelet count, bile duct invasion and main portal vein invasion, respectively. The efficacy and safety of lenvatinib for patients with or without Child-Pugh score B or HCC in ≥50% of the liver were similar. Although treatment outcome was not significantly different, patients with TKI treatment history tended to have longer median PFS, whereas those with main portal vein invasion tended to have shorter median PFS.
Conclusion: Lenvatinib was effective for patients who did not meet the REFLECT inclusion criteria. However, the treatment outcome may vary according to several factors, such as a history of TKI treatment and tumor invasion.
Answer: Screening trial recruitment logs may not always accurately reflect the eligibility criteria of a given clinical trial. The study on the RAVES 0803 trial found that screening logs at a recruiting trial site underestimated the trial's target population and overestimated recruitment. Only a subpopulation of all eligible patients was captured in trial screening logs and subsequently reported on, which could affect the generalizability of the trial's reported findings (PUBMED:24685345). Similarly, another study on a multicentre randomised controlled oncology trial showed that while central collection of screening logs can identify reasons for patients declining trial participation and help monitor trial activity at sites, obtaining complete data can be challenging. There was a correlation between the number of patients reported on logs and recruitment; however, this was likely confounded by sites' available patient population. The study concluded that the use of screening logs may not be appropriate for all trials, and their use should be carefully considered in relation to the associated workload (PUBMED:32641097).
These findings suggest that while screening logs are a tool for understanding recruitment activities, they may not provide a fully accurate or complete picture of trial eligibility and recruitment. The discrepancies between the logs and actual eligibility may be due to various factors, including incomplete data collection, variations in site practices, and the challenges in capturing all potentially eligible patients. Therefore, relying solely on screening logs to understand a trial's recruitment and eligibility may lead to an incomplete understanding of the trial's population and potential biases in the findings. |
Instruction: Could honey have a place in colitis therapy?
Abstracts:
abstract_id: PUBMED:12207075
Could honey have a place in colitis therapy? Effects of honey, prednisolone, and disulfiram on inflammation, nitric oxide, and free radical formation. Background/aims: The purpose of this study was to investigate the potential therapeutic roles of honey, prednisolone and disulfiram in an experimental model of inflammatory bowel disease. Another aspect of the study was to find out whether these substances have any effect on nitric oxide (NO) and free radical production.
Methods: After the induction of colitis with trinitrobenzene sulfonic acid in 64 male rats, physiological saline, honey, prednisolone and disulfiram enemas were applied to the rats once daily for 3 days (acute treatment groups) or 7 days (chronic treatment groups). Control groups received only saline enemas. Rats were killed on the 4th or 8th days and their colonic mucosal damage was quantitated using a scoring system. Acute and chronic inflammatory responses were determined by a mucosal injury score, histological examination and measurement of the myeloperoxidase (MPO) activity of tissues. The content of malonylaldehyde (MDA) and NO metabolites in colon homogenates was also measured to assess the effects of these substances on NO and free oxygen radical production.
Results: Estimation of colonic damage by mucosal injury scoring was found to be strongly correlated with the histologic evaluation of colon specimens. On the other hand, mucosal injury scores were not correlated with MPO, MDA or NO values. There were significant differences between the MPO results of chronic-control and chronic-honey groups, as well as chronic-control and chronic-prednisolone groups (p = 0.03 and p = 0.0007). The acute honey, prednisolone, and disulfiram groups had significantly lower MDA results compared to the acute control group (p = 0.04, p = 0.02, and p = 0.04). In terms of NO, there was no significant difference between the treatment and control groups. NO was found to have a strong relationship with MDA (p = 0.03) and MPO values (p = 0.001). On the other hand, MPO results were not found to be correlated with MDA values (p > 0.05).
Conclusions: MPO activity is not directly proportional to the severity of the inflammation; it may only reflect the amount of neutrophils in the tissues. Inflammatory cells are not the sole intensifying factor in colitis. Therefore, mucosal injury scores may not correlate well with MPO activities. In an inflammatory state, NO and MPO levels have a strong relationship, since NO is released from neutrophils. In an inflammatory model of colitis, intrarectal honey administration is as effective as prednisolone treatment. Honey may have a role in the treatment of colitis, but this issue requires further investigation. Honey, prednisolone and even disulfiram also have some value in preventing the formation of free radicals released from the inflamed tissues. Prednisolone may also have some possible benefit in inhibiting NO production in colitis therapy.
abstract_id: PUBMED:34466462
Comparison of Antioxidant and Anti-Inflammatory Effects of Honey and Spirulina platensis with Sulfasalazine and Mesalazine on Acetic Acid-Induced Ulcerative Colitis in Rats. Background: Antioxidant therapy has gained attention for the treatment of ulcerative colitis (UC). The excessive generation of reactive oxygen/nitrogen species in the gastrointestinal tract increases oxidative stress, thereby leading to antioxidant defense depletion, lipid peroxidation, inflammation, tissue damage, and ulceration. Spirulina platensis (SP) and honey are excellent sources of potent antioxidants such as polyphenols and other bioactive compounds. We aimed to investigate antioxidant and anti-inflammatory effects of honey and SP in comparison with sulfasalazine (SSZ) and mesalazine on acetic acid-induced colitis (AA-colitis) in rats.
Materials And Methods: Fifty-six Sprague Dawley male rats were allocated to seven groups, with each group comprising eight rats. UC was induced, except in normal controls (NC). All groups received oral treatments for seven days. The normal saline solution of 2 mL was intrarectally administered to the NC group. The AA-colitis and NC groups received 2 mL acetic acid intrarectally as a single dose and 2 mL normal saline for seven consecutive days orally. The mesalazine group received 100 mg/kg mesalazine, the SSZ group 360 mg/kg SSZ, the honey or H group 1 mL honey diluted with 1 mL distilled water, the SH group 1g/kg SP and 1 mL honey, and the SP group 1g/kg SP. After clinical activity score assessment, the rats were sacrificed. Colonic weight/length ratio, prostaglandin E2 (PGE2), myeloperoxidase (MPO), nitric oxide (NO), malondialdehyde (MDA), interleukin-1β (IL-1β), tumor necrosis factor-α (TNF-α), interleukin-6 (IL-6), glutathione peroxidase (GPx), total antioxidant capacity (TAC), reduced glutathione (GSH), and superoxide dismutase (SOD) were measured. Colonic histopathological changes were observed microscopically.
Results: Treatment of UC with SP, honey, and combination regimen significantly reduced TNF-α, IL-1β, IL-6, MDA, MPO, NO, and PGE2, and increased TAC, GSH, GPx, and SOD in interventional groups compared to the AA-colitis group (P<0.05).
Conclusion: Honey and SP might be beneficial food supplements for medical nutrition therapy in UC.
abstract_id: PUBMED:18814487
Effect of Manuka honey and sulfasalazine in combination to promote antioxidant defense system in experimentally induced ulcerative colitis model in rats. Manuka honey (MH, 5 g/kg) provided protection against trinitrobenzene sulfonic acid-induced colonic damage. Combination therapy (MH + sulfasalazine) also reduced colonic inflammation, and all biochemical parameters improved significantly compared with the control and the MH-alone group. Combination therapy showed an additive effect of MH, restoring lipid peroxidation and improving antioxidant parameters. Morphological and histological scores were significantly reduced in the combination groups. In this inflammatory model of colitis, oral administration of MH (5 g/kg), and of sulfasalazine (360 mg/kg) in combination with MH (5 g/kg), significantly reduced colonic inflammation. The results indicate an additive effect of Manuka honey with sulfasalazine in colitis.
abstract_id: PUBMED:12632976
Protective effect of natural honey against acetic acid-induced colitis in rats. Aims: The protective effects of natural honey against acetic acid-induced colitis were investigated in rats.
Methods: Honey and a glucose, fructose, sucrose and maltose mixture were administered, orally and rectally, daily for a period of 4 days. Induction of colitis was done on the third day using 3% acetic acid. Animals were killed on day 4, two hours after administration of the dose, and colonic biopsies were taken for macroscopic scoring, histopathological and biochemical studies.
Results: Honey dose-dependently afforded protection against acetic acid-induced colonic damage. There was almost 100% protection with the highest dose used (5 g/kg), while the glucose, fructose, sucrose and maltose mixture produced no significant protective effect. Also, honey prevented the depletion of reduced glutathione and the antioxidant enzyme catalase and restored the lipid peroxide malondialdehyde towards normal levels.
Conclusions: Further studies are required to explore the active ingredients responsible for the antioxidant effect of honey and its therapeutic potential in humans.
abstract_id: PUBMED:31533201
Honey Polyphenols Ameliorate DSS-Induced Ulcerative Colitis via Modulating Gut Microbiota in Rats. Scope: Ulcerative colitis (UC) is a multifaceted and recurrent immune disorder that requires long-term potent pharmacological treatment. Honey, a natural food with nutritional and pharmaceutical value, has been found to protect against colitis.
Methods And Results: The effects of different constituents in honey are investigated on DSS-induced colitis in rats. Rats are given DSS, sugars, honey, polyphenols, or SASP for a week, with blood and colon samples collected for the biochemical parameters and inflammation-related gene analysis and colon contents for gut microbiota. The results show that pretreatments with honey polyphenols significantly improve SOD, GSH-Px, NO, and MPO levels and reduce DSS-induced colonic apoptosis, the colonic inflammatory cytokines IL-6, TNF-α, and TGF-β1 accompanied by downregulation of IL-1β, IL-6, TNF-α, and IFN-γ gene and upregulation of IκB-α gene. Furthermore, honey polyphenols and SASP show similar microbial community structure shifts and selective enrichment of key species. At the genus level, honey polyphenols significantly reduce the population of Bacteroides, Corynebacterium, and Proteus species. The correlation analysis indicates that colonic gene expression regulated by honey polyphenols is relative to the key species of gut microbiota.
Conclusions: Honey polyphenols improve intestinal inflammation and oxidative stress resistance via modulating gut microbiota, which is conducive to revealing the host-microbe interactions.
abstract_id: PUBMED:33857512
Structural differences of polysaccharides from Astragalus before and after honey processing and their effects on colitis mice. Honey-processed Astragalus is a dosage form of Radix Astragali processed with honey, which exhibits better efficacy of tonifying Qi than the raw product. Polysaccharides are its main water-soluble active components. This work was designed to study the structural differences of homogeneous honey-processed Astragalus polysaccharides (HAPS3a) and Astragalus polysaccharides (APS3a) and their effects on colitis mice. The results showed that HAPS3a (Mw = 2463.5 kDa) and APS3a (Mw = 3373.2 kDa) differed in molecular weight, monosaccharide compositions, glycosidic bonds and degree of branching (DB). Notably, the molar ratios of galactose and galacturonic acid in HAPS3a were 22.66% and 33.24%, while those in APS3a were 11.87% and 49.55%, respectively. The uronic acid residues 1,4-β-GalpA and 1,6-α-GlcpA of the backbone in APS3a were converted into the corresponding neutral residues in HAPS3a after honey processing. The different DB of HAPS3a (15.35%) and APS3a (25.13%) suggested that the chain conformation became smoother. The anti-inflammatory effects on colitis mice revealed that HAPS3a exhibited better effects than APS3a by protecting intestinal mucosa, regulating the expression of cytokines and influencing microbiota diversity. Taken together, the differences in anti-inflammatory activity might be related to structural differences caused by honey processing. Our findings have laid a foundation for the processing mechanism of Astragalus.
abstract_id: PUBMED:18688794
Effect of different doses of Manuka honey in experimentally induced inflammatory bowel disease in rats. To evaluate the effect of different doses of Manuka honey in experimentally induced inflammatory bowel disease in rats. Adult Wistar rats of either sex were used (n = 30). Colitis was induced by a single intracolonic administration of TNBS dissolved in 35% ethanol. The rats (n = 30) were divided into five groups (n = 6) and were treated with vehicle (ethanol), TNBS, Manuka honey (5 g/kg, p.o.), Manuka honey (10 g/kg, p.o.) or sulfasalazine (360 mg/kg, p.o.) body weight for 14 days. After completion of treatment, the animals were killed and the following parameters were assessed: morphological score, histological score and different antioxidant parameters. Manuka honey at different doses provided protection against TNBS-induced colonic damage. There was significant protection with Manuka honey 5 g/kg as well as with 10 g/kg body weight compared with the control (p < 0.001). All the treated groups showed reduced colonic inflammation and all the biochemical parameters were significantly reduced compared with the control in the Manuka honey treated groups (p < 0.001). Manuka honey at different doses restored lipid peroxidation as well as improved antioxidant parameters. Morphological and histological scores were significantly reduced in the low dose Manuka honey treated group (p < 0.001). In the inflammatory model of colitis, oral administration of Manuka honey 5 g/kg and Manuka honey 10 g/kg body weight significantly reduced the colonic inflammation. The present study indicates that Manuka honey is efficacious in the TNBS-induced rat colitis model, but these results require further confirmation in human studies.
abstract_id: PUBMED:20645809
In vivo activity assessment of a "honey-bee pollen mix" formulation. Honey-bee pollen mix (HBM) formulation is claimed to be effective for the treatment of asthma, bronchitis, cancers, peptic ulcers, colitis, various types of infections including hepatitis B, and rheumatism by the herb dealers in northeast Turkey. In the present study, in vivo antinociceptive, anti-inflammatory, gastroprotective and antioxidant effects of pure honey and HBM formulation were evaluated comparatively. HBM did not show any significant gastroprotective activity in a single administration at 250 mg/kg dose, whereas a weak activity was observed after three days of successive administration at 500 mg/kg dose. On the other hand, HBM displayed significant antinociceptive (p <0.01) and anti-inflammatory (p <0.01) activities at 500 mg/kg dose orally without inducing any apparent acute toxicity or gastric damage. HBM was also shown to possess potent antilipidperoxidant activity (p <0.01) at 500 mg/kg dose against acetaminophen-induced liver necrosis model in mice. On the other hand, pure honey did not exert any remarkable antinociceptive, anti-inflammatory and gastroprotective activity, but a potent antilipidperoxidant activity (p <0.01) was determined. Results have clearly proved that mixing pure honey with bee pollen significantly increased the healing potential of honey and provided additional support for its traditional use. Total phenolic and flavonoid contents of HBM were found to be 145 and 59.3 mg/100 g of honey, which were estimated as gallic acid and quercetin equivalents, respectively.
abstract_id: PUBMED:27378376
The dual anti-inflammatory and antioxidant activities of natural honey promote cell proliferation and neural regeneration in a rat model of colitis. A decreased antioxidant capacity and excessive inflammation are well-known features in the pathogenesis of ulcerative colitis (UC). Recent evidence has suggested a role of honey in reducing colitis-induced inflammatory and oxidative stress markers. In this study, we examined whether the anti-inflammatory and anti-oxidative properties of honey have a beneficial effect on the enteric innervation and cellular proliferation of UC in rat. The colitis was induced in rats by dextran sodium sulphate (DSS). The effect of natural honey on induced colitis was assessed by the following parameters in colonic samples: tissue injury, inflammatory infiltration, interleukin-1β and -6, superoxide dismutase and reduced glutathione. In addition, the expression of tumour necrosis factor-α, inducible NO synthase, caspase-3, CD34, Ki67, S100, c-kit, and neuron-specific enolase were examined by immunohistochemistry. Compared to the DSS-induced colitis group, the honey-treated group had significantly improved macroscopic and microscopic scores and exhibited the down-regulation of oxidative, inflammatory, and apoptotic markers. In addition, up-regulation of intrinsic muscular innervation and epithelial cellular proliferation markers was detected. These results provide new insight into the beneficial role of natural honey in the treatment of DSS-induced colitis via the inhibition of colonic motor dysfunction and the inflammatory-oxidative-apoptotic cascade. In addition, the role of honey in epithelial regeneration was clarified.
abstract_id: PUBMED:21221182
Infliximab: the evidence for its place in therapy in ulcerative colitis. Introduction: Refractory ulcerative colitis has a high, unmet medical need for avoiding steroid dependency and avoiding colectomy. Controlled trials with biologic agents have recently been reported.
Aims: We aimed to review the current evidence supporting the use of the monoclonal antitumor necrosis factor antibody, infliximab, in active ulcerative colitis and determine its current place in therapy.
Evidence Review: Despite initially conflicting data, particularly in steroid-refractory patients, two large placebo-controlled trials have shown that intravenous infliximab induces and maintains clinical improvement in a clinically significant proportion of patients when used with scheduled re-treatment. Infliximab also spares steroids and induces endoscopic remission in moderately ill patients. In fulminant colitis unresponsive to intravenous steroids, one placebo-controlled trial indicates that infliximab is able to prevent colectomy in this patient population. Evidence for cost effectiveness and long-term avoidance of colectomy is still lacking.
Place In Therapy: Infliximab 5 mg/kg induction at 0, 2, and 6 weeks, and every 8 weeks thereafter should be considered in patients with moderately to severely active ulcerative colitis failing medical therapy. Steroid-dependent and steroid-refractory patients also qualify for infliximab therapy.
Answer: Based on the provided abstracts, honey could potentially have a place in colitis therapy. Several studies have investigated the effects of honey on colitis in rat models, with findings suggesting that honey has anti-inflammatory and antioxidant properties that may be beneficial in treating colitis.
One study found that intrarectal administration of honey was as effective as prednisolone treatment in an inflammatory model of colitis, suggesting that honey may have therapeutic features in the treatment of colitis (PUBMED:12207075). Another study compared the effects of honey and Spirulina platensis with sulfasalazine and mesalazine, two standard treatments for ulcerative colitis, and found that honey significantly reduced inflammatory markers and increased antioxidant capacity in rats with acetic acid-induced colitis (PUBMED:34466462).
Manuka honey, in particular, was shown to provide protection against colonic damage and reduce inflammation in a rat model of colitis, with the combination of Manuka honey and sulfasalazine having an additive effect (PUBMED:18814487). Similarly, natural honey was found to afford protection against acetic acid-induced colitis and prevent the depletion of antioxidant enzymes in rats (PUBMED:12632976).
Honey polyphenols were also shown to ameliorate DSS-induced ulcerative colitis by modulating gut microbiota and improving intestinal inflammation and oxidative stress resistance in rats (PUBMED:31533201). Additionally, honey-processed Astragalus polysaccharides exhibited better anti-inflammatory effects on colitis mice than unprocessed polysaccharides, suggesting that honey processing can enhance the therapeutic properties of certain substances (PUBMED:33857512).
Furthermore, different doses of Manuka honey were efficacious in reducing colonic inflammation in a TNBS-induced rat colitis model (PUBMED:18688794), and a "honey-bee pollen mix" formulation showed significant antinociceptive, anti-inflammatory, and antioxidant activities in vivo (PUBMED:20645809).
Lastly, natural honey was found to promote cell proliferation and neural regeneration in a rat model of colitis, suggesting that its dual anti-inflammatory and antioxidant activities could be beneficial for the treatment of ulcerative colitis (PUBMED:27378376).
In conclusion, the evidence from these studies indicates that honey, particularly Manuka honey and honey polyphenols, may have a therapeutic role in the management of colitis due to its anti-inflammatory and antioxidant effects. However, further research, including human clinical trials, is necessary to confirm these findings and establish honey as a standard therapy for colitis. |
Instruction: Does sepsis treatment differ between primary and overflow intensive care units?
Abstracts:
abstract_id: PUBMED:22865794
Does sepsis treatment differ between primary and overflow intensive care units? Background: Sepsis is a major cause of death in hospitalized patients. Early goal-directed therapy is the standard of care. When primary intensive care units (ICUs) are full, sepsis patients are cared for in overflow ICUs.
Objective: To determine if process-of-care measures in the care of sepsis patients differed between primary and overflow ICUs at our institution.
Design: We conducted a retrospective study of all adult patients admitted with sepsis between July 2009 and February 2010 to either the primary ICU or the overflow ICU.
Measurements: Baseline patient characteristics and multiple process-of-care measures, including diagnostic and therapeutic interventions.
Results: There were 141 patients admitted with sepsis to our hospital; 100 were cared for in the primary ICU and 41 in the overflow ICU. Baseline acute physiology and chronic health evaluation (APACHE II) scores were similar. Patients received similar processes-of-care in the primary ICU and overflow ICU with the exception of deep vein thrombosis (DVT) and gastrointestinal (GI) prophylaxis within 24 hours of admission, which were better adhered to in the primary ICU (74% vs 49%, P = 0.004, and 68% vs 44%, P = 0.012, respectively). There were no significant differences in hospital and ICU length of stay between the 2 units (9.68 days vs 9.73 days, P = 0.98, and 4.78 days vs 4.92 days, P = 0.97, respectively).
Conclusions: Patients with sepsis admitted to the primary ICU and overflow ICU at our institution were managed similarly. Overflowing sepsis patients to non-primary intensive care units may not affect guideline-concordant care delivery or length of stay.
abstract_id: PUBMED:26349425
Strategies for antifungal treatment failure in intensive care units Recent epidemiologic studies reveal both an increasing incidence and an escalation in resistance of invasive fungal infections in intensive care units. Primary therapy fails in 70 % of cases, depending on the underlying pathogens and diseases. The purpose of this review is to raise awareness for the topic of antifungal therapy failure, describe the clinical conditions in which it occurs, and suggest a possible algorithm for handling the situation of suspected primary therapy failure.
abstract_id: PUBMED:35469029
Antimicrobial stewardship programs in European pediatric intensive care units: an international survey of practices. Antibiotic therapy represents one of the most common interventions in pediatric intensive care units (PICUs). This study aims to describe current antimicrobial stewardship programs (ASP) in European PICUs. A cross-sectional survey was distributed to European pediatric intensive care physicians through the European Society of Neonatal and Pediatric Intensive Care (ESPNIC) Infection, Inflammation, and Sepsis Section, to members of the Spanish Society of Pediatric Intensive Care and of the Pediatric Reanimation and Emergency Care French Group, and to European physicians known to be involved in antimicrobial stewardship programs. Responses from 60 PICUs across 12 countries were analyzed. Fifty-three (88%) stated that an ASP was implemented. The main interventions considered as ASP were the pharmacokinetic monitoring of antimicrobials (n = 41, 77%) and the development of facility-specific clinical practice guidelines (n = 40, 75%). The most common team composition of an antimicrobial stewardship program included a pediatric infectious disease physician, a pharmacist, and a microbiologist (n = 11, 21%).
Conclusion: Although ASP practices were reported to be widely implemented across European PICUs, this survey observed a large heterogeneity in terms of activities and modalities of intervention.
What Is Known: • Antibiotic therapy represents one of the most common interventions in pediatric intensive care units. • The role and subsequent success of antimicrobial stewardship programs has largely been reported in the adult population but scarcely in the pediatric population.
What Is New: • Antimicrobial stewardship programs were reported to be widely implemented across European pediatric intensive care units. • We observed a large heterogeneity in terms of activities and modalities of intervention.
abstract_id: PUBMED:24967858
Epidemiology of sepsis in Colombian intensive care units. Introduction: Currently, there is not enough data available concerning sepsis in developing countries, especially in Latin America.
Objective: We developed a study aimed at determining the frequency, clinical and epidemiological characteristics, and the consequences of sepsis in patients requiring admission to intensive care units in Colombia.
Materials And Methods: This was a secondary analysis of a prospective cohort study carried out over a six-month period, from September 1, 2007, to February 28, 2008, in ten medical/surgical intensive care units in four Colombian cities. Patients were considered eligible if they had a probable or confirmed diagnosis of infection according to medical records. We recorded demographic characteristics, first admission diagnosis and co-morbidities, clinical status, and sepsis, severe sepsis or septic shock.
Results: During the study period, 826 patients were admitted to the intensive care units. From these patients, 421 (51%) developed sepsis in the community, 361 (44%) in the ICU, and 44 (5%) during hospitalization in the general ward. Two hundred and fifty three patients (30.6%) had involvement of one organ system: 20% had respiratory involvement, followed by kidney and central nervous system involvement with 3.4% and 2.7%, respectively.
Conclusions: In our cohort of septic patients, the prevalence of sepsis treated in the ICU, as well as the overall mortality, was similar to that reported in other studies.
abstract_id: PUBMED:33084908
Agency work in intensive care : Impact of temporary contract work on patient care in intermediate care and intensive care units Background: Temporary contract workers are used for the nursing care of intensive care patients, usually by service providers in the sense of temporary employment. If or how temporary contract work has an impact on patient care has scarcely been investigated so far.
Aim: The aim of this systematic review is to describe the available research results on the use of temporary workers in nursing care in intensive and intermediate care units and to summarize the prospective effects on patient outcomes.
Method: Seven databases were systematically searched for English and German language articles using Boolean operators and evaluated according to the PRISMA schema. References of included studies were included in the search and the quality of all included studies was evaluated according to the Hawker criteria.
Result: From a total of 630 records screened, 1 qualitative and 2 quantitative studies were identified and analyzed. The findings of the quantitative studies indicated that the probability of catheter-associated infections can increase with the use of temporary workers but depends more on the size of the unit: for each additional bed, the probability of VAP increases by 14.8% (95% CI: 1.032-1.277, p = 0.011). However, a trend toward a lower sepsis rate when fewer temporary-worker hours per patient were deployed in the intensive care unit (ICU) could not be confirmed.
Conclusion: In the few available studies, no evidence was found that the use of temporary workers in intensive care units (ICU) and intermediate care units (IMC) has a significant impact on patient outcomes; however, evidence was found that individual qualifications and working conditions have an influence on outcomes. Further studies should consider what ratio of permanent to temporary workers can be considered acceptable, what qualifications temporary workers should have, and to what extent these can be verified.
abstract_id: PUBMED:23732493
Real daily costs of patients admitted to public intensive care units Background: Patient care costs in intensive care units are high and should be considered in medical decision making.
Aim: To calculate the real disease related costs for patients admitted to intensive care units of public hospitals.
Material And Methods: Using an activity-associated cost analysis, the expenses of 716 patients (mean age 56 years, mean APACHE score 20, 56% males) admitted to intensive care units of two regional public hospitals were calculated. Patients were classified according to their underlying disease.
Results: The costs per day of hospital stay, in Chilean pesos, were $ 426,265 for sepsis, $ 423,300 for cardiovascular diseases, $ 418,329 for kidney diseases, $ 404,873 for trauma, $ 398,913 for respiratory diseases, $ 379,455 for digestive diseases and $ 371,801 for neurologic disease. Human resources and medications determined up to 85 and 12% of costs, respectively. Patients with sepsis and trauma use 32 and 19% of intensive care unit resources, respectively. Twenty seven percent of resources are invested in patients that eventually died.
Conclusions: A real cost benefit analysis should be performed to optimize resource allocation in intensive care units.
abstract_id: PUBMED:21169817
Candida bloodstream infections in intensive care units: analysis of the extended prevalence of infection in intensive care unit study. Objectives: To provide a global, up-to-date picture of the prevalence, treatment, and outcomes of Candida bloodstream infections in intensive care unit patients and compare Candida with bacterial bloodstream infection.
Design: A retrospective analysis of the Extended Prevalence of Infection in the ICU Study (EPIC II). Demographic, physiological, infection-related and therapeutic data were collected. Patients were grouped as having Candida, Gram-positive, Gram-negative, and combined Candida/bacterial bloodstream infection. Outcome data were assessed at intensive care unit and hospital discharge.
Setting: EPIC II included 1265 intensive care units in 76 countries.
Patients: Patients in participating intensive care units on study day.
Interventions: None.
Measurement And Main Results: Of the 14,414 patients in EPIC II, 99 patients had Candida bloodstream infections for a prevalence of 6.9 per 1000 patients. Sixty-one patients had candidemia alone and 38 patients had combined bloodstream infections. Candida albicans (n = 70) was the predominant species. Primary therapy included monotherapy with fluconazole (n = 39), caspofungin (n = 16), and a polyene-based product (n = 12). Combination therapy was infrequently used (n = 10). Compared with patients with Gram-positive (n = 420) and Gram-negative (n = 264) bloodstream infections, patients with candidemia were more likely to have solid tumors (p < .05) and appeared to have been in an intensive care unit longer (14 days [range, 5-25 days], 8 days [range, 3-20 days], and 10 days [range, 2-23 days], respectively), but this difference was not statistically significant. Severity of illness and organ dysfunction scores were similar between groups. Patients with Candida bloodstream infections, compared with patients with Gram-positive and Gram-negative bloodstream infections, had the greatest crude intensive care unit mortality rates (42.6%, 25.3%, and 29.1%, respectively) and longer intensive care unit lengths of stay (median [interquartile range]) (33 days [18-44], 20 days [9-43], and 21 days [8-46], respectively); however, these differences were not statistically significant.
Conclusion: Candidemia remains a significant problem in intensive care units patients. In the EPIC II population, Candida albicans was the most common organism and fluconazole remained the predominant antifungal agent used. Candida bloodstream infections are associated with high intensive care unit and hospital mortality rates and resource use.
abstract_id: PUBMED:2403113
Epidemic bacteremia due to Acinetobacter baumannii in five intensive care units. From March 5, 1986 to September 4, 1987, Acinetobacter baumannii (AB) was isolated from blood or vascular catheter-tip cultures of 75 patients in five intensive care units at a hospital in New Jersey. To identify risk factors for AB bacteremia in the intensive care units, a case-control study was conducted. Characteristics of 72 case-patients were compared with those of 37 controls. Case-patients were more likely than controls to have had peripheral arterial catheters (odds ratio (OR) = 7.0, p less than 0.001), mechanical ventilation (OR = 5.8, p less than 0.001), hyperalimentation (OR = 5.7, p less than 0.001), or pulmonary arterial catheters (OR = 3.9, p less than 0.001). Arterial catheters were used with reusable pressure transducers for intravascular pressure monitoring. A logistic regression analysis identified four independent risk factors: transducers, ventilation, hyperalimentation, and days of transducer use at an insertion site. The strongest influence on the risk of AB bacteremia was exerted by number of days of transducer usage. Cultures of 70 transducer diaphragms or domes, 42 in-use and 28 in-storage, were positive for AB in 21% and 46%, respectively. Plasmid analysis showed that patient blood cultures and transducer isolates were identical. Transducers were wiped with alcohol in the units between patient uses. Since reusable transducers appeared to be the source of this outbreak, it is recommended that reusable transducers receive either high level disinfection or sterilization between patient uses.
abstract_id: PUBMED:2139943
An imipenem-cilastatin combination in the treatment of infection at general intensive care units The purpose of this multicentre open trial carried out in 286 patients (mean age: 58 +/- 17 years) was to evaluate the effectiveness of imipenem-cilastatin in the treatment of severe infections observed in intensive care units. In more than 90 per cent of the cases an underlying pathology was present before the infection, and 41 per cent of the patients had previously been operated upon. Respiratory tract infections and septicaemias accounted for 75 per cent of the cases. The infection had been present for 7.1 +/- 8.2 days before treatment was instituted, and in 66 per cent of the cases it was nosocomial. The frequency of prolonged artificial ventilation (198 cases) demonstrated the severity of these infections. Seventy-three per cent of the initial 622 isolates were Gram-negative bacilli; the responsible organisms were Pseudomonas aeruginosa in 106 cases and Acinetobacter in 76 cases. Imipenem-cilastatin was administered alone in 173 cases and combined with other antibiotics in 113 cases. The mean dosage and duration of treatment were 29.2 +/- 9.9 mg.kg-1 and 13.4 +/- 7 days respectively. Clinical cure or improvement was achieved in 244 out of 284 assessable patients (65.2 per cent). Among the 40 therapeutic failures, 16 were due to a resistant organism isolated during treatment (P. aeruginosa 10, P. maltophilia 6). These good or very good clinical results show that imipenem is effective in the treatment of severe infections observed in intensive care units.
abstract_id: PUBMED:19732512
Treatment of an incurable patient in the intensive care unit. This case represents an ethical problem in dealing with patients with a prolonged course of treatment and a poor prognosis for survival, in the context of the known under-capacity of intensive care units (ICUs) in Denmark. A young man, diagnosed with paroxysmal nocturnal haemoglobinuria and myelodysplastic syndrome, developed necrotizing fasciitis and died of sepsis after 42 days. The treatment options were exhausted and treatment was eventually limited to antibiotics and fluid therapy, which raises the question: should ICUs be used for palliative care?
Answer: Sepsis treatment does not significantly differ between primary and overflow intensive care units according to a retrospective study conducted at one institution. The study compared process-of-care measures in the care of sepsis patients admitted to either the primary ICU or the overflow ICU between July 2009 and February 2010. The results showed that patients received similar processes-of-care in both units, with the exception of deep vein thrombosis (DVT) and gastrointestinal (GI) prophylaxis within 24 hours of admission, which were adhered to better in the primary ICU. There were no significant differences in hospital and ICU length of stay between the two units. This suggests that overflowing sepsis patients to non-primary intensive care units may not affect guideline-concordant care delivery or length of stay (PUBMED:22865794). |
Instruction: Can a screening questionnaire accurately identify mild Parkinsonian signs?
Abstracts:
abstract_id: PUBMED:22948126
Can a screening questionnaire accurately identify mild Parkinsonian signs? Background: Mild parkinsonian signs (MPS) are early features that, when present, increase the risk of neurodegenerative disease and mortality. Current methods to identify MPS are limited to neurological examination. Our objective was to assess the ability of a 9-item Parkinson's Disease Screening Questionnaire (PDSQ), which has high sensitivity in the detection of overt Parkinson's disease (PD), to detect MPS.
Methods: Measures including the PDSQ, Unified Parkinson's Disease Rating Scale and University of Pennsylvania Smell Identification Test were administered to 267 participants without neurodegenerative disease. Two published definitions of MPS were used to classify cases.
Results: PDSQ scores were higher for cases compared to controls (p < 0.001 for the first case definition and p = 0.07 for the second). However, the questionnaire had low sensitivity (47 and 59%) and specificity (62 and 63%) in the detection of MPS. Adding factors such as age, gender and smell test score to the questionnaire in a predictive model only marginally improved the test characteristics.
Conclusion: The results show the screening questionnaire does not accurately identify MPS. More accurate tests are needed to improve the detection of this early syndrome which can lead to motor disability, neurodegenerative disease and mortality.
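The test characteristics discussed above follow directly from a 2x2 cross-tabulation of questionnaire result against examination-confirmed MPS status. The sketch below computes sensitivity and specificity from hypothetical counts chosen only to roughly reproduce the reported 47% sensitivity and 62% specificity; they are not the study's actual cell counts.

```python
# Sensitivity and specificity from a hypothetical 2x2 table; the counts are
# illustrative and only roughly reproduce the reported 47% / 62% figures.
true_positive = 28   # MPS on examination, questionnaire positive
false_negative = 32  # MPS on examination, questionnaire negative
true_negative = 128  # no MPS, questionnaire negative
false_positive = 79  # no MPS, questionnaire positive

sensitivity = true_positive / (true_positive + false_negative)
specificity = true_negative / (true_negative + false_positive)

print(f"sensitivity = {sensitivity:.0%}")  # share of MPS cases the questionnaire flags
print(f"specificity = {specificity:.0%}")  # share of non-cases it correctly clears
```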
abstract_id: PUBMED:34658837
Association Between Metabolic Syndrome and Mild Parkinsonian Signs Progression in the Elderly. Background: This study investigated the impact of metabolic syndrome on the progression from mild parkinsonian signs (MPS) to Parkinson's disease (PD). Methods: A total of 1,563 participants with MPS completed 6 years of follow-up. The diagnosis of metabolic syndrome was made according to Adult Treatment Panel III of the National Cholesterol Education Program. The evaluations of MPS and PD were based on the motor portion of the Unified Parkinson's Disease Rating Scale. Cox proportional hazard models were used to identify the association between metabolic syndrome and PD conversion. Results: Of the 1,563 participants, 482 (30.8%) with MPS developed PD at the end of the follow-up. Metabolic syndrome (HR: 1.69, 95% CI: 1.29-2.03) was associated with the risk of PD conversion. Metabolic syndrome was associated with the progression of bradykinesia (HR: 1.85, 95% CI: 1.43-2.34), rigidity (HR: 1.36, 95% CI: 1.19-1.57), tremor (HR: 1.98, 95% CI: 1.73-2.32), and gait/balance impairment (HR: 1.66, 95% CI: 1.25-2.11). The effect of metabolic syndrome on the progression of bradykinesia and tremor was nearly twofold. Participants treated for two, or for three to four, components of metabolic syndrome, including high blood pressure, high fasting plasma glucose, hypertriglyceridemia, and low HDL-C, had a lower risk of PD conversion. Conclusion: Metabolic syndrome increased the risk of progression from MPS to PD. Participants treated for two or more components of metabolic syndrome had a lower risk of PD conversion.
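The hazard ratios reported above come from Cox proportional hazards modeling of time to PD conversion. A minimal sketch of such a model using the lifelines library on synthetic data is shown below; the variable names, coding, and covariate set are assumptions, not the study's dataset.

```python
# Minimal Cox proportional hazards sketch with synthetic follow-up data.
# Variable names, coding, and covariates are assumptions, not the study's dataset.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 1563
df = pd.DataFrame({
    "metabolic_syndrome": rng.integers(0, 2, n),
    "age": rng.normal(72, 6, n).round(1),
    "followup_years": rng.uniform(0.5, 6.0, n).round(2),  # time to PD conversion or censoring
    "converted_to_pd": rng.integers(0, 2, n),              # 1 = developed PD during follow-up
})

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_years", event_col="converted_to_pd")
cph.print_summary()  # hazard ratios (exp(coef)) with 95% confidence intervals
```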
abstract_id: PUBMED:25047763
The evolution of mild parkinsonian signs in aging. The progression of mild parkinsonian signs in the absence of idiopathic Parkinson's disease in aging is unclear. This study aims to identify predictors of the evolution of mild parkinsonian signs in non-demented older adults. Two hundred ten participants (76.25 ± 7.10 years, 57% women) were assessed at baseline and 1-year follow-up. Mild parkinsonian signs were defined as the presence of bradykinesia, rigidity and/or rest tremor. Depending upon the presence of these features at baseline and follow-up, participants were divided into one of four groups (no, transient, persistent or new-onset mild parkinsonian signs). Physical function was assessed using gait velocity. Ninety-five participants presented with mild parkinsonian signs at baseline. At 1-year follow-up, 59 demonstrated persistent mild parkinsonian signs, while 36 recovered (i.e., transient). Participants with persistent mild parkinsonian signs were older (79.66 ± 7.15 vs. 75.81 ± 7.37 years, p = 0.01) and evidenced slower gait velocity (90.41 ± 21.46 vs. 109.92 ± 24.32 cm/s, p < 0.01) compared to those with transient mild parkinsonian signs. Gait velocity predicted persistence of mild parkinsonian signs, even after adjustments (OR: 0.96, 95% CI: 0.94-0.98). Fifty-five participants demonstrated new-onset of mild parkinsonian signs. In comparison to participants without mild parkinsonian signs, presence of cardiovascular but not cerebrovascular disease at baseline was associated with new-onset mild parkinsonian signs. Our study reveals that gait velocity was the main predictor of persistent mild parkinsonian signs, whereas cardiovascular disease was associated with new-onset mild parkinsonian signs. These findings suggest a vascular mechanism for the onset of mild parkinsonian signs and a different mechanism, possibly neurodegenerative, for the persistence of mild parkinsonian signs.
abstract_id: PUBMED:32310187
Mild Parkinsonian Signs in a Community Ambulant Population. Background: Mild parkinsonian signs (MPS) are common in the older adult and associated with a wide range of adverse health outcomes. There is limited data on the prevalence of MPS and its significance.
Objective: To determine the prevalence of MPS in the community ambulant population and to evaluate the relationship of MPS with prodromal features of Parkinson's disease (PD) and cognition.
Methods: This cross-sectional community-based study involved participants aged ≥50 years. Parkinsonian signs were assessed using the modified Unified Parkinson's Disease Rating Scale (mUPDRS) and cognition using the Montreal Cognitive Assessment (MoCA). Premotor symptoms of PD were screened using a self-reported questionnaire. Linear regression was used to assess the association of MPS with premotor symptoms of PD and cognitive impairment.
Results: Of 392 eligible participants, MPS was present in 105 (26.8%). Mean age of participants with MPS was 68.8±6.9 years and without MPS was 66.1±5.9 years (p < 0.001). Multivariate analysis revealed that MoCA scores were significantly lower in the MPS group (β= -0.152, 95% CI = -0.009, -0.138, p < 0.05). A significant correlation between the presence of REM sleep behavior disorder (RBD) and total MPS scores (β= 0.107, 95% CI = 0.053, 1.490, p < 0.05) was also found. Neither vascular risk factors nor other premotor symptoms were significantly associated with MPS.
Conclusion: MPS is common and closely related to cognitive impairment and increasing age. Presence of RBD is predictive of higher MPS scores. This study highlights the necessity of other investigations or sensitive risk markers to identify subjects at future risk of PD.
abstract_id: PUBMED:33758563
Gait Analysis of Old Individuals with Mild Parkinsonian Signs and Those Individuals' Gait Performance Benefits Little from Levodopa. Background And Purpose: Gait analyses of individuals with mild parkinsonian signs (MPS), and the effects of levodopa on their gait characteristics, are rarely published. The present research aimed to (1) analyze gait characteristics in MPS and (2) explore the effects of levodopa on the gait performance of individuals with MPS.
Methods: We enrolled 22 inpatients with MPS and 20 healthy control subjects (HC) from Nanjing Brain Hospital. The Unified Parkinson's Disease Rating Scale was used to evaluate motor symptoms. Acute levodopa challenge test was performed to explore the effects of levodopa on the gait performance of MPS. The instrumented stand and walk test was conducted for each participant and the JiBuEn gait analysis system was used to collect gait data.
Results: For spatiotemporal parameters: Compared with HC, the state before taking levodopa/benserazide in the MPS group (meds-off) demonstrated a decrease in stride length (SL) (p≤0.001) and an increase in SL variability (p≤0.001) and swing phase time variability (p=0.016). Compared with meds-off, the state after 1 hour of taking levodopa/benserazide in the MPS group (meds-on) exhibited an increase in SL (p≤0.001) and a decrease in SL variability (p≤0.001). For kinematic parameters: Compared with HC, meds-off demonstrated a decrease in heel strike angle (HS) (p=0.008), range of motion (ROM) of the knee joint (p=0.011) and ROM of the hip joint (p=0.007). Compared with meds-off, meds-on exhibited an increase in HS (p≤0.001). Bradykinesia and rigidity scores were significantly correlated with gait parameters.
Conclusion: Although the clinical symptoms of the MPS group are mild, their gait damage is obvious and they exhibited a decreased SL and joints movement, and a more variable gait pattern. Levodopa had little effect on the gait performance of those individuals.
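Gait variability measures such as stride-length variability are often expressed as a coefficient of variation across consecutive strides. The short sketch below computes that statistic for a synthetic series of strides; the abstract does not specify how variability was derived, so this is only one common convention.

```python
# One common way to quantify stride-length (SL) variability: the coefficient of
# variation (CV) across consecutive strides, computed here on synthetic data.
import numpy as np

stride_lengths_cm = np.array([108.2, 104.9, 110.4, 101.3, 106.8, 103.5, 109.1])
cv_percent = stride_lengths_cm.std(ddof=1) / stride_lengths_cm.mean() * 100
print(f"SL variability (CV) = {cv_percent:.1f}%")
```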
abstract_id: PUBMED:32505084
Clinical and neuroimaging correlates of progression of mild parkinsonian signs in community-dwelling older adults. Introduction: Mild parkinsonian signs (MPS) are associated with morbidity. Identification of MPS progression markers may be vital for preventive management, yet has not been pursued. This study aimed to ascertain clinical/neuroimaging features predictive of MPS progression.
Methods: 205 participants in the Health ABC Study were included. MPS was defined using published guidelines. MPS progression was evaluated by determining UPDRS-III change between baseline and follow-up ≥2 years later. Standard brain MRI and DTI were obtained at baseline. Correlation coefficients between demographics, vascular risk factors, imaging markers, and UPDRS-III change were adjusted for follow-up time. Linear regression was used to adjust for possible confounders in the relationship between imaging markers and MPS progression.
Results: 30% of participants had baseline MPS. Demographics and risk factors did not differ significantly between participants with MPS (MPS+) and without MPS (MPS-). Mean follow-up time was 3.8±0.8 years. Older age, male gender, diabetes were associated with faster rate of UPDRS-III change in MPS- but not MPS+ participants. Among MPS- participants, the only imaging marker associated with faster UPDRS-III progression was higher gray matter mean diffusivity (MD), widespread in various cortico-subcortical bihemispheric regions, independent of age, gender, diabetes. No imaging features were associated with UPDRS-III change among MPS+ participants.
Conclusions: Lower gray matter integrity predicted MPS progression in those who did not have baseline MPS. Microstructural imaging may capture early changes related to MPS development, prior to macrostructural change. Any future management promoting gray matter preservation may inhibit MPS development.
abstract_id: PUBMED:37130449
Prevalence and functional impact of parkinsonian signs in older adults from the Good Aging in Skåne study. Introduction: Mild parkinsonian signs (MPS) have been characterized by several definitions, using the motor part of the Unified Parkinson's Disease Rating Scale (UPDRS). We aimed to investigate the prevalence of MPS and their association with functional level and comorbidities in the oldest old.
Method: Community-dwelling older adults (n = 559, median age 85, range 80-102 years) were examined regarding MPS, possible parkinsonism (PP) and subthreshold parkinsonism (SP) according to four previously used definitions and concerning the impact of parkinsonian signs on cognitive, physical, and autonomic function. MPS, PP and SP are different terms describing a very similar phenomenon and there is no gradation between these. In two of the four definitions more advanced symptoms were categorized as parkinsonism.
Results: Median UPDRS score in the whole study group was 10 points (range: 0-58) and was predominated by bradykinesia. MPS/PP/SP were present in 17-85%, and parkinsonism in 33-71% of the cohort. Independently of age and gender, MPS/PP/SP and especially parkinsonism, were associated with a higher risk of fear of falling and accomplished falls, with lower: cognition, ADL, physical activity and quality of life, and with urinary incontinence, obstipation and orthostatic intolerance.
Conclusions: In a population of older adults above 80 years, MPS are highly prevalent, as are the more advanced symptoms defined as parkinsonism, and only 9-17% of the cohort is symptom-free. The predominance of bradykinesia in the oldest old might indicate a need to revise MPS definitions to improve their sensitivity.
abstract_id: PUBMED:31362655
Mild Parkinsonian Signs in a Hospital-based Cohort of Mild Cognitive Impairment Types: A Cross-sectional Study. Background: Mild Parkinsonian Signs (MPS) have been associated with Mild Cognitive Impairment (MCI) types with conflicting results.
Objective: To investigate the association of individual MPS with different MCI types using logistic ridge regression analysis, and to evaluate for each MCI type, the association of MPS with caudate atrophy, global cerebral atrophy, and the topographical location of White Matter Hyperintensities (WMH), and lacunes.
Methods: A cross-sectional study was performed among 1,168 subjects with different types of MCI aged 45-97 (70.52 ± 9.41) years, who underwent brain MRI. WMH were assessed through two visual rating scales. The number and location of lacunes were also rated. Atrophy of the caudate nuclei and global cerebral atrophy were assessed through the bicaudate ratio, and the lateral ventricles to brain ratio, respectively. Apolipoprotein E (APOE) genotypes were also assessed. Using the items of the motor section of the Unified Parkinson's Disease Rating Scale, tremor, rigidity, bradykinesia, and gait/balance/axial dysfunction were evaluated.
Results: Bradykinesia, and gait/balance/axial dysfunction were the MPS more frequently encountered followed by rigidity, and tremor. MPS were present in both amnestic and non-amnestic MCI types, and were associated with WMH, lacunes, bicaudate ratio, and lateral ventricles to brain ratio.
Conclusion: MPS are present in both amnestic and non-amnestic MCI types, particularly in the multiple-domain types and in carriers of the APOE ε4 allele. Cortical and subcortical vascular and atrophic processes contribute to MPS. Long prospective studies are needed to disentangle the contribution of MPS to the conversion from MCI to dementia.
abstract_id: PUBMED:28345787
Non-motor symptoms and quality of life in subjects with mild parkinsonian signs. Background: Mild parkinsonian signs (MPS) are frequent in the elderly population and associated with the presence of risk markers for Parkinson's disease (PD). Both MPS and non-motor signs may be present in prodromal PD and may significantly impair quality of life (QoL).
Objective: To disentangle the contribution of motor impairment and extra-motor manifestations to QoL in subjects with MPS (n=63), manifest PD (n=69), disorders with motor symptoms due to non-neurodegenerative diseases (n=213) and healthy controls (n=258).
Methods: Subjects with MPS, healthy controls, disease controls (patients with motor impairment due to, e.g., arthrosis and spondylosis), and PD patients (total n=603) were selected from a large epidemiological longitudinal study, the EPIPARK cohort. Motor function was determined using the UPDRS-III protocol, and information on depressive symptoms, anxiety, sleep, and QoL was assessed via rating scales.
Results: Depressive symptoms, anxiety, and sleep problems were equally frequent in the MPS group and controls. Health-related QoL was slightly reduced in the MPS group. The extent of motor impairment was comparable between the MPS group and disease controls (UPDRS-III 5-6 points). Higher motor dysfunction was associated with lower QoL. Depressive symptoms, but not anxiety or daytime sleepiness, were significant predictors of general QoL, independent of motor function.
Conclusions: Quality of life is slightly decreased in an elderly population with MPS. QoL is associated with severity of motor impairment but also with non-motor aspects, ie, depressive symptoms. Follow-up studies in large cohorts are warranted to determine the natural course of MPS and its impact on QoL.
abstract_id: PUBMED:32158327
Predictors for Incident Mild Parkinsonian Signs in Older Japanese. Background: Mild parkinsonian signs are important clinical symptoms related to the decline of motor and cognitive functions. We aimed to identify predictors for the incidence of mild parkinsonian signs in older Japanese by conducting an 8-year longitudinal community-based cohort study.
Methods: Participants aged 65 years or older, living in Ama-cho, a rural island town in western Japan, underwent a baseline assessment of motor function, cognitive function, depression score, the Pittsburgh Sleep Quality Index (PSQI), the Tanner questionnaire, and cerebral white matter lesions on brain magnetic resonance imaging from 2008 to 2010, and then underwent a follow-up neurological examination from 2016 to 2017. Mild parkinsonian signs were defined according to a modified Unified Parkinson's Disease Rating Scale score.
Results: Of the 316 participants without mild parkinsonian signs at baseline, 94 presented with incident mild parkinsonian signs at follow-up. In addition to an absence of exercise habits, higher scores on the Tanner questionnaire, the PSQI, and the deep white-matter hyperintensity Fazekas scale were significant independent predictors of incident mild parkinsonian signs.
Conclusion: We suggest that multiple factors are related to the incidence of mild parkinsonian signs. Vascular lesions and sleep disorders are associated with the pathogenesis of mild parkinsonian signs, the Tanner questionnaire is useful for early detection of subclinical mild parkinsonian signs, and exercise may be associated with preventing the onset of mild parkinsonian signs.
Answer: No, a screening questionnaire cannot accurately identify mild Parkinsonian signs (MPS). A study assessing the ability of a 9-item Parkinson's Disease Screening Questionnaire (PDSQ) to detect MPS found that the questionnaire had low sensitivity (47 and 59%) and specificity (62 and 63%) in the detection of MPS. Even when factors such as age, gender, and smell test score were added to the questionnaire in a predictive model, the test characteristics only marginally improved. Therefore, the study concluded that the PDSQ does not accurately identify MPS and that more accurate tests are needed to improve the detection of this early syndrome (PUBMED:22948126). |
Instruction: Is pancreaticoduodenectomy justified for septuagenarians and octogenarians?
Abstracts:
abstract_id: PUBMED:14571811
Is pancreaticoduodenectomy justified for septuagenarians and octogenarians? Background/aims: Almost all studies have described the elderly as beginning at age 70, and very few have focused on the group of patients over 80 years of age. This study was undertaken to compare the surgical risk and prognosis between septuagenarians and octogenarians undergoing pancreaticoduodenectomy and to clarify whether or not pancreaticoduodenectomy is justified in octogenarians.
Methodology: Among 276 patients with periampullary lesions undergoing pancreaticoduodenectomy between 1982 and 2000, octogenarians and septuagenarians were identified. The study concentrated on the surgical risks and outcomes.
Results: There were 16 (6%) octogenarians and 82 (30%) septuagenarians among our 276 patients undergoing pancreaticoduodenectomy. Surgical mortality did not significantly increase in octogenarians (13%), as compared to septuagenarians (12%). Surgical morbidity was also similar in both groups (51% in octogenarians vs. 56% in septuagenarians). Octogenarians needed more frequent care in the intensive care unit (69%) postoperatively than septuagenarians (27%), p = 0.001. There was no significant difference in survival (median survival = 16 months for septuagenarians and 17.6 months for octogenarians), p = 0.137. About half of each group (44.2% of septuagenarians and 54.5% of octogenarians) still died of the underlying periampullary lesions, p = 0.771.
Conclusions: Surgical risk did not significantly increase and prognosis was similar in octogenarians after pancreaticoduodenectomy, as compared to septuagenarians. Therefore, pancreaticoduodenectomy is justified not only in septuagenarians but also in octogenarians if carefully selected.
abstract_id: PUBMED:38453010
Clinical Outcomes and Complication Profile of Spine Surgery in Septuagenarians and Octogenarians: Case Series. Introduction: The aging global population presents an increasing challenge for spine surgeons. Advancements in spine surgery, including minimally invasive techniques, have broadened treatment options, potentially benefiting older patients. This study aims to explore the clinical outcomes of spine surgery in septuagenarians and octogenarians.
Methods: This retrospective analysis, conducted at a US tertiary center, included patients aged 70 and older who underwent elective spine surgery for degenerative conditions. Data included the Charlson Comorbidity Index, ASA classification, surgical procedures, intraoperative and postoperative complications, and reoperation rates. The objective of this study was to describe the outcomes of our cohort of older patients and discern whether differences existed between septuagenarians and octogenarians.
Results: Among the 120 patients meeting the inclusion criteria, there were no significant differences in preoperative factors between the age groups (p>0.05). Notably, the septuagenarian group had a higher average number of fused levels (2.36 vs. 0.38, p=0.001), while the octogenarian group underwent a higher proportion of minimally invasive procedures (p=0.012), resulting in lower overall bleeding in the oldest group (p<0.001). Mobility outcomes were more favorable in septuagenarians, whereas octogenarians tended to maintain or experience a decline in mobility (p=0.012). A total of 6 (5%) intraoperative complications and 12 (10%) postoperative complications were documented, with no statistically significant differences observed between the groups.
Conclusion: This case series demonstrates that septuagenarians and octogenarians can achieve favorable clinical outcomes with elective spine surgery. Spine surgeons should be well-versed in the clinical and surgical care of older adults, providing optimal management that considers their increased comorbidity burden and heightened fragility.
abstract_id: PUBMED:27563443
Clinical outcomes of pancreaticoduodenectomy in octogenarians: a surgeon's experience from 2007 to 2015. Background: As the number of elderly people in our population increases, there will be a greater number of octogenarians who will need pancreaticoduodenectomy as the only curative option for periampullary malignancies. This study evaluated clinical outcomes of pancreaticoduodenectomy in octogenarians, in comparison to younger patients.
Methods: A retrospective review was conducted of 216 consecutive patients who underwent pancreaticoduodenectomy from January 2007 to April 2015. A two-sided Fisher's exact statistical analysis was used to compare pre-operative comorbidities, intra-operative factors, surgical pathology, and post-operative complication rates between non-octogenarians and octogenarians.
Results: One hundred and eighty-three non-octogenarians and 33 octogenarians underwent pancreaticoduodenectomy. Of patients with periampullary adenocarcinoma, octogenarians were more likely to present with advanced disease state (P=0.01). The two cohorts had similar ASA scores (P=0.62); however, octogenarians were more likely to have coronary artery disease (P=0.03). The length of operation was shorter in octogenarians (P=0.002). Mortality rates (P=0.49) and overall postoperative complication rates (P=1.0) were similar in the two cohorts; however, octogenarians had a higher incidence of pulmonary embolism (P=0.02).
Conclusions: Our data demonstrates that octogenarians can undergo pancreaticoduodenectomy with outcomes similar to those in younger patients. Thus, patients should not be denied a curative surgical option for periampullary malignancy based on advanced age alone.
abstract_id: PUBMED:37542000
Application of creatinine-based eGFR equations in Chinese septuagenarians and octogenarians. Purpose: The utilization of creatinine-based estimated glomerular filtration rate (eGFR) equations in the adult population is acknowledged. Nevertheless, the appropriateness of creatinine-based eGFR in septuagenarians and octogenarians is debatable. This study evaluates the creatinine-based equations in Chinese septuagenarians and octogenarians cohorts.
Patients And Methods: This study employed a retrospective design, utilizing a review of the hospital medical records system to identify 347 hospitalized participants within the Division of Geriatrics or the Division of Nephrology. These participants underwent renal dynamic imaging with 99 m Tc-DTPA and serum creatinine testing. Comparison of the equations was performed, including the full age-spectrum equation (FAS-Cr equation), European Kidney Function Consortium equation (EKFC equation), Chronic Kidney Disease Epidemiology Collaboration equation for Asian (Asian CKD-EPI equation), Xiangya equation, and Lund-Malmö revised equation (LMR equation).
Results: Most equations tended to underestimate GFR. The FAS-Cr equation had the smallest interquartile range (IQR), while the Asian CKD-EPI equation (mGFR ≥ 30) and the Xiangya equation (mGFR < 30) had the largest IQRs. The FAS-Cr equation had the highest overall P30 of 63.98%, while the Asian CKD-EPI equation had the highest P30 of 75.64% at mGFR ≥ 60. The Xiangya equation, on the other hand, had the lowest P30 of 36.36% at mGFR < 30. Root-mean-square error (RMSE) showed patterns similar to those of P30. GFR category misclassification rates in the entire cohort ranged from 46.11 to 49.86% for all equations. The FAS-Cr equation exhibited an advantage over the other equations in GFR category misclassification in octogenarians with mGFR lower than 60 ml/min/1.73 m2.
Conclusion: None of the creatinine-based equations in this study could perform well regarding precision, accuracy, and CKD stages' classification for the Chinese elderly. Nevertheless, the FAS-Cr equation should be suitable for octogenarians with mGFR lower than 60 ml/min/1.73 m2.
abstract_id: PUBMED:24560585
Outcomes of pancreaticoduodenectomy for pancreatic malignancy in octogenarians: an American College of Surgeons National Surgical Quality Improvement Program analysis. Background: Most series analyzing outcomes of pancreaticoduodenectomy in octogenarians are limited by a small sample size. The investigators used the American College of Surgeons National Surgical Quality Improvement Program database for an analysis of the impact of advanced age on outcomes after pancreatic cancer surgery.
Methods: The National Surgical Quality Improvement Program database from 2005 to 2010 was accessed to study the outcomes of 475 pancreaticoduodenectomies performed in patients ≥80 years of age compared with 4,102 patients <80 years of age using chi-square and Student's t tests. A multivariate logistic regression was used to analyze factors associated with 30-day mortality and the occurrence of major complications.
Results: Octogenarians had significantly more preoperative comorbidities compared with patients <80 years of age. On multivariate analysis, age ≥80 years was associated with an increased likelihood of experiencing 30-day mortality and major complications compared with patients <80 years of age. On subgroup analysis, septuagenarians had a similar odds ratio of experiencing mortality or complications compared with octogenarians, whereas patients <70 years of age were at lower risk.
Conclusions: Although octogenarians have an increased risk for mortality and major complications compared with patients <80 years of age, on subgroup analysis, they do not differ from septuagenarians.
abstract_id: PUBMED:35234053
Surgical outcomes of acute type A aortic dissection in septuagenarians and octogenarians. Background: We studied surgical outcomes of acute type A aortic dissection and compared early and late outcomes between septuagenarians and octogenarians.
Methods: From 2010 to 2019, we evaluated 254 consecutive patients with acute type A aortic dissection. We performed emergent operations within 48 h of symptom onset for 188 patients, including 59 septuagenarians and 32 octogenarians.
Results: The overall 30-day mortality rate was 8.5% in septuagenarians and 9.4% in octogenarians (p = 1.0). The hospital mortality rate was 10.2% in septuagenarians and 12.5% in octogenarians (p = 0.74). Multivariate analysis identified prolonged ventilation (≥ 72 h) as a significant risk factor for hospital mortality. Being an octogenarian was not significantly associated with hospital mortality. The actuarial survival rate at 5 years was 80.1% in septuagenarians and 58.5% in octogenarians (log-rank p = 0.09). The freedom from aortic event rate at 5 years was 91.0% in septuagenarians and 100% in octogenarians (log-rank p = 0.23).
Conclusion: The two groups showed no significant differences in hospital mortality or morbidity. Our tear-oriented strategies might be appropriate for both septuagenarians and octogenarians. Prolonged ventilation (≥ 72 h) was a significant risk predictor for hospital mortality.
abstract_id: PUBMED:36675514
Extracorporeal Life Support for Cardiogenic Shock in Octogenarians: Single Center Experience. Background: The age limit for the use of extracorporeal membrane oxygenation (ECMO) support for post-cardiotomy cardiac failure is not defined. The aim of the study was to evaluate the outcomes of octogenarians supported with ECMO due to cardiogenic shock.
Methods: A retrospective review of consecutive elderly patients supported with ECMO during a 13-year period in a tertiary care center. Patient's demographic variables, comorbidities, perioperative data and outcomes were collected from patient medical records. Data of octogenarian patients were compared with the septuagenarian group. The main outcomes of the study was in hospital mortality, 6-month survival and 1-year survival after hospital discharge and discharge options. Multivariate logistic regression analysis was performed to identify the factors associated with hospital survival.
Results: Eleven patients (18.3%) in the elderly group were octogenarians (aged 80 years or above), and forty-nine (81.7%) were septuagenarians (aged 70-79 years). There were no differences except age in demographic and preoperative variables between groups. Pre ECMO SAVE, SOFA, SAPS-II and inotropic scores were significantly higher in septuagenarians than octogenarians. There was no statistically significant difference in hospital mortality, 6-month survival, 1 year survival or discharge options between groups.
Conclusions: ECMO could be successfully used in selected octogenarian patients undergoing cardiac surgery to support a failing heart. An early decision to initiate ECMO therapy in elderly post-cardiotomy shock patients is associated with favorable outcomes.
abstract_id: PUBMED:11436086
Infrainguinal revascularizations in octogenarians and septuagenarians. Objective: Severe atherosclerosis is a major contributor to death in octogenarians and a cause of multiple vascular-related ailments, including claudication and limb loss. Advanced age and health may limit the success of limb-salvaging procedures. Mortality, morbidity, and outcome of infrainguinal grafts have been examined in octogenarians and septuagenarians.
Methods: After 128 femoropopliteal and 99 femorotibial bypass grafts in 209 octogenarians and 242 femoropopliteal and 166 femorotibial bypass grafts in 383 septuagenarians, survival, primary patency, limb salvage, myocardial infarction and stroke rates were determined. The survival, myocardial infarction, and stroke rates of controls, 1514 octogenarians and 2011 septuagenarians, were compared.
Results: After a bypass graft, 5-year survival of octogenarians (54%) and septuagenarians (64%) was similar (P >.2) and was 89% and 89% for controls. The 5-year primary patency rates were 74% for octogenarians and 68% for septuagenarians (P >.2). Five-year limb salvage rates were 86% for octogenarians and 86% for septuagenarians. After a bypass graft, the respective rates of myocardial infarction were 4.1% and 3.9% per year and of a stroke 3.2% and 3.2% per year for octogenarians and septuagenarians, which occurred more frequently (P <.05) than in controls.
Conclusions: Death and cardiovascular events are higher after revascularization in octogenarians and septuagenarians, compared with controls, and are related to the severity of atherosclerosis and not age. Patency rates are excellent and similar. Limb salvage procedures should be considered for most octogenarians.
abstract_id: PUBMED:28865996
Comparative Perioperative Outcomes in Septuagenarians and Octogenarians Undergoing Radical Cystectomy for Bladder Cancer-Do Outcomes Differ? Background: Treatment choice for muscle invasive bladder cancer continues to be radical cystectomy. However, radical cystectomy carries a relatively high risk of morbidity and mortality compared with other urological procedures.
Objective: To compare surgical complications following radical cystectomy in septuagenarians and octogenarians.
Design, Setting, And Participants: The National Surgical Quality Improvement Program database (2009-2013) was used to identify patients who were 70 yr and older and underwent radical cystectomy.
Outcome Measurements And Statistical Analysis: The data were analyzed for demographics and comorbidities, and compared for complications, including pulmonary, thromboembolic, wound, and cardiac complications. Patients who were 70-79 yr of age were compared with those 80 yr and older. Univariate and multivariate analyses were completed.
Results And Limitations: A total of 1710 patients aged ≥70 yr met our inclusion criteria. Of them, 28.8% (n=493) were 80 yr and older, while 71.2% (n=1217) were between 70 and 79 yr old. Operative time (338.4 vs 307.2 min, p=0.0001) and length of stay (11.9 vs 10.4 d, p=0.0016) were longer in the octogenarian group. The intra- and postoperative transfusion rates, reoperation rates, wound dehiscence rates, and pneumonia, sepsis, and myocardial infarction rates were similar between the two groups. The wound infection rate (7.3% vs 4.1%, p=0.01) was higher in the septuagenarians, and the mortality rate (4.3% vs 2.3%, p=0.04) was higher in the octogenarian group.
Conclusions: Radical cystectomy can safely be performed in octogenarians without increased cardiac, pulmonary, and thromboembolic complications when compared with septuagenarians. These patients need to be counseled that the mortality rate is slightly higher compared with that in septuagenarians. Chronological age alone should not be used to decide on offering radical cystectomy.
Patient Summary: We looked at complications following radical cystectomy in patients aged 80 yr and older. We found that there was no significant difference for wound, cardiac, or pulmonary complications, but there was an increased risk of mortality in this age group.
abstract_id: PUBMED:26117303
Outcome of pancreaticoduodenectomy in octogenarians: Single institution's experience and review of the literature. Introduction: Pancreatic and periampullary neoplasms in patients aged 80 or older pose a challenge to surgeons because of the risk of surgical treatment. We have reviewed our experience and literature reports of pancreaticoduodenectomy in octogenarians, evaluating early results and long-term survival in the pancreatic cancer group.
Methods: Three hundred eighty-five patients who underwent pancreaticoduodenectomy for neoplasms from 1998 to 2011 were included in the study, and were divided in two groups: group 1, patients younger than 80 years of age, and group 2, patients 80 years of age and older. Operative morbidity, mortality, disease-free and long-term survival were analysed. English literature was systematically searched for pancreatic resection's outcome in octogenarians.
Results: There were 385 pancreaticoduodenectomies: 362 patients were in group 1 and 23 patients in group 2. There was no significant difference regarding gender or pathologic findings between the two groups. Complication rates (40% vs. 43%), mortality rates (4% vs. 0%), and overall median survival for pancreatic cancer patients (median 21 vs. 19 months) were not statistically different between the two groups. The literature review identified 14 reports of pancreatic resection in octogenarians. Most of the studies (particularly those from centres with high-volume pancreatic surgery) showed that outcome after pancreatectomy did not differ between octogenarians and younger patients.
Conclusion: Pancreaticoduodenectomy is an acceptable option for elderly patients. Age alone should not be considered a contraindication to major pancreatic resection, but a careful preoperative evaluation and an accurate postoperative management are mandatory.
Answer: Based on the provided abstracts, pancreaticoduodenectomy appears to be justified for both septuagenarians and octogenarians, provided that patients are carefully selected. Several studies have shown that surgical risks do not significantly increase and prognosis is similar for octogenarians compared to septuagenarians after undergoing pancreaticoduodenectomy (PUBMED:14571811; PUBMED:27563443). Moreover, the mortality and morbidity rates, as well as overall postoperative complication rates, were found to be similar between the two age groups in these studies.
One study specifically mentioned that octogenarians can undergo pancreaticoduodenectomy with outcomes similar to those in younger patients, suggesting that advanced age alone should not be a reason to deny patients a curative surgical option for periampullary malignancy (PUBMED:27563443). Another study using the American College of Surgeons National Surgical Quality Improvement Program database found that although octogenarians have an increased risk for mortality and major complications compared with patients under 80 years of age, they do not differ significantly from septuagenarians (PUBMED:24560585).
Furthermore, a single institution's experience supported the notion that pancreaticoduodenectomy is an acceptable option for elderly patients, with no significant difference in complications, mortality rate, and overall median survival for pancreatic cancer patients between those younger than 80 and those 80 years of age and older (PUBMED:26117303).
In conclusion, the evidence from these studies suggests that pancreaticoduodenectomy can be justified for septuagenarians and octogenarians, with the caveat that individual patient selection is crucial, taking into account their comorbidities and overall health status. |
Instruction: Does social support affect the relationship between socioeconomic status and depression?
Abstracts:
abstract_id: PUBMED:17070599
Does social support affect the relationship between socioeconomic status and depression? A longitudinal study from adolescence to adulthood. Background: The aim of this prospective longitudinal study of adolescents was to investigate socioeconomic differences in adult depression and in the domain of social support from adolescence to adulthood. We also studied the modifying effect of social support on the relationship between socioeconomic status (SES) and depression.
Methods: All 16-year-old ninth-grade school pupils of one Finnish city completed questionnaires at school (n=2194). Subjects were followed up using postal questionnaires when aged 22 and 32 years.
Results: At 32 years of age there was a social gradient in depression, with a substantially higher prevalence among subjects with lower SES. Low parental SES during adolescence did not affect the risk of depression at 32 years of age, but the person's lower level of education at 22 years did. Lower level of support among subjects with lower SES was found particularly in females. Some evidence indicated that low level of social support had a greater impact on depression among lower SES group subjects. However, this relationship varied depending on the domain of social support, life stage and gender. On the other hand, the results did not support the hypothesis that social support would substantially account for the variation in depression across SES groups.
Limitations: The assessments and classifications of social support were rather brief and crude, particularly in adolescence and early adulthood.
Conclusions: It is important to pay attention to social support resources in preventive programs and also in the treatment settings, with a special focus on lower SES group persons.
abstract_id: PUBMED:38406501
The effect of social support on home isolation anxiety and depression among college students in the post-pandemic era: the mediating effect of perceived loss of control and the moderating role of family socioeconomic status. Background: There is an escalating concern about the rising levels of anxiety and depression among college students, especially during the post-pandemic era. A thorough examination of the various dimensions of social support and their impact on these negative emotions in college students is imperative.
Aim: This study aimed to determine if a perceived loss of control mediates the relationship between social support and levels of anxiety and depression among college students during the post-pandemic era. Additionally, it examined whether family socioeconomic status moderates this mediated relationship.
Methods: We administered an online cross-sectional survey in China, securing responses from 502 participants. The sample comprised home-isolated college students impacted by COVID-19. Established scales were employed to assess social support, anxiety, depression, perceived loss of control, and family socioeconomic status. Analytical techniques included descriptive statistics, correlation analysis, and a bootstrap method to investigate mediating and moderating effects.
Results: Social support was found to negatively affect anxiety and depression in college students, with perceived loss of control partially mediating this relationship. In addition, family socioeconomic status moderated this mediating process: families of higher socioeconomic status exhibited a stronger moderating effect on perceived loss of control across different dimensions of social support.
Conclusion: This study may help to develop strategies to mitigate the impact of anxiety and depression in the lives and studies of university students during unexpected public health crises, and to promote better mental health among college students.
abstract_id: PUBMED:32849145
Higher Socioeconomic Status Predicts Less Risk of Depression in Adolescence: Serial Mediating Roles of Social Support and Optimism. Family socioeconomic status (SES) is known to have a powerful influence on adolescent depression. However, the mechanisms underlying this association are unclear. Here, we explore this issue by testing the potential mediating roles of social support (interpersonal resource) and optimism (intrapersonal resource), based on the predictions of the reserve capacity model (RCM). Participants were 652 adolescents [age range: 11-20 years old, Mage = 14.55 years, SD = 1.82; 338 boys (51.80%)] from two junior and two senior high schools in Wuhan, China. They completed questionnaires measuring family SES, perceived social support, optimism, and depression. Results showed, as predicted, (1) SES negatively predicted adolescent depression; (2) social support and optimism serially mediated the relations between SES and depression, consistent with the predictions by the RCM. Specifically, higher SES predicted greater social support and increased optimism, which in turn contributed to reduced depression. The implications of these data to the prevention and interventions of adolescent depression were discussed.
abstract_id: PUBMED:21191462
The Associations between Social Support, Health-Related Behaviors, Socioeconomic Status and Depression in Medical Students. Objectives: The objective of this study was to estimate the prevalence of depression in medical students and to evaluate whether interpersonal social support, health-related behaviors, and socio-economic factors were associated with depression in medical students.
Methods: The subjects in this study were 120 medical students in Seoul, Korea who were surveyed in September, 2008. The subjects were all women and over the age of 20. Their age, body mass index (BMI), quality of sleep, diet, household income, smoking, alcohol consumption, exercise levels, and self-reported health status were surveyed. The degree of perceived social support was measured using the interpersonal support evaluation list (ISEL). Depression was evaluated using the center for epidemiology studies depression scale (CES-D).
Results: The mean CES-D score was 14.1±8.6 and 37.1% of the participants appeared to suffer from depression. Low levels of perceived interpersonal support increased the risk of depression by more than 10 times and having higher household income did not necessarily decrease the risk of depression.
Conclusion: Medical students have a relatively high level of depression. Efforts should be made to encourage social support in order to promote mental health in medical students.
abstract_id: PUBMED:32691647
Low socioeconomic status, parental stress, depression, and the buffering role of network social capital in mothers. Background: Pathways underlying the stress-depression relationship in mothers, and the factors that buffer this relationship are not well understood.
Aims: Drawing from the Stress Process model, this study examines (1) if parental stress mediates the association between socioeconomic characteristics and depressive symptoms, and (2) if social support and network capital moderate these pathways.
Method: Data came from 101 mothers from Montreal. Generalized structural equation models were conducted, with depressive symptoms (CES-D scores) as the outcome, socioeconomic stressors as independent variables, parental stress as the mediator, and social support and network social capital as moderators.
Results: Parental stress partially mediated the association between household income and depressive symptoms (indirect effect: β = -0.09, Bootstrap SE = 0.03, 95% CI = -0.15 to -0.03, p = 0.00). Network diversity moderated the relationship between parental stress and depressive symptoms (β = -0.25, 95% CI = -0.42 to -0.09, p = 0.00); at high levels of stress, mothers with high compared to low network diversity reported fewer symptoms.
Conclusion: Findings highlight the role that socioeconomic factors play in influencing women's risk of depression and shaping the benefits that ensue from social resources. Addressing these factors requires interventions that target the social determinants of depression.
abstract_id: PUBMED:38321378
The relationship between childhood socioeconomic status and depression level in older adults: the mediating role of adult socioeconomic status and subjective well-being. Background: There is a causal link between childhood socioeconomic status and health status in adulthood and beyond. It's vital to comprehend the relationship between childhood socioeconomic status and mental health among older Chinese individuals from the current generation who have undergone significant social changes in China. This understanding is critical to foster healthy demographic and social development in China.
Methods: Using data from the 2020 China Family Panel Studies, we investigate the relationship between childhood socioeconomic status and depression in older adults. Additionally, we examine the mediating role of adult socioeconomic status and subjective well-being.
Results: 1) Childhood socioeconomic status of Chinese older adults differences by region of residence, while depression levels differences by gender, region of residence, and marital status. 2) Adult socioeconomic status mediated the relationship between childhood socioeconomic status and depression in older adults. 3) Adult socioeconomic status and subjective well-being had a chain-mediated role in the relationship between childhood socioeconomic status and depression in older adults.
Conclusions: In terms of childhood socioeconomic status, older adults in urban regions were significantly higher than those in rural regions. As for depression level, female older adults were more depressed than males; married older people have the lowest depression levels, while unmarried and widowed older people have higher depression levels; older adults in rural regions had higher depression levels than those in urban regions. Evidence from our study further suggests that childhood socioeconomic status can suppress the depression level in older adults through adult socioeconomic status; it can also further reduce the depression level in older adults through the chain mediation of adult economic status affecting subjective well-being. As depression is more prevalent among older individuals with a lower childhood socioeconomic status, it is vital to prioritize the extensive impact of childhood socioeconomic status as a distal factor and investigate "upstream" solutions to enhance childhood socioeconomic status and reduce the gap during the early years of life.
abstract_id: PUBMED:32066483
Socioeconomic resources and quality of life in alcohol use disorder patients: the mediating effects of social support and depression. Background: Quality of life (QoL) has recently attracted increased attention as a major indicator of the recovery from alcohol use disorder (AUD). This study investigated the mediating effects of social support and depression for the relationship between socioeconomic resources and QoL among people with AUD in South Korea.
Methods: Patients across South Korea who had been diagnosed with AUD in the previous year (n = 404) and were registered at hospitals and addiction management centers were surveyed. The participants ranged in age from 19 to 65 years. Structural equation modeling was performed, using stable residence, income, stable employment, social support, depression, and QoL as predictors. Bootstrapping analysis was performed to test for mediating effects.
Results: The socioeconomic resources income (β = .297, p < .001), stable employment (β = .131, p < .01), and stable residence (β = .091, p < .05) showed statistically significant and positive relationships with social support. However, none of these were significantly related to depression. Social support showed a significant and negative relationship with depression (β = -.172, p < .001). Income positively and directly influenced QoL (β = .148, p < .001). All three socioeconomic resources indirectly influenced depression through social support, which, in turn, influenced QoL. This suggests that socioeconomic resources directly influence QoL and indirectly influence it through social support.
Conclusion: These findings suggest that social support has an important role in improving the QoL of people with AUD. Furthermore, socioeconomic resources, such as having a stable residence, employment, and income, are necessary for recovery from alcohol addiction.
abstract_id: PUBMED:37095501
Relationship between socioeconomic status and cognitive ability among Chinese older adults: the moderating role of social support. Background: Understanding the causes and pathways of cognitive decline among older populations is of great importance in China. This study aims to examine whether the discrepancy in socioeconomic status (SES) makes a difference to the cognitive ability among Chinese older adults, and to disentangle the moderating role of different types of social support in the process in which SES influences cognition.
Methods: We utilized a nationally representative sample from the 2018 Chinese Longitudinal Healthy Longevity Survey. A cumulative SES score was constructed to measure the combined effect of different socioeconomic statuses on the cognitive ability of the elderly. We further examined the moderating role of two types of social support, including emotional support, and financial support. Hierarchical regression analysis was applied to test the direct effect of SES on cognitive ability, and to investigate the moderating role of social support on the association of the SES with the dependent variables.
Results: The results showed that the higher SES of older adults was significantly associated with better cognitive ability (β = 0.52, p < 0.001) after controlling for age, sex, marital status, living region, Hukou, health insurance, lifestyle factors, and physical health status. Emotional support and financial support moderated the relationship between SES score and cognitive ability.
Conclusion: Our results reveal the importance of considering social support in buffering the effects of SES and the associated cognitive ability for aging populations. It highlights the importance of narrowing the socioeconomic gap among the elderly. Policymakers should consider promoting social support to improve the cognitive ability among older adults.
abstract_id: PUBMED:35451605
The relationship between social support in pregnancy and postnatal depression. Purpose: Lack of social support is considered a potential risk factor for postnatal depression but limited longitudinal evidence is available. Pregnancy, when women have increased contact with healthcare services, may be an opportune time to intervene and help strengthen women's social networks to prevent feelings of depression postnatally, particularly for those at greatest risk. Our study examined the longitudinal relationship between social support in pregnancy and postnatal depression, and whether this is moderated by age or relationship status.
Methods: We analysed data collected from 525 women from a diverse inner-city maternity population in England who were interviewed in pregnancy and again three months postnatally. Women provided sociodemographic information and completed self-report measures of depression (Edinburgh Postnatal Depression Scale) and social support (Social Provisions Scale).
Results: Less social support in pregnancy was associated with postnatal depression, after adjusting for sociodemographic confounders and antenatal depression (Coef. = - 0.05; 95% CI - 0.10 to - 0.01; p = 0.02). There was weak evidence of a moderating effect of relationship status. Subgroup analysis showed a stronger relationship between social support in pregnancy and postnatal depression for women who were not living with a partner (Coef. = - 0.11; 95% CI - 0.21 to - 0.01; p = 0.03) than for those who were (Coef. = - 0.03; 95% CI - 0.09 to 0.02; p = 0.28). Sensitivity analysis using multiple imputations to account for missing data confirmed the main results.
Conclusions: Interventions that target social support in pregnancy have the potential to reduce depression postnatally. Future research should explore in greater detail which women would benefit most from which type of social support.
abstract_id: PUBMED:20884996
Social inequalities in adolescent depression: social support and optimism as mediators? Background And Aims Of The Study: A number of studies have demonstrated that socioeconomic inequalities have profound effects on health status. The relationship between socio-economic status (SES) and health status, however, does not appear to be consistent across the life cycle and during adolescence and young adulthood fewer inequalities are detected in mortality and morbidity. However, social inequalities in psychological health and mental disorders are much higher in occurrence. Thus it is important to further investigate possible influences and mediators in adolescent mental health. This study examines the role of perceived parental social support and optimism in understanding the relationship between adolescent depression and SES.
Method: Data were collected in a sample of Hungarian high school students (N = 881; aged between 14-20 years) in Szeged, Hungary. Using Structural Equation Modeling we examined associations between objective SES, subjective SES, parental support, optimism (LOT), depression (CDI), and self-perceived health (SPH).
Results: Findings suggest: 1. SES variables may generate social inequalities in adolescent health (namely, depression and self-perceived health) through parental social support; and 2. Social inequalities in adolescent health may be explained by differences in parents' resources (both in terms of material, emotional, or security aspects) which provide youth with social support that may strengthen optimism during the socialization process.
Conclusion: Results seem to suggest that experts in mental health promotion might want to take into account socioeconomic differences in attitudes and coping skills which may influence psychosocial adjustment and health among youth. While parents from lower social classes may lack the necessary material and psychosocial resources, therapy sessions and special programs could contribute to strengthening certain attitudes (such as optimism) and thus lowering social inequalities in health later in adulthood.
Answer: Yes, social support does affect the relationship between socioeconomic status (SES) and depression. Several studies have found that social support can have a significant impact on this relationship in various populations and life stages.
For instance, a longitudinal study found that social support resources are important to consider in preventive programs and treatment settings, especially for individuals from lower SES groups, as they may experience lower levels of support, which can impact depression (PUBMED:17070599). Another study indicated that social support negatively affects anxiety and depression among college students, with perceived loss of control partially mediating this relationship, and family socioeconomic status moderating the mediated relationship (PUBMED:38406501).
Additionally, research has shown that higher SES predicts less risk of depression in adolescence, with social support and optimism serving as serial mediators in this relationship (PUBMED:32849145). In medical students, low levels of perceived interpersonal support significantly increased the risk of depression, and higher household income did not necessarily decrease the risk of depression (PUBMED:21191462).
Moreover, parental stress was found to partially mediate the association between household income and depressive symptoms, with network social capital moderating the relationship between parental stress and depressive symptoms (PUBMED:32691647). The relationship between childhood SES and depression in older adults was also mediated by adult SES and subjective well-being, suggesting that childhood SES can have a long-term impact on depression levels (PUBMED:38321378).
In patients with alcohol use disorder, socioeconomic resources indirectly influenced depression through social support, which in turn influenced quality of life (PUBMED:32066483). The relationship between SES and cognitive ability among Chinese older adults was moderated by social support, highlighting the importance of social support in buffering the effects of SES (PUBMED:37095501).
Lastly, a study on the relationship between social support in pregnancy and postnatal depression found that less social support in pregnancy was associated with postnatal depression, with relationship status potentially moderating this relationship (PUBMED:35451605). Another study suggested that social inequalities in adolescent health, including depression, may be explained by differences in parental resources, which provide youth with social support that may strengthen optimism (PUBMED:20884996).
In summary, social support plays a crucial role in mediating and moderating the relationship between socioeconomic status and depression across different populations and life stages. |
Instruction: Do early postoperative CT findings following type A aortic dissection repair predict early clinical outcome?
Abstracts:
abstract_id: PUBMED:27864636
Do early postoperative CT findings following type A aortic dissection repair predict early clinical outcome? Purpose: The purposes of this study are to determine the prevalence of specific postoperative CT findings following Stanford type A aortic dissection repair in the early postoperative period and to determine if these postoperative findings are predictive of adverse clinical outcome.
Methods: Patients who underwent type A dissection repair between January 2012 and December 2014 were identified from our institutional cardiac surgery database. Postoperative CT exams within 1 month of surgery were retrospectively reviewed to determine sizes and attenuation of mediastinal, pericardial, and pleural fluid, and the presence or absence of pneumomediastinum, pneumothorax, or lung consolidation. Poor early clinical outcome was defined as length of stay (LOS) > 14 days. Student's t test and chi-square test were used to determine the relationship between postoperative CT features and early clinical outcome.
Results: Thirty-nine patients (24 M, 15 F, mean age 58.5 ± 13.7 years) underwent type A dissection repair and mean LOS was 17.3 ± 21.2 days. A subset of 19 patients underwent postoperative CTs within 30 days of surgery, and there was no significant relationship between LOS and sizes and attenuation of mediastinal, pericardial, and pleural fluid, and the presence or absence of pneumomediastinum, pneumothorax, or lung consolidation.
Conclusions: CT features such as mediastinal, pericardial, and pleural fluid were ubiquitous in the early postoperative period. There was no consistent CT feature or threshold that could reliably differentiate between "normal postoperative findings" and early postoperative complications.
abstract_id: PUBMED:32616126
Outcomes after Surgical Repair of Thoracoabdominal Aortic Aneurysm with Distal Aortic Dissection: DeBakey Type Ⅰ versus Type Ⅲ. Objective: To evaluate the early and mid-term results after surgical repair of thoracoabdominal aortic aneurysm (TAAA) in patients with DeBakey type Ⅰ or Ⅲ aortic dissection.
Methods: The clinical data of 130 patients who underwent TAAA repair for chronic DeBakey type Ⅰ (group Ⅰ, n=47) or type Ⅲ (group Ⅲ, n=83) aortic dissections in our center between January 2009 and December 2017 were retrospectively analyzed. Early postoperative results, midterm survival, and re-interventions were compared between these two groups.
Results: The 30-day mortality rate was 6.9% (n=9) in the overall cohort, with no statistically significant difference between group Ⅰ and group Ⅲ (10.6% vs. 4.8%; χ2=0.803, P=0.370). The incidence of major adverse events (38.3% vs. 51.8%; χ2=2.199, P=0.138), 5-year actuarial survival rate [(81.7±5.9)% vs. (87.2±4.2)%; χ2=0.483, P=0.487], and 5-year actuarial freedom from all reinterventions [(84.5±6.7)% vs. (85.5±4.8)%; χ2=0.010, P=0.920] showed no significant differences between these two groups.
Conclusions: The early and mid-term outcomes after surgical repair of TAAA are similar for DeBakey type Ⅰ and type Ⅲ patients. However, studies with larger sample sizes are still required.
abstract_id: PUBMED:30956742
Cardiac arrest identified by a chest CT scan in a patient with normal telemetry findings. Early recognition of cardiac arrest has traditionally been linked to clinical signs and telemetry findings. Few case reports have presented normal telemetry findings in patients with cardiac arrest where a contrast-enhanced CT scan of the chest was able to identify the diagnosis. Early recognition of a cardiac arrest, whether by telemetry monitoring or CT scan, is important to improve clinical outcomes. This case report presents a patient who was hypertensive and unresponsive upon arrival to the emergency department. A chest CT scan to rule out aortic dissection showed no contrast in the pulmonary arteries, aorta, and the rest of the heart chambers although normal telemetry findings were present. Resuscitation was initiated, and the patient survived with poor neurological recovery.
abstract_id: PUBMED:34306462
Early and late outcomes of non-total aortic arch replacement for repair of acute Stanford Type A aortic dissection. Objective: This study evaluated the early and late outcomes of non-total aortic arch replacement for acute Stanford A aortic dissection.
Methods: 131 cases of acute Stanford Type A aortic dissection without rupture admitted to our hospital from January 2016 to December 2019 were included. According to different surgical methods, 51 patients with tear-oriented ascending/hemiarch replacement were included in Group A, and 80 patients who underwent total arch replacement surgery were enrolled in Group B. The perioperative indicators, 30-day mortality rate, and the incidence of postoperative complications were compared between the two groups, and the survival rate of patients were compared by follow-up after discharge.
Results: The cardiopulmonary bypass time, cardiac perfusion time, duration of invasive ventilation, and ICU stay in Group A were significantly shorter than those in Group B (P<0.05). The incidence of transient cerebral dysfunction in Group A was substantially lower than that in Group B (P<0.05). The differences in perioperative mortality, incidence of permanent neurological dysfunction, and incidence of acute kidney and liver damage between the two groups were not statistically significant (P>0.05). In addition, there was no statistically significant difference in survival between the two groups during postoperative follow-up (P>0.05).
Conclusion: For acute Stanford type A aortic dissection without rupture in the aortic arch, non-total aortic arch replacement is a simpler surgical approach with high perioperative safety and long-term efficacy similar to that of total arch replacement.
abstract_id: PUBMED:34830651
Preoperative Predictors of Adverse Clinical Outcome in Emergent Repair of Acute Type A Aortic Dissection in 15 Year Follow Up. Background: Acute type A aortic dissection (AAAD) has high mortality. Improvements in surgical technique have lowered mortality but postoperative functional status and decreased quality of life due to debilitating deficits remain of concern. Our study aims to identify preoperative conditions predictive of undesirable outcome to help guide perioperative management.
Methods: We performed retrospective analysis of 394 cases of AAAD who underwent repair in our institution between 2001 and 2018. A combined endpoint of parameters was defined as (1) 30-day versus hospital mortality, (2) new neurological deficit, (3) new acute renal insufficiency requiring postoperative renal replacement, and (4) prolonged mechanical ventilation with need for tracheostomy.
Results: Total survival/follow-up time averaged 3.2 years with follow-up completeness of 94%. The endpoint was reached by 52.8% of patients. These patients had a higher EuroSCORE II (7.5 versus 5.5), a higher incidence of coronary artery disease (CAD) (9.2% versus 3.2%), neurological deficit (ND) upon presentation (26.4% versus 11.8%), cardiopulmonary resuscitation (CPR) (14.4% versus 1.6%), and intubation (RF) before surgery (16.9% versus 4.8%). Seven-day mortality was 21.6% versus 0%, and hospital mortality was 30.8% versus 0%.
Conclusions: This 15-year follow-up shows that unfavorable postoperative clinical outcome is related to ND, CAD, CPR, and RF on arrival.
abstract_id: PUBMED:37018150
The modified frozen elephant trunk may outperform limited and extended-classic repair in acute type I dissection. Objectives: A better surgical approach for acute DeBakey type I dissection has been sought for decades. We compare operative trends, complications, reinterventions and survival after limited versus extended-classic versus modified frozen elephant trunk (mFET) repair for this condition.
Methods: From 1 January 1978 to 1 January 2018, 879 patients underwent surgery for acute DeBakey type I dissection at Cleveland Clinic. Repairs were limited to the ascending aorta/hemiarch (n=701; 79%) or extended through the arch [extended classic (n=88; 10%) or mFET (n=90; 10%)]. Weighted propensity score matching established comparable groups.
Results: Among weighted propensity-matched patients, mFET repair had circulatory arrest times and postoperative complications similar to limited repair, except for postoperative renal failure, which was twice as high in the limited group [25% (n = 19) vs 12% (n = 9), P = 0.006]. Lower in-hospital mortality was observed following limited compared to extended-classic repair [9.1% (n = 7) vs 19% (n = 16), P = 0.03], but not compared to mFET repair [12% (n = 9) vs 9.5% (n = 8), P = 0.6]. Extended-classic repair had a higher risk of early death than limited repair (P = 0.0005), with no difference between the limited and mFET repair groups (P = 0.9); 7-year survival following mFET repair was 89% compared to 65% after limited repair. Most reinterventions following limited or extended-classic repair were performed as open procedures. All reinterventions following mFET repair were completed endovascularly.
Conclusions: With no increase in in-hospital mortality or complications, less renal failure, and a trend towards improved intermediate survival, mFET repair may be superior to limited or extended-classic repair for acute DeBakey type I dissections. mFET repair facilitates endovascular reintervention, potentially reducing future invasive reoperations and warranting continued study.
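The weighted propensity score matching mentioned in this abstract can be illustrated with a small sketch. The code below is not the Cleveland Clinic analysis: the covariates, treatment assignment, and coefficients are invented purely to show how a logistic propensity model and inverse-probability-of-treatment weights can balance baseline characteristics between two surgical groups.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

# Hypothetical baseline covariates and treatment indicator (mFET vs limited
# repair) -- illustrative values only, not the study's dataset.
n = 200
age = rng.normal(62, 10, n)
malperfusion = rng.binomial(1, 0.3, n)
X = np.column_stack([age, malperfusion])
logit = -4 + 0.05 * age + 0.8 * malperfusion          # assumed assignment model
treated = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Propensity score: estimated probability of receiving the "treated" repair
ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]

# Inverse-probability-of-treatment weights balance the two groups
weights = np.where(treated == 1, 1 / ps, 1 / (1 - ps))

for name, col in zip(["age", "malperfusion"], X.T):
    m1 = np.average(col[treated == 1], weights=weights[treated == 1])
    m0 = np.average(col[treated == 0], weights=weights[treated == 0])
    print(f"Weighted mean {name}: treated {m1:.2f} vs control {m0:.2f}")
```

After weighting, the two groups' covariate means should be close, which is the balance property a weighted propensity-matched comparison relies on.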
abstract_id: PUBMED:25730411
Retrograde type A aortic dissection after thoracoabdominal aneurysm repair: early diagnosis with intraoperative transesophageal echocardiography. Retrograde type A aortic dissection that arises immediately after open replacement of the thoracoabdominal aorta is a rare and potentially lethal complication that has only been reported twice previously. A 74-year-old man with a history of expanding Crawford type I thoracoabdominal aortic aneurysm presented for open surgical repair. The intraoperative course was unremarkable. However, intraoperative transesophageal echocardiography after the repair revealed type A aortic dissection extending up to the sinotubular junction. Subsequently, emergent aortic arch repair was performed under deep hypothermic circulatory arrest. Early diagnosis with transesophageal echocardiography and optimal cerebral protection were instrumental in the successful outcome of this repair.
abstract_id: PUBMED:24566590
Lower heart rate in the early postoperative period does not correlate with long-term outcomes after repair of type A acute aortic dissection. Little evidence exists regarding the need for a reduction in postoperative heart rate after repair of type A acute aortic dissection. This single-center retrospective study was conducted to determine if lower heart rate during the early postoperative phase is associated with improved long-term outcomes after surgery for patients with type A acute aortic dissection. We reviewed 434 patients who underwent aortic repair between 1990 and 2011. Based on the average heart rate on postoperative days 1, 3, 5, and 7, the 434 patients were divided into four groups: less than 70, 70-79, 80-89, and greater than 90 beats per minute. The mean age was 63.3 ± 12.1 years. During a median follow-up of 52 months (range 16-102), 10-year survival in all groups was 67%, and the 10-year aortic event-free rate was 79%. The probability of survival and being aortic event-free using Kaplan-Meier estimates reveal that there is no significant difference when stratified by heart rate. Cox proportional regression analysis for 10-year mortality shows that significant predictors of mortality are age [Hazard Ratio (HR) 1.04; 95% confidence interval (CI) 1.07-1.06; p = 0.001] and perioperative stroke (HR 2.30; 95% CI 1.18-4.50; p = 0.024). Neither stratified heart rate around the time of surgery nor beta-blocker use at the time of discharge was significant. There is no association between stratified heart rate in the perioperative period and long-term outcomes after repair of type A acute aortic dissection. These findings need clarification with further clinical trials.
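As an illustration of the survival methods this abstract reports (Kaplan-Meier estimates and Cox proportional hazards regression), the sketch below uses a tiny, entirely hypothetical follow-up table built with the lifelines package; the variable names and values are assumptions, not the 434-patient cohort.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Hypothetical follow-up data: months of follow-up, death indicator, age,
# and perioperative stroke -- invented values for demonstration only.
df = pd.DataFrame({
    "months": [52, 16, 102, 48, 60, 75, 30, 90, 24, 66],
    "died":   [0,  1,  0,   1,  0,  1,  0,  0,  1,  0],
    "age":    [63, 71, 58,  69, 72, 74, 55, 62, 61, 59],
    "stroke": [0,  1,  0,   0,  1,  1,  0,  0,  0,  0],
})

# Kaplan-Meier estimate of overall survival
km = KaplanMeierFitter()
km.fit(durations=df["months"], event_observed=df["died"])
print(km.survival_function_.tail())

# Cox proportional hazards model: hazard ratios for age and stroke
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="died")
cph.print_summary()  # reports hazard ratios with 95% confidence intervals
```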
abstract_id: PUBMED:36013300
Postoperative Intensive Care Management of Aortic Repair. Vascular surgery patients have multiple comorbidities and are at high risk for perioperative complications. Aortic repair surgery has greatly evolved in recent years, with an increasing predominance of endovascular techniques (EVAR). The incidence of cardiac complications is significantly reduced with endovascular repair, but high-risk patients require postoperative ST-segment monitoring. Open aortic repair may portend a prohibitive risk of respiratory complications that could be a contraindication for surgery. This risk is greatly reduced in the case of an endovascular approach, and general anesthesia should be avoided whenever possible in the case of endovascular repair. Preoperative renal function and postoperative kidney injury are powerful determinants of short- and long-term outcome, so that preoperative risk stratification and secondary prevention are critical tasks. Intraoperative renal protection with selective renal and distal aortic perfusion is essential during open repair. EVAR has lower rates of postoperative renal failure compared to open repair, with approximately half the risk for acute kidney injury (AKI) and one-third of the risk of hemodialysis requirement. Spinal cord ischemia used to be the most distinctive and feared complication of aortic repair. The risk has significantly decreased since the beginning of aortic surgery, with advances in surgical technique and spinal protection protocols, and is lower with endovascular repair. Endovascular repair avoids extensive aortic dissection and aortic cross-clamping and is generally associated with reduced blood loss and less coagulopathy. The intensive care physician must be aware that aortic repair surgery has an impact on every organ system, and the importance of early recognition of organ failure cannot be overemphasized.
abstract_id: PUBMED:26060379
Effects of Gender on Outcomes and Survival Following Repair of Acute Type A Aortic Dissection. Previous studies have demonstrated gender-related differences in early and late outcomes following type A dissection diagnosis. However, it is widely unknown whether gender affects early clinical outcomes and survival after repair of type A aortic dissection. The goal of this study was to compare the early and late clinical outcomes in women versus men after repair of acute type A aortic dissections. Between January 2000 and October 2010 a total of 251 patients from four academic medical centers underwent repair of acute type A aortic dissection. Of those, 79 were women and 172 were men with median ages of 67 (range, 20-87 years) and 58 years (range, 19-83 years), respectively (p < 0.001). Major morbidity, operative mortality, and 10-year actuarial survival were compared between the groups. Operative mortality was not significantly influenced by gender (19% for women vs. 17% for men, p = 0.695). There were similar rates of hemodynamic instability (12% for women vs. 13% men, p = 0.783) between the two groups. Actuarial 10-year survival rates were 58% for women versus 73% for men (p = 0.284). Gender does not significantly impact early clinical outcomes and actuarial survival following repair of acute type A aortic dissection.
Answer: No, early postoperative CT findings following Stanford type A aortic dissection repair do not predict early clinical outcomes. A study that included 39 patients who underwent type A dissection repair found that there was no significant relationship between the length of stay (LOS) and the sizes and attenuation of mediastinal, pericardial, and pleural fluid, nor the presence or absence of pneumomediastinum, pneumothorax, or lung consolidation on postoperative CT scans within 30 days of surgery. The study concluded that CT features such as mediastinal, pericardial, and pleural fluid were ubiquitous in the early postoperative period and that there was no consistent CT feature or threshold that could reliably differentiate between "normal postoperative findings" and early postoperative complications (PUBMED:27864636). |
Instruction: Does the use of orthoses improve self-reported pain and function measures in patients with plantar fasciitis?
Abstracts:
abstract_id: PUBMED:19218074
Does the use of orthoses improve self-reported pain and function measures in patients with plantar fasciitis? A meta-analysis. Objectives: To perform a meta-analysis examining the effects of foot orthoses on self-reported pain and function in patients with plantar fasciitis.
Data Sources: MEDLINE, SPORTDiscus, and CINAHL were searched from their inception until December 2007 using the terms "foot", "plantar fascia", "arch", "orthotic", "orthoses" and "plantar fasciitis".
Study Selection: Original research studies which met these criteria were included: (1) randomised controlled trials or prospective cohort designs, (2) the patients had to be suffering from plantar fasciitis at the time of recruitment, (3) evaluated the efficacy of foot orthoses with self-reported pain and/or function, (4) means, standard deviations, and sample size of each group had to be reported.
Results: We utilised the night splint condition from Roos, Engstrom, and Soderberg (2006; Foot orthoses for the treatment of plantar fasciitis. Foot and Ankle International, 8, 606-611) as a comparator for our pooled orthoses results. The meta-analysis showed significant reductions in pain after orthotic intervention. The Roos et al. (2006) study also showed a significant reduction in pain after night splint treatment. The meta-analysis also showed significant increases in function after orthotic use. In contrast, the Roos et al. (2006) study did not show a significant increase in function after night splinting for 12 weeks.
Conclusion: The use of foot orthoses in patients with plantar fasciitis appears to be associated with reduced pain and increased function.
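For readers unfamiliar with how such pooled estimates are produced, the sketch below shows inverse-variance pooling of standardized mean differences with a DerSimonian-Laird random-effects adjustment. The per-study effect sizes and variances are illustrative placeholders, not the values extracted in this meta-analysis.

```python
import numpy as np

# Hypothetical per-study standardized mean differences for pain reduction
# and their variances -- placeholder values, not the reviewed studies' data.
smd = np.array([-0.45, -0.30, -0.60, -0.25])
var = np.array([0.040, 0.055, 0.070, 0.035])

# Fixed-effect inverse-variance pooling
w = 1.0 / var
pooled = np.sum(w * smd) / np.sum(w)

# DerSimonian-Laird estimate of between-study variance (tau^2)
q = np.sum(w * (smd - pooled) ** 2)
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - (len(smd) - 1)) / c)

# Random-effects pooling with the heterogeneity term added to each variance
w_re = 1.0 / (var + tau2)
pooled_re = np.sum(w_re * smd) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
print(f"Random-effects SMD: {pooled_re:.2f} "
      f"(95% CI {pooled_re - 1.96 * se_re:.2f} to {pooled_re + 1.96 * se_re:.2f})")
```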
abstract_id: PUBMED:28935689
Foot orthoses for plantar heel pain: a systematic review and meta-analysis. Objective: To investigate the effectiveness of foot orthoses for pain and function in adults with plantar heel pain.
Design: Systematic review and meta-analysis. The primary outcome was pain or function categorised by duration of follow-up as short (0 to 6 weeks), medium (7 to 12 weeks) or longer term (13 to 52 weeks).
Data Sources: Medline, CINAHL, SPORTDiscus, Embase and the Cochrane Library from inception to June 2017.
Eligibility Criteria For Selecting Studies: Studies must have used a randomised parallel-group design and evaluated foot orthoses for plantar heel pain. At least one outcome measure for pain or function must have been reported.
Results: A total of 19 trials (1660 participants) were included. In the short term, there was very low-quality evidence that foot orthoses do not reduce pain or improve function. In the medium term, there was moderate-quality evidence that foot orthoses were more effective than sham foot orthoses at reducing pain (standardised mean difference -0.27 (-0.48 to -0.06)). There was no improvement in function in the medium term. In the longer term, there was very low-quality evidence that foot orthoses do not reduce pain or improve function. A comparison of customised and prefabricated foot orthoses showed no difference at any time point.
Conclusion: There is moderate-quality evidence that foot orthoses are effective at reducing pain in the medium term, however it is uncertain whether this is a clinically important change.
abstract_id: PUBMED:30021556
Custom foot orthoses improve first-step pain in individuals with unilateral plantar fasciopathy: a pragmatic randomised controlled trial. Background: Foot orthoses are routinely used to treat plantar fasciopathy in clinical practice. However, minimal evidence exists as to the effect of both truly custom designed foot orthoses, as well as that of the shoe the foot orthoses are placed into. This study investigated the effect of wearing custom foot orthoses and new athletic footwear on first-step pain, average 24-h pain and plantar fascia thickness in people with unilateral plantar fasciopathy over 12 weeks.
Methods: A parallel, three-arm randomised controlled trial with blinding of participants and assessors. 60 participants diagnosed with unilateral plantar fasciopathy were randomised to either custom foot orthoses and new shoes (orthoses group), a sham insole with new shoes (shoe group), or a sham insole placed in the participant's regular shoes (control group). Primary outcome was first-step pain. Secondary outcomes were average 24-h pain and plantar fascia thickness measured on ultrasound. Outcomes were assessed at baseline, 4 week and 12 week trial time-points.
Results: At 4 weeks, the orthoses group reported less first-step pain (p = 0.002) compared to the control group. At 12 weeks, the orthoses group reported less first-step pain compared to both the shoe (p < 0.001) and control (p = 0.01) groups. Both the orthoses (p < 0.001) and shoe (p = 0.006) groups reported less average 24-h pain compared to the control group at 4 and 12 weeks. The orthoses group demonstrated reduced plantar fascia thickness on ultrasound compared to both the shoe (p = 0.032) and control groups (p = 0.011).
Conclusions: Custom foot orthoses in new shoes improve first-step pain and reduce plantar fascia thickness over a period of 12 weeks compared to new shoes alone or a sham intervention.
Trial Registration: Australian New Zealand Clinical Trials Registry ( ACTRN 12613000446763 ). Submitted on the 10th of April 2013 and registered on the 18th of April 2013.
abstract_id: PUBMED:25941995
A randomized controlled trial of custom foot orthoses for the treatment of plantar heel pain. Background: Up to 10% of people will experience heel pain. The purpose of this prospective, double-blind, randomized clinical trial was to compare custom foot orthoses (CFO), prefabricated foot orthoses (PFO), and sham insole treatment for plantar fasciitis.
Methods: Seventy-seven patients with plantar fasciitis for less than 1 year were included. Outcome measures included first step and end of day pain, Revised Foot Function Index short form (FFI-R), 36-Item Short Form Health Survey (SF-36), activity monitoring, balance, and gait analysis.
Results: The CFO group had significantly improved total FFI-R scores (77.4 versus 57.2; P = .03) without group differences for FFI-R pain, SF-36, and morning or evening pain. The PFO and CFO groups reported significantly lower morning and evening pain. For activity, the CFO group demonstrated significantly longer episodes of walking over the sham (P = .019) and PFO (P = .03) groups, with a 125% increase for CFOs, 22% PFOs, and 0.2% sham. Postural transition duration (P = .02) and balance (P = .05) improved for the CFO group. There were no gait differences. The CFO group reported significantly less stretching and ice use at 3 months.
Conclusions: The CFO group demonstrated 5.6-fold greater improvements in spontaneous physical activity versus the PFO and sham groups. All three groups improved in morning pain after treatment that included standardized athletic shoes, stretching, and ice. The CFO changes may have been moderated by decreased stretching and ice use after 3 months. These findings suggest that more objective measures, such as spontaneous physical activity improvement, may be more sensitive and specific for detecting improved weightbearing function than traditional clinical outcome measures, such as pain and disease-specific quality of life.
abstract_id: PUBMED:29032913
Extracorporeal Shockwave Therapy Plus Rehabilitation for Patients With Chronic Plantar Fasciitis Might Reduce Pain and Improve Function but Still Not Lead to Increased Activity: A Case-Series Study With Multiple Outcome Measures. Plantar fasciitis is a common cause of plantar-aspect heel pain. Although many patients will improve, a proportion will have ongoing and sometimes debilitating symptoms. Evidence from randomized controlled trials has shown that extracorporeal shockwave therapy (ESWT) results in benefits in treating pain. However, uncertainties remain whether these benefits translate to improvements in overall function. The present prospective case series examined the results from 35 patients with chronic plantar fasciitis who had undergone a course of ESWT in addition to a graded rehabilitation program. Of the 35 subjects, 34% were male, and the median age was 50.9 years. The duration of symptoms before ESWT was 24 months. The results of the present case series demonstrated statistically significant improvements in measures of self-reported "average pain" from a median of 7.0 of 10 at baseline to 5.0 of 10 at 3 months (p < .001) and of "worst pain" from 9.0 of 10 at baseline to 7.0 of 10 at 3 months (p < .001). In addition, significant improvements were found in several validated patient-rated outcome measures of local foot/ankle function but not in overall markers of health, anxiety/depression scores, or activity levels, despite the improvements in pain. No statistically significant correlations were found between gender, age, or chronicity of symptoms and the improvements seen. No significant side effects occurred in the present study. The results of our series support the use of ESWT for patients with chronic plantar fasciitis for local pain symptoms; however, uncertainties remain regarding global benefits to health.
abstract_id: PUBMED:32993721
Predictors of response to foot orthoses and corticosteroid injection for plantar heel pain. Background: Foot orthoses and corticosteroid injection are common interventions used for plantar heel pain, however few studies have investigated the variables that predict response to these interventions.
Methods: Baseline variables (age, weight, height, body mass index (BMI), sex, education, foot pain, foot function, fear-avoidance beliefs and feelings, foot posture, weightbearing ankle dorsiflexion, plantar fascia thickness, and treatment preference) from a randomised trial in which participants received either foot orthoses or corticosteroid injection were used to predict change in the Foot Health Status Questionnaire foot pain and foot function subscales, and first-step pain measured using a visual analogue scale. Multivariable linear regression models were generated for different dependent variables (i.e. foot pain, foot function and first-step pain), for each intervention (i.e. foot orthoses and corticosteroid injection), and at different timepoints (i.e. weeks 4 and 12).
Results: For foot orthoses at week 4, greater ankle dorsiflexion with the knee extended predicted reduction in foot pain (adjusted R2 = 0.16, p = 0.034), and lower fear-avoidance beliefs and feelings predicted improvement in foot function (adjusted R2 = 0.43, p = 0.001). At week 12, lower BMI predicted reduction in foot pain (adjusted R2 = 0.33, p < 0.001), improvement in foot function (adjusted R2 = 0.37, p < 0.001) and reduction in first-step pain (adjusted R2 0.19, p = 0.011). For corticosteroid injection at week 4, there were no significant predictors for change in foot pain or foot function. At week 12, less weightbearing hours predicted reduction in foot pain (adjusted R2 = 0.25, p = 0.004) and lower baseline foot pain predicted improvement in foot function (adjusted R2 = 0.38, p < 0.001).
Conclusions: People with plantar heel pain who use foot orthoses experience reduced foot pain if they have greater ankle dorsiflexion and lower BMI, while they experience improved foot function if they have lower fear-avoidance beliefs and lower BMI. People who receive a corticosteroid injection experience reduced foot pain if they weightbear for fewer hours, while they experience improved foot function if they have less baseline foot pain.
abstract_id: PUBMED:35315833
Custom-made foot orthoses with and without heel plugs and their effect on plantar pressures during treadmill walking. Background: Foot orthoses have consistently demonstrated an improvement in pain scores for plantar fasciitis. The fabrication of custom-made foot orthoses (CFOs) can vary between clinicians and may include the use of different materials and casting techniques. This cross-sectional study's objective was to quantify plantar pressure for two CFOs, one with a heel plug (HP) and one without.
Methods: Fourteen healthy participants (8 men and 6 women; 35.4 ± 7.7 years) were cast by the same practitioner. Both CFOs were made with the same materials and specifications, except for the HP orthosis, which replaced hard material under the heel with a softer blue PORON ® plug for added cushioning. Plantar pressures were recorded during treadmill walking for both devices in a running shoe. Average pressure, peak pressure, and pressure contact area were determined for three regions of the foot: hindfoot, midfoot, and forefoot. A paired samples t -test determined differences in each region ( P < 0.05).
Results: The HP orthosis reduced the overall means of average pressure, peak pressure, and pressure contact area in the hindfoot while tending to increase these measures in the midfoot and forefoot. The three measures showed statistically significant decreases in the hindfoot, whereas a statistically significant increase was seen in average and peak pressures in the midfoot ( P < 0.05).
Conclusions: CFOs with HPs are more effective than regular CFOs in offloading plantar pressures in the hindfoot while increasing pressures in the midfoot. This is an important finding because offloading the hindfoot is critical in pathologies such as plantar fasciitis to decrease pain and increase function.
abstract_id: PUBMED:22954426
Evaluation of combined prescription of rocker sole shoes and custom-made foot orthoses for the treatment of plantar fasciitis. Background: It is a routine practice to prescribe a combination of rocker shoes and custom-made foot orthoses for patients with plantar fasciitis. Recently, there has been a debate on this practice, and studies have shown that the individual prescription of rocker shoes or custom-made foot orthoses is effective in treating plantar fasciitis. The aim of this study was to evaluate and compare the immediate therapeutic effects of individually prescribed rocker sole shoes and custom-made foot orthoses, and a combined prescription of them on plantar fasciitis.
Methods: This was a cross-over study. Fifteen patients with unilateral plantar fasciitis were recruited; they were from both genders and aged between 40 and 65. Subjects performed walking trials which consisted of one 'unshod' condition and four 'shod' conditions while wearing baseline shoes, rocker shoes, baseline shoes with foot orthotics, and rocker shoes with foot orthotics. The study outcome measures were the immediate heel pain intensity levels as reflected by visual analog scale pain ratings and the corresponding dynamic plantar pressure redistribution patterns as evaluated by a pressure insole system.
Results: The results showed that the combination of rocker shoes and foot orthoses produced a significantly lower visual analog scale pain score (9.7 mm) than rocker shoes alone (30.9 mm) or foot orthoses alone (29.5 mm). Relative to the baseline shoes, it also produced the greatest reduction in medial heel peak pressure (-33.58%) without overloading other plantar regions, compared to rocker shoes (-7.99%) and foot orthoses (-28.82%).
Discussion: The findings indicate that a combined prescription of rocker sole shoes and custom-made foot orthoses had greater immediate therapeutic effects compared to when each treatment had been individually prescribed.
abstract_id: PUBMED:16919213
Foot orthoses for the treatment of plantar fasciitis. Background: The literature suggests mechanical interventions such as foot orthoses and night splints are effective in reducing pain from plantar fasciitis. There is, however, a lack of controlled trials. We studied the effects of foot orthoses and night splints, alone or combined, in a prospective, randomized trial with 1-year followup.
Methods: Forty-three patients (34 women and nine men with a mean age of 46 years) with plantar fasciitis were randomized to receive foot orthoses (n = 13), foot orthoses and night splints (n = 15), or night splints alone (n = 15). Data were available for 34 (79%) patients after treatment (12 weeks), and for 38 (88%) at 1-year followup. Pain, functional limitations, and quality of life were evaluated with the Foot and Ankle Outcome Score.
Results: All groups improved significantly in all outcomes evaluated across all times (p < 0.04). At 12 weeks, pain reductions of 30% to 50% compared to baseline were seen (p < 0.03). At 52 weeks, a pain reduction of 62% was seen in the two groups using foot orthoses compared to 48% in the night splint only group (p < 0.01). Better compliance and fewer side effects were reported for orthosis use. At 12 months, 19 of 23 patients reported still using foot orthoses compared to 1 of 28 still using the night splint.
Conclusions: Foot orthoses and anterior night splints were effective both short-term and long-term in treating pain from plantar fasciitis. Parallel improvements in function, foot-related quality of life, and better compliance suggest that a foot orthosis is the best choice for initial treatment of plantar fasciitis.
abstract_id: PUBMED:30643800
Effectiveness of Foot Biomechanical Orthoses to Relieve Patients Suffering from Plantar Fasciitis: Is the Reduction of Pain Related to Change in Neural Strategy? Plantar fasciitis is a cause of chronic pain under the heel and bottom of the foot. One of the treatments to reduce pain consists of using plantar orthoses to address specific imbalances during foot placement or gait. The aim of the present study was to determine if reduction of pain with a treatment based on plantar orthoses is related to changes in reflexes and muscle activity of the muscles of the lower limbs. Ten patients (51.0±3.5 years, 76.0±2.38 kg, 171.9±1.8 cm, 3 women and 7 men) with plantar fasciitis for less than 1 year were followed up during nine weeks. Soleus M, H, and V waves recorded at rest and during voluntary contraction and Root Mean Square-Electromyogram from four leg muscles recorded during walking and static position were analyzed in patients before and 3, 6, and 9 weeks after wearing orthoses. Pain level and gait and posture parameters were also analyzed. Results were compared to five healthy participants exhibiting no pain (30.6±2.1 years, 60.0±3.5 kg, 167.0±3.4 cm, 3 women and 2 men). Results indicated that pain was significantly reduced after 3 weeks. Hmax/Mmax and Hsup/Msup ratios were significantly higher and MHmax/Mmax and MHsup/Msup were significantly lower in healthy participants compared to patients with plantar fasciitis. No difference in the V/Msup ratio was found between groups. Furthermore, all other measured locomotor, stabilometric, and electromyographic parameters remained unchanged throughout the entire protocol. The reduction of pain is not related to change in neural activity suggesting that, after 9 weeks of wearing plantar orthoses, patients are not yet cured and return to physical activity should be delayed.
Answer: Yes, the use of orthoses does improve self-reported pain and function measures in patients with plantar fasciitis. A meta-analysis examining the effects of foot orthoses on self-reported pain and function in patients with plantar fasciitis showed significant reductions in pain after orthotic intervention and significant increases in function after orthotic use (PUBMED:19218074). Another systematic review and meta-analysis found moderate-quality evidence that foot orthoses are effective at reducing pain in the medium term, although it was uncertain whether this change was clinically important (PUBMED:28935689).
A pragmatic randomized controlled trial reported that custom foot orthoses improved first-step pain and reduced plantar fascia thickness over a period of 12 weeks compared to new shoes alone or a sham intervention (PUBMED:30021556). Similarly, a randomized controlled trial found that custom foot orthoses (CFO) demonstrated greater improvements in spontaneous physical activity compared to prefabricated foot orthoses (PFO) and sham groups, suggesting that CFO may be more effective in improving weightbearing function (PUBMED:25941995).
Another study indicated that a combined prescription of rocker sole shoes and custom-made foot orthoses had greater immediate therapeutic effects compared to when each treatment was individually prescribed (PUBMED:22954426). Furthermore, a study on the treatment of plantar fasciitis found that foot orthoses and anterior night splints were effective both short-term and long-term in treating pain from plantar fasciitis, with better compliance and fewer side effects reported for orthosis use (PUBMED:16919213).
In summary, the evidence suggests that the use of orthoses, particularly custom foot orthoses, is associated with improvements in pain and function for patients with plantar fasciitis. |
Instruction: Should I stay or should I go?
Abstracts:
abstract_id: PUBMED:29354045
Dopaminergic Therapy Increases Go Timeouts in the Go/No-Go Task in Patients with Parkinson's Disease. Parkinson's disease (PD) is characterized by resting tremor, rigidity and bradykinesia. Dopaminergic medications such as L-dopa treat these motor symptoms, but can have complex effects on cognition. Impulse control is an essential cognitive function. Impulsivity is multifaceted in nature. Motor impulsivity involves the inability to withhold pre-potent, automatic, erroneous responses. In contrast, cognitive impulsivity refers to improper risk-reward assessment guiding behavior. Informed by our previous research, we anticipated that dopaminergic therapy would decrease motor impulsivity though it is well known to enhance cognitive impulsivity. We employed the Go/No-go paradigm to assess motor impulsivity in PD. Patients with PD were tested using a Go/No-go task on and off their normal dopaminergic medication. Participants completed cognitive, mood, and physiological measures. PD patients on medication had a significantly higher proportion of Go trial Timeouts (i.e., trials in which Go responses were not completed prior to a deadline of 750 ms) compared to off medication (p = 0.01). No significant ON-OFF differences were found for Go trial or No-go trial response times (RTs), or for number of No-go errors. We interpret that dopaminergic therapy induces a more conservative response set, reflected in Go trial Timeouts in PD patients. In this way, dopaminergic therapy decreased motor impulsivity in PD patients. This is in contrast to the widely recognized effects of dopaminergic therapy on cognitive impulsivity leading in some patients to impulse control disorders. Understanding the nuanced effects of dopaminergic treatment in PD on cognitive functions such as impulse control will clarify therapeutic decisions.
abstract_id: PUBMED:37689007
Attentional priming in Go No-Go search tasks. Go/No-Go responses in visual search yield different estimates of the operation of visual attention than more standard present-versus-absent tasks. Such minor methodological tweaks have a surprisingly large effect on measures that have, for the last half-century or so, formed the backbone of prominent theories of visual attention. In addition, priming effects have a dominating influence on visual search, accounting for effects that have been attributed to top-down guidance in standard theories. Priming effects have, however, never been investigated for searches involving Go/No-Go present/absent decisions. Here, Go/No-Go tasks were used to assess visual search for an odd-one-out face, defined either by color or facial expression. The Go/No-Go responses for the color-based task were very fast for both present and absent trials and, notably, they produced negative slopes of RT against set size. Interestingly, "Go" responses were even faster for the target-absent case. The "Go" responses were, on the other hand, much slower for the expression-based task and slowed further with increasing set size, particularly for the target-absent response. Priming effects were considerable for the feature search; for expression, target-absent priming was strong but did not occur for target-present trials, arguing that repetition priming for this search mainly reflects priming of context rather than of target features. Overall, the results reinforce the point that Go/No-Go tasks are highly informative for theoretical accounts of visual attention and are shown here to cast new light on attentional priming.
abstract_id: PUBMED:29404378
Modeling Individual Differences in the Go/No-go Task with a Diffusion Model. The go/no-go task is one in which there are two choices, but the subject responds only to one of them, waiting out a time-out for the other choice. The task has a long history in psychology and modern applications in the clinical/neuropsychological domain. In this article we fit a diffusion model to both experimental and simulated data. The model is the same as the two-choice model and assumes that there are two decision boundaries and termination at one of them produces a response and at the other, the subject waits out the trial. In prior modeling, both two-choice and go/no-go data were fit simultaneously and only group data were fit. Here the model is fit to just go/no-go data for individual subjects. This allows analyses of individual differences which is important for clinical applications. First, we fit the standard two-choice model to two-choice data and fit the go/no-go model to RTs from one of the choices and accuracy from the two-choice data. Parameter values were similar between the models and had high correlations. The go/no-go model was also fit to data from a go/no-go version of the task with the same subjects as the two-choice task. A simulation study with ranges of parameter values that are obtained in practice showed similar parameter recovery between the two-choice and go/no-go models. Results show that a diffusion model with an implicit (no response) boundary can be fit to data with almost the same accuracy as fitting the two-choice model to two-choice data.
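A minimal simulation can make the go/no-go diffusion model concrete: evidence accumulates toward an explicit "go" boundary that triggers a response, while the lower boundary is implicit and produces no overt response. The parameter values below are arbitrary choices for illustration and do not correspond to any fitted values reported in the article.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_trial(drift, boundary=1.0, start=0.5, noise=1.0,
                   dt=0.001, t_nondecision=0.3, max_t=1.5):
    """One diffusion trial: upper boundary = overt 'go' response,
    lower boundary or timeout = no response (the implicit boundary)."""
    x, t = start, 0.0
    while 0.0 < x < boundary and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    if x >= boundary:
        return "go", t + t_nondecision
    return "no_response", None

# Positive drift on go trials, negative drift on no-go trials (assumed values)
go_trials = [simulate_trial(drift=2.0) for _ in range(500)]
nogo_trials = [simulate_trial(drift=-2.0) for _ in range(500)]

go_rts = [rt for resp, rt in go_trials if resp == "go"]
false_alarms = sum(resp == "go" for resp, _ in nogo_trials)
print(f"Go hit rate: {len(go_rts) / len(go_trials):.2f}, "
      f"mean go RT: {np.mean(go_rts):.3f} s, no-go false alarms: {false_alarms}")
```

Fitting, as opposed to simulating, adjusts drift, boundary, and non-decision time until the predicted RT distribution and accuracy match the observed go/no-go data, which is the kind of analysis the abstract describes.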
abstract_id: PUBMED:32992713
Effect of Age in Auditory Go/No-Go Tasks: A Magnetoencephalographic Study. Response inhibition is frequently examined using visual go/no-go tasks. Recently, the auditory go/no-go paradigm has been also applied to several clinical and aging populations. However, age-related changes in the neural underpinnings of auditory go/no-go tasks are yet to be elucidated. We used magnetoencephalography combined with distributed source imaging methods to examine age-associated changes in neural responses to auditory no-go stimuli. Additionally, we compared the performance of high- and low-performing older adults to explore differences in cortical activation. Behavioral performance in terms of response inhibition was similar in younger and older adult groups. Relative to the younger adults, the older adults exhibited reduced cortical activation in the superior and middle temporal gyrus. However, we did not find any significant differences in cortical activation between the high- and low-performing older adults. Our results therefore support the hypothesis that inhibition is reduced during aging. The variation in cognitive performance among older adults confirms the need for further study on the underlying mechanisms of inhibition.
abstract_id: PUBMED:26955650
BOLD data representing activation and connectivity for rare no-go versus frequent go cues. The neural circuitry underlying response control is often studied using go/no-go tasks, in which participants are required to respond as fast as possible to go cues and withhold from responding to no-go stimuli. In the current task, response control was studied using a fully counterbalanced design in which blocks with a low frequency of no-go cues (75% go, 25% no-go) were alternated with blocks with a low frequency of go cues (25% go, 75% no-go); see also "Segregating attention from response control when performing a motor inhibition task: Segregating attention from response control" [1]. We applied a whole-brain-corrected paired t-test to the data to assess regions differentially activated by low-frequency no-go cues relative to high-frequency go cues. In addition, we conducted a generalized psychophysiological interaction analysis on the data using a right inferior frontal gyrus seed region. This region was identified through the BOLD response t-test and was chosen because the right inferior frontal gyrus is highly implicated in response inhibition.
abstract_id: PUBMED:26869060
Perioperative Predictors of Length of Stay After Total Hip Arthroplasty. Background: Few studies had examined whether specific patient variables or performance on functional testing can predict length of stay (LOS) after total hip arthroplasty (THA). Such tools would enable providers to minimize prolonged LOS by planning appropriate discharge dispositions preoperatively.
Methods: We prospectively recruited 120 patients undergoing a THA through an anterior (n = 40), posterior (n = 40), or lateral (n = 40) approach. Patients performed a timed up-and-go (TUG) test preoperatively to determine if it was predictive of hospital LOS after THA. Other variables of interest included patient age, body mass index, age-adjusted Charlson Comorbidity Index, mean procedure time, and time spent in the postanesthetic care unit. A logistic regression analysis was performed to determine which variables predicted LOS greater than 48 hours, which is our institution's target time to discharge.
Results: The TUG test was predictive of LOS beyond 48 hours. For every 5-second interval increase in TUG time, patients were twice as likely to stay in hospital beyond 48 hours (odds ratio [OR] = 2.02, 95% confidence interval [CI] = 1.02-4.01, P = .043). Patient age (OR = 0.97, 95% CI = 0.90-1.05, P = .46), body mass index (OR = 1.01, 95% CI = 0.86-1.18, P = .90), Charlson Comorbidity Index (OR = 1.29, 95% CI = 0.68-2.44, P = .44), mean procedure time (OR = 1.05, 95% CI = 0.97-1.14, P = .27), and mean time in the postanesthetic care unit (OR = 1.00, 95% CI = 0.99-1.00, P = .94) were not predictive of increased LOS.
Conclusion: The TUG test was predictive of hospital LOS after THA. It is a simple functional test that can be used to assist with discharge planning preoperatively to minimize extended hospital stays.
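The reported odds ratio per 5-second increase in TUG time comes from a logistic regression of prolonged stay on preoperative TUG. The sketch below shows that kind of model on simulated patients; the assumed coefficient linking TUG to prolonged stay is invented for illustration and does not reproduce the study's estimate.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Hypothetical data: preoperative TUG times (seconds) and whether the
# hospital stay exceeded 48 hours -- not the study's actual dataset.
n = 120
tug = rng.normal(12, 4, n).clip(5, 30)
p_long = 1 / (1 + np.exp(-(-3.0 + 0.14 * tug)))   # assumed true relationship
long_stay = rng.binomial(1, p_long)

# Model LOS > 48 h as a function of TUG, scaled to 5-second increments
X = sm.add_constant(tug / 5.0)
fit = sm.Logit(long_stay, X).fit(disp=False)

or_per_5s = np.exp(fit.params[1])
ci_low, ci_high = np.exp(fit.conf_int()[1])
print(f"OR per 5-s increase in TUG: {or_per_5s:.2f} "
      f"(95% CI {ci_low:.2f}-{ci_high:.2f})")
```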
abstract_id: PUBMED:33863323
Predicting short stay total hip arthroplasty by use of the timed up and go-test. Background: One of the most important steps before implementing short stay total hip arthroplasty (THA) is establishing patient criteria. Most existing criteria are mainly based on medical condition, but as physical functioning is associated with outcome after THA, we aim to evaluate the added value of a measure of physical functioning to predict short-stay THA.
Methods: We used retrospective data of 1559 patients who underwent an anterior THA procedure. Logistic regression analyses were performed to study the predictive value of preoperative variables among which preoperative physical functioning by use of the Timed Up and Go test (TUG) for short stay THA (< 36 h). The receiver operating characteristic (ROC) curve and Youden Index were used to define a cutoff point for TUG associated with short stay THA.
Results: TUG was significantly associated with LOS (OR 0.84, 95%CI 0.82-0.87) as analyzed by univariate regression analysis. In multivariate regression, a model with the TUG had a better performance with an AUC of 0.77 (95%CI 0.74-0.79) and a R2 of 0.27 compared to the basic model (AUC 0.75, 95%CI 0.73-0.77, R2 0.24). Patients with a preoperative TUG less than 9.7 s had an OR of 4.01 (95%CI 3.19-5.05) of being discharged within 36 h.
Conclusions: Performance based physical functioning, measured by the TUG, is associated with short stay THA. This knowledge will help in the decision-making process for the planning and expectations in short stay THA protocols with the advantage that the TUG is a simple and fast instrument to be carried out.
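Deriving a TUG cutoff from an ROC curve with the Youden index, as described above, can be sketched as follows. The data are simulated under an assumed relationship between TUG time and short stay, so the resulting cutoff is illustrative rather than a reproduction of the published 9.7-second threshold.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(2)

# Hypothetical data: TUG times and whether discharge occurred within 36 h.
n = 300
tug = rng.normal(10, 3, n).clip(4, 25)
p_short = 1 / (1 + np.exp(2.5 * (tug - 9.7)))      # assumed relationship
short_stay = rng.binomial(1, p_short)

# Shorter TUG should predict short stay, so score with the negated TUG time
fpr, tpr, thresholds = roc_curve(short_stay, -tug)
auc = roc_auc_score(short_stay, -tug)

# Youden index J = sensitivity + specificity - 1, maximised over thresholds
j = tpr - fpr
best = np.argmax(j)
cutoff = -thresholds[best]   # undo the sign flip to recover a cutoff in seconds

print(f"AUC: {auc:.2f}, TUG cutoff by Youden index: {cutoff:.1f} s")
```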
abstract_id: PUBMED:28940554
Go/no-go procedure with compound stimuli with children with autism. The go/no-go with compound stimuli is an alternative to matching-to-sample to produce conditional and emergent relations in adults. The aim of this study was to evaluate the effectiveness of this procedure with two children diagnosed with autism. We trained and tested participants to respond to conditional relations among arbitrary stimuli using the go/no-go procedure. Both learned all the trained conditional relations without developing response bias or responding to no-go trials. Participants demonstrated performance consistent with symmetry, but not equivalence.
abstract_id: PUBMED:34514183
Factors Affecting the Length of Convalescent Hospital Stay Following Total Hip and Knee Arthroplasty. Objectives: : An important role of convalescent rehabilitation wards is the short-term improvement of mobility and activities of daily living (ADL). We aimed to identify predictors associated with the length of stay (LOS) in a convalescent hospital after total hip and knee arthroplasty.
Methods: : This study included 308 patients hospitalized in a convalescent ward following total hip or total knee arthroplasty. The following factors were examined: age, sex, orthopedic comorbidities, motor component of the functional independence measure (M-FIM), M-FIM gain, pain, 10-m walk test, timed up and go (TUG) test, functional ambulation category (FAC), cognitive function, and nutritional status. LOS was categorized as shorter (≤40 days) or longer (>40 days), based on the national average LOS in a convalescent ward, and was statistically analyzed with predictor variables.
Results: : In our hospital, the average LOS was 36.9 ± 21.4 days, and the average M-FIM at admission to the convalescent ward and the M-FIM gain were 71.1 ± 7.0 and 16.3 ± 6.9, respectively. In univariate analysis, there was a significant correlation between LOS and M-FIM at admission and M-FIM gain, pain, TUG time, and FAC. Logistic multivariate analysis identified M-FIM at admission (odds ratio [OR] 0.93, 95% confidence interval [CI] 0.88-0.98) and TUG time (OR 1.10, 95% CI 1.03-1.18) as independent predictors of LOS.
Conclusions: : The M-FIM and TUG test can be used to accurately estimate LOS and to plan rehabilitation treatment in a convalescent rehabilitation ward after lower-limb arthroplasty. Furthermore, preoperative or early postoperative intervention may lead to better quality motor activity and shorter LOS during the convalescent period.
abstract_id: PUBMED:28357193
What Is Trained During Food Go/No-Go Training? A Review Focusing on Mechanisms and a Research Agenda. Purpose Of Review: During food go/no-go training, people consistently withhold responses toward no-go food items. We discuss how food go/no-go training may change people's behavior toward no-go food items by comparing three accounts: (a) the training strengthens 'top-down' inhibitory control over food-related responses, (b) the training creates automatic 'bottom-up' associations between no-go food items and stopping responses, and (c) the training leads to devaluation of no-go food items.
Recent Findings: Go/no-go training can reduce intake of food and choices for food and facilitate short-term weight loss. It appears unlikely that food go/no-go training strengthens top-down inhibitory control. There is some evidence suggesting the training could create automatic stop associations. There is strong evidence suggesting go/no-go training reduces evaluations of no-go food items.
Summary: Food go/no-go training can change behavior toward food and evaluation of food items. To advance knowledge, more research is needed on the underlying mechanisms of the training, the role of attention during go/no-go training, and on when effects generalize to untrained food items.
Answer: The question "Should I stay or should I go?" is a broad and abstract one, and the provided abstracts do not directly answer this question as it pertains to life decisions or choices. However, the abstracts do discuss the concept of response inhibition and decision-making in the context of cognitive tasks and physical assessments, which can metaphorically relate to the broader question of whether to act or refrain from acting in various situations.
In the context of the Go/No-Go tasks, the decision to 'go' or 'stay' (i.e., to respond or not respond) is a measure of impulse control and cognitive processing. For instance, dopaminergic therapy in Parkinson's disease patients was found to increase Go trial Timeouts, indicating a more conservative response set and decreased motor impulsivity (PUBMED:29354045). This suggests that medication can influence the decision-making process, making patients less likely to impulsively 'go' and more likely to 'stay' or withhold their response.
Attentional priming in Go/No-Go tasks also affects the speed and accuracy of responses, with different effects observed depending on the nature of the task, such as color-based or expression-based searches (PUBMED:37689007). This indicates that prior experience and context can influence whether one decides to 'go' or 'stay'.
The diffusion model applied to Go/No-Go tasks provides a framework for understanding how decisions are made when only one response option is available, which can be related to the decision-making process in real-life scenarios (PUBMED:29404378).
In clinical settings, the Timed Up and Go test (TUG) is used to predict outcomes such as the length of hospital stay after total hip arthroplasty, which can be seen as a decision-making tool for healthcare providers to determine patient readiness for discharge – essentially, whether a patient should 'stay' in the hospital or is ready to 'go' home (PUBMED:26869060, PUBMED:33863323, PUBMED:34514183).
Finally, Go/No-Go training with food items has been used to study the mechanisms of behavior change and inhibition, which can have implications for understanding how individuals make decisions about consumption and whether they should 'go' ahead with eating or 'stay' their hand (PUBMED:28357193).
In summary, while the abstracts do not directly address the existential question of "Should I stay or should I go?" they do provide insights into the cognitive and behavioral processes involved in making decisions to act or inhibit action in specific contexts. |
Instruction: Does training on performance based financing make a difference in performance and quality of health care delivery?
Abstracts:
abstract_id: PUBMED:24708628
Does training on performance based financing make a difference in performance and quality of health care delivery? Health care providers' perspective in Rungwe, Tanzania. Background: In recent years, Performance Based Financing (PBF), a form of results based financing, has attracted global attention in health systems in developing countries. PBF promotes autonomous health facilities and introduces financial incentives to motivate health facilities and health workers to attain pre-determined targets. To achieve this, the Tanzanian government, through the Christian Social Services Commission, initiated a PBF pilot project in Rungwe district, Mbeya region. Kilimanjaro Christian Medical Center was given the role of training health workers on PBF principles in Rungwe. The aim of this study was to explore health care providers' perceptions of a three-year training on PBF principles in a PBF pilot project at Rungwe District in Mbeya, Tanzania.
Methods: This was an explorative qualitative study, which took place in the Rungwe PBF pilot area in October 2012. Twenty-six (26) participants were purposively selected. Six took part in in-depth interviews (IDIs) and twenty (20) in group discussions (GDs). Both the IDIs and the GDs explored the perceived benefits and challenges of implementing PBF in the participants' workplaces. Data were manually analyzed using a content analysis approach.
Results: Overall, informants had positive perspectives on the PBF training. Most of the health facilities were able to implement some of the PBF concepts in their workplaces after the training, such as developing job descriptions for their staff, creating quarterly business plans for their facilities, costing their services and entering service agreements with the government, improving record keeping and customer care, and involving the community as partners in running their facilities. The principle of paying individual performance bonuses was the one most commonly mentioned as a major challenge, due to inadequate funding and the poor design of the Rungwe PBF pilot project.
Conclusion: Despite the poor design and inadequate funding, our findings show some promising results after PBF training in the study area. The findings highlight the potential of PBF to act as leverage for initiating innovative and proactive actions, which may improve health personnel performance and quality of care in the study setting with minimal support. However, key policy issues at the national level should be addressed in order to exploit this opportunity.
abstract_id: PUBMED:34051728
Implementation of a performance-based financing scheme in Malawi and resulting externalities on the quality of care of non-incentivized services. Background: Countries in Africa progressively implement performance-based financing schemes to improve the quality of care provided by maternal, newborn and child health services. Beyond its direct effects on service provision, evidence suggests that performance-based financing can also generate positive externalities on service utilization, such as increased use of those services that reached higher quality standards after effective scheme implementation. Little, however, is known about externalities generated within non-incentivized health services, such as positive or negative effects on the quality of services within the continuum of maternal care.
Methods: We explored whether a performance-based financing scheme in Malawi designed to improve the quality of childbirth service provision resulted in positive or negative externalities on the quality of non-targeted antenatal care provision. This non-randomized controlled pre-post-test study followed the phased enrolment of facilities into a performance-based financing scheme across four districts over a two-year period. Effects of the scheme were assessed by various composite scores measuring facilities' readiness to provide quality antenatal care, as well as the quality of screening, prevention, and education processes offered during observed antenatal care consultations.
Results: Our study did not identify any statistically significant effects on the quality of ANC provision attributable to the implemented performance-based financing scheme. Our findings therefore suggest not only the absence of positive externalities, but also the absence of any negative externalities generated within antenatal care service provision as a result of the scheme implementation in Malawi.
Conclusions: Prior research has shown that the Malawian performance-based financing scheme was sufficiently effective to improve the quality of incentivized childbirth service provision. Our findings further indicate that scheme implementation did not affect the quality of non-incentivized but clinically related antenatal care services. While no positive externalities could be identified, we also did not observe any negative externalities attributable to the scheme's implementation. While performance-based incentives might be successful in improving targeted health care processes, they have limited potential in producing externalities - neither positive nor negative - on the provision quality of related non-incentivized services.
abstract_id: PUBMED:25489036
Introduction of performance-based financing in burundi was associated with improvements in care and quality. Several governments in low- and middle-income countries have adopted performance-based financing to increase health care use and improve the quality of health services. We evaluated the effects of performance-based financing in the central African nation of Burundi by exploiting the staggered rollout of this financing across provinces during 2006-10. We found that performance-based financing increased the share of women delivering their babies in an institution by 22 percentage points, which reflects a relative increase of 36 percent, and the share of women using modern family planning services by 5 percentage points, a relative change of 55 percent. The overall quality score for health care facilities increased by 45 percent during the study period, but performance-based financing was found to have no effect on the quality of care as reported by patients. We did not find strong evidence of differential effects of performance-based financing across socioeconomic groups. The performance-based financing effects on the probability of using care when ill were found to be even smaller for the poor. Our findings suggest that a supply-side intervention such as performance-based financing without accompanying access incentives for poor people is unlikely to improve equity. More research into the cost-effectiveness of performance-based financing and how best to target vulnerable populations is warranted.
abstract_id: PUBMED:32530041
Effect of performance-based financing on health service delivery: a case study from Adamawa state, Nigeria. The Nigeria State Health Investment Project (NSHIP) was implemented in three Nigerian states between 2013 and 2018. Under the NSHIP, some local government areas were randomly assigned to Performance-Based Financing (PBF) intervention while others received decentralized facility financing (DFF) for comparison. This article evaluates the effect of PBF compared with DFF on health service delivery indicators in Adamawa state, under this quasi-experimental design, using the difference-in-differences technique. The analysis used health facility monthly data collected by the Health Management Information System through the District Health Information Software 2 (DHIS2). The PBF intervention group significantly increased the quantity of most of its service delivery indicators, such as antenatal care visits and deliveries by skilled personnel compared with the comparison group (DFF) after the introduction of NSHIP, although the baseline level of service delivery between PBF and DFF health facilities was statistically identical prior to the introduction of the intervention. We also conducted robustness check analysis to confirm the effect of PBF. Overall, we found a significant positive effect of PBF on most service delivery outcomes, except full vaccinations and post-natal care. One important policy implication is that we should carefully use PBF for targeted indicators.
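The difference-in-differences estimate behind these results can be illustrated with a toy facility panel: the coefficient on the PBF-by-post interaction captures the change in PBF facilities relative to DFF facilities after the intervention started. The numbers below are invented for demonstration and are not DHIS2 data.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical facility-month panel: monthly skilled deliveries before and
# after rollout in PBF and DFF (comparison) facilities -- illustrative only.
df = pd.DataFrame({
    "deliveries": [20, 22, 21, 23, 35, 33, 24, 25, 23, 26, 27, 28],
    "pbf":        [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
    "post":       [0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1],
})

# Difference-in-differences: the pbf:post coefficient is the treatment effect
did = smf.ols("deliveries ~ pbf + post + pbf:post", data=df).fit()
print("DiD estimate:", round(did.params["pbf:post"], 2))
print("95% CI:", did.conf_int().loc["pbf:post"].round(2).tolist())
```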
abstract_id: PUBMED:23107831
Impact of performance-based financing on primary health care services in Haiti. To strengthen Haiti's primary health care (PHC) system, the country first piloted performance-based financing (PBF) in 1999 and subsequently expanded the approach to most internationally funded non-government organizations. PBF complements support (training and technical assistance). This study evaluates (a) the separate impact of PBF and international support on PHC's service delivery; (b) the combined impact of PBF and technical assistance on PHC's service delivery; and (c) the costs of PBF implementation in Haiti. To minimize the risk of facilities neglecting potential non-incentivized services, the incentivized indicators were randomly chosen at the end of each year. We obtained quantities of key services from four departments for 217 health centres (15 with PBF and 202 without) from 2008 through 2010, computed quarterly growth rates and analysed the results using a difference-in-differences approach by comparing the growth of incentivized and non-incentivized services between PBF and non-PBF facilities. To interpret the statistical analyses, we also interviewed staff in four facilities. Whereas international support added 39% to base costs of PHC, incentive payments added only 6%. Support alone increased the quantities of PHC services over 3 years by 35% (2.7%/quarter). However, support plus incentives increased these amounts by 87% over 3 years (5.7%/quarter) compared with facilities with neither input. Incentives alone were associated with a net 39% increase over this period, and more than doubled the growth of services (P < 0.05). Interviews revealed no adverse impacts and, in fact, indicated beneficial impacts on quality. Incentives proved to be a relatively inexpensive, well accepted and very effective complement to support, suggesting that a small amount of money, strategically used, can substantially improve PHC. Haiti's experience, after more than a decade of use, indicates that incentives are an effective tool to strengthen PHC.
abstract_id: PUBMED:35583836
Discontinuation of performance-based financing in primary health care: impact on family planning and maternal and child health. Performance-based financing (PBF) is advocated as an effective means to improve the quality of care by changing healthcare providers' behavior. However, there is limited evidence on its effectiveness in low- and middle-income countries and on its implementation in primary care settings. Evidence on the effect of discontinuing PBF is even more limited than that of introducing PBF schemes. We estimate the effects of discontinuing PBF in Egypt on family planning, maternal health, and child health outcomes. We use a difference-in-differences (DiD) model with fixed effects, exploiting a unique dataset of six waves of spatially constructed facility-level health outcomes. We find that discontinuing performance-based incentives to providers had a negative effect on the knowledge of contraceptive methods, iron supplementation during pregnancy, the prevalence of childhood acute respiratory infection, and, more importantly, under-five child mortality, all of which were indirectly targeted by the PBF scheme. No significant effects are reported for directly targeted outcomes. Our findings suggest that PBF can induce permanent changes in providers' behavior, but this may come at the expense of non-contracted outcomes.
abstract_id: PUBMED:34874806
Performance-based Financing versus "Unconditional" Direct Facility Financing - False Dichotomy? A debate about how best to finance essential health care in low- and middle-income settings has been running for decades, with public health systems often failing to provide reliable and adequate funding for primary health care in particular. Since 2000, many have advocated and experimented with performance-based financing as one approach to addressing this problem. More recently, in light of concerns over high transaction costs, mixed results and challenges of sustainability, a less conditional approach, sometimes called direct facility financing, has come into favor. In this commentary, we examine the evidence for the effectiveness of both modalities and argue that they share many features and requirements for effectiveness. In the right context, both can contribute to health system strengthening, and they should be seen as potentially complementary, rather than as rivals.
abstract_id: PUBMED:28549142
How do performance-based financing programmes measure quality of care? A descriptive analysis of 68 quality checklists from 28 low- and middle-income countries. This paper seeks to systematically describe the length and content of quality checklists used in performance-based financing programmes, their similarities and differences, and how checklists have evolved over time. We compiled a list of supply-side, health facility-based performance-based financing (PBF) programmes in low- and lower middle-income countries based on a document review. We then solicited PBF manuals and quality checklists from implementers and donors of these PBF mechanisms. We entered each indicator from each quality checklist into a database verbatim in English, and translated into English from French where appropriate, and categorized each indicator according to the Donabedian framework and an author-derived categorization. We extracted 8,490 quality indicators from 68 quality checklists across 32 PBF implementations in 28 countries. On average, checklists contained 125 indicators; within the same program, checklists tend to grow as they are updated. Using the Donabedian framework, 80% of indicators were structure-type, 19% process-type, and less than 1% outcome-type. The author-derived categorization showed that 57% of indicators relate to availability of resources, 24% to managing the facility and 17% assess knowledge and effort. There is a high degree of similarity in a narrow set of indicators used in checklists for common service types such as maternal, neonatal and child health. We conclude that performance-based financing offers an appealing approach to targeting specific quality shortfalls and advancing toward the Sustainable Development Goals of high quality coverage. Currently most indicators focus on structural issues and resource availability. There is scope to rationalize and evolve the quality checklists of these programs to help achieve national and global goals to improve quality of care.
abstract_id: PUBMED:37383569
Effects of performance based financing on facility autonomy and accountability: Evidence from Zambia. Several low and lower- middle income countries have been using Performance-Based Financing (PBF) to motivate health workers to increase the quantity and quality of health services. Studies have demonstrated that PBF can contribute to improved health service delivery and health outcomes, but there is limited evidence on the mechanisms through which PBF can necessitate changes in the health system. Using difference-in-difference and synthetic control analytical approaches, we investigated the effect of PBF on autonomy and accountability at service delivery level using data from a 3-arm cluster randomised trial in Zambia. The arms consisted of PBF where financing is linked to outputs in terms of quality and quantity (intervention 1), input financing where funding is fully provided to finance all required inputs regardless of performance (intervention 2), and the current standard of care where there is input financing but with possible challenges in funding (pure control). The results show an increase in autonomy at PBF sites compared to sites in the pure control arm and an increase in accountability at PBF sites compared to sites in both the input-financing and pure control arms. On the other hand, there were no effects on autonomy and accountability in the input-financing sites compared to the pure control sites. The study concludes that PBF can improve financial and managerial autonomy and accountability, which are important for improving health service delivery. However, within the PBF districts, the magnitude of change was different, implying that management and leadership styles matter. Future research could examine whether personal attributes, managerial capacities of the facility managers, and the operating environment have an effect on autonomy and accountability.
abstract_id: PUBMED:37189035
Improving the readiness and clinical quality of antenatal care - findings from a quasi-experimental evaluation of a performance-based financing scheme in Burkina Faso. Background: While maternal mortality has declined globally, it remains highest in low-income countries. High-quality antenatal care (ANC) can prevent or decrease pregnancy-related complications for mothers and newborns. The implementation of performance-based financing (PBF) schemes in Sub-Saharan Africa to improve primary healthcare provision commonly includes financial indicators linked to ANC service quality indicators. In this study, we examine changes in ANC provision produced by the introduction of a PBF scheme in rural Burkina Faso.
Methods: This study followed a quasi-experimental design with two data collection points comparing effects on ANC service quality between primary health facilities across intervention and control districts based on difference-in-differences estimates. Performance scores were defined using data on structural and process quality of care reflecting key clinical aspects of ANC provision related to screening and prevention pertaining to first and follow-up ANC visits.
Results: We found a statistically significant increase of 10 percentage points in performance scores for facilities' readiness to provide ANC services. The clinical care provided to different ANC client groups scored generally low, especially with respect to preventive care measures, and we failed to observe any substantial changes in the clinical provision of ANC attributable to the PBF scheme.
Conclusion: The observed effect pattern reflects the incentive structure implemented by the scheme, with a stronger focus on structural elements compared with clinical aspects of care. This limited the scheme's overall potential to improve ANC provision at the client level after the observed three-year implementation period. To improve both facility readiness and health worker performance, stronger incentives are needed to increase adherence to clinical standards and to improve patient care outcomes.
Answer: Training on performance-based financing (PBF) can make a difference in performance and quality of health care delivery, as evidenced by several studies conducted in different countries.
In Tanzania, health care providers had positive perspectives on PBF training. After the training, most health facilities were able to implement PBF concepts, such as developing job descriptions, creating business plans, costing services, and improving record keeping and customer care. Challenges were noted due to inadequate funding and poor project design, but the training still showed promising results for motivating health personnel and improving the quality of care with minimal support (PUBMED:24708628).
In Burundi, the introduction of PBF was associated with significant improvements in care and quality. The study found that PBF increased institutional deliveries and the use of modern family planning services, as well as the overall quality score for health care facilities. However, it did not affect the quality of care as reported by patients, and the effects on the probability of using care when ill were smaller for the poor (PUBMED:25489036).
In Nigeria, the PBF intervention group significantly increased the quantity of most service delivery indicators, such as antenatal care visits and deliveries by skilled personnel, compared with the comparison group. This suggests a positive effect of PBF on service delivery outcomes (PUBMED:32530041).
In Haiti, PBF was found to be a relatively inexpensive and effective complement to support, substantially improving primary health care services. The study showed that incentives alone were associated with a net 39% increase in service quantities over three years (PUBMED:23107831).
However, it is important to note that the impact of PBF on non-incentivized services can vary. For instance, a study in Malawi did not identify any statistically significant effects on the quality of antenatal care provision attributable to the implemented PBF scheme, suggesting that PBF might have limited potential in producing externalities on the quality of related non-incentivized services (PUBMED:34051728).
In summary, training on PBF can lead to improvements in health care delivery and quality, but the extent of its impact may depend on the design and funding of the PBF program, as well as the specific health services targeted by the incentives. |
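Several of the studies summarized above (for example, PUBMED:32530041, PUBMED:35583836 and PUBMED:37189035) rely on difference-in-differences estimation to attribute changes in service delivery to PBF. The following is a minimal illustrative sketch of that estimator in Python; the facility panel, column names and values are entirely hypothetical and are not taken from any of the cited studies.

# Minimal difference-in-differences sketch on hypothetical facility-level data.
# Columns: facility_id, post (0 = pre-PBF period, 1 = post), pbf (1 = PBF arm),
# anc_visits (monthly antenatal care visits). All values are invented.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "facility_id": [1, 1, 2, 2, 3, 3, 4, 4],
    "post":        [0, 1, 0, 1, 0, 1, 0, 1],
    "pbf":         [1, 1, 1, 1, 0, 0, 0, 0],
    "anc_visits":  [40, 62, 35, 55, 38, 44, 42, 47],
})

# The coefficient on the pbf:post interaction is the difference-in-differences
# estimate: the pre-to-post change in PBF facilities minus the change in
# comparison facilities.
model = smf.ols("anc_visits ~ pbf + post + pbf:post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["facility_id"]}
)
print(model.params["pbf:post"])

In evaluations such as the Nigerian NSHIP study, the same interaction logic is typically applied to monthly routine facility data with additional facility and time controls.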
Instruction: Is the health of young unemployed Australians worse in times of low unemployment?
Abstracts:
abstract_id: PUBMED:19236364
Is the health of young unemployed Australians worse in times of low unemployment? Objective: To compare the health of young unemployed Australians during a period of low unemployment (April 2007: rate 4.4%) against published Australian norms for 18-24 year olds and unemployed people during a time of higher unemployment (February 1995 to January 1996: rate 8.1% to 8.9%).
Methods: Two hundred and fifty-one unemployed 18-25 year olds residing in New South Wales completed the SF36 Health Survey version 2 (SF36v2) during a time of low unemployment. SF36v2 subscale and component summary scores were compared with published norms for 18-24 year olds and for unemployed persons during a time of higher unemployment.
Results: Young unemployed people during a period of low unemployment reported poorer health in all areas when compared with age-matched norms and poorer psychological health when compared with the published norms for unemployed people from a time when unemployment rates were higher.
Conclusions: The health of young unemployed individuals during a time of low unemployment was poor when compared to both the general population and to unemployed people during a time of higher unemployment.
Implications: Public health interventions must focus on improving the health of young unemployed people to support their engagement with and contribution to Australian society.
abstract_id: PUBMED:27353508
Association Among Sociodemograhic Factors, Work Ability, Health Behavior, and Mental Health Status for Young People After Prolonged Unemployment. The purpose of this study was to explore the associations of prolonged unemployment, health, and work ability among young workers using data from the 2008-2010 Occupational Health Counselling project in Kuopio, Eastern Finland. The total sample for this study was 190 young unemployed adults. The questionnaire included the Work Ability Index (WAI), the Beck Depression Inventory, the Alcohol Use Disorders Identification Test, and the Occupational Health Counselling Survey. Multivariate analyses revealed that men had a higher prevalence of prolonged unemployment than women. Using drugs for purposes other than treatment was associated independently with an increased prevalence of prolonged unemployment. Low WAI scores were associated with a higher prevalence of prolonged unemployment. This study showed that attention should be paid to male workers, those who have poor or moderate work ability and workers who use drugs. Young unemployed workers should be recognized at an early stage. A comprehensive, flexible network of community resources is essential to support young unemployed adults.
abstract_id: PUBMED:15804165
Is the health of the long-term unemployed better or worse in high unemployment areas? Data on 25.6 million adults from the UK 2001 Census were analysed to compare the regional pattern of self-rated health of the long-term unemployed to that of people from different social classes and of those who have never worked. The results show that the health of the long-term unemployed was better in high unemployment regions, and conversely, worse where the local labour market was traditionally stronger. This is the reverse of the regional pattern found for different social classes and for those who have never worked.
abstract_id: PUBMED:20943994
Mental health among the unemployed and the unemployment rate in the municipality. Background: Previous research has shown that unemployment experiences increase the risk of poor mental health and that this effect differs depending on individual characteristics. Relatively little is known, however, about how the unemployment rate and labour market conditions impact the relationship. This study investigates how municipal unemployment rates and vacancy rates affect mental health in a nationally representative longitudinal survey of initially unemployed Swedish respondents.
Methods: The study uses a nationally representative longitudinal survey of currently and recently unemployed people in Sweden, in which respondents were re-interviewed one year after the initial interview. Mental health was measured using the GHQ-12. The present article uses multilevel models (hierarchical linear models) to combine municipal-level information on unemployment levels and vacancy rates with individual-level control variables.
Results: Higher municipal vacancy rates improved mental health among the unemployed. However, no coherent effect of municipal unemployment rate on the relationship between unemployment and mental health was found.
Conclusions: The effect of municipal vacancy rates can be understood in terms of the impact of perceived opportunity on the sense of life-course predictability. That there was no effect of municipal unemployment rate indicates that high local unemployment levels do not reduce the sense of shame and perceived stigma among the unemployed. Taken together, our findings would seem to present a rather bleak picture of the current dramatic labour market situation. The unemployed will be negatively affected by the extremely low demand for labour, while they will not be able to take comfort from their growing numbers.
abstract_id: PUBMED:24899516
Unemployment and health: experiences narrated by young Finnish men. Studies have shown that the experiences and consequences of unemployment can affect people differently depending on, for example, age and gender. The purpose of the present study was to describe young Finnish men's experiences of being unemployed as well as how their experiences of health emerged. Fifteen young unemployed Finnish men in the age range 18 to 27 years were interviewed face to face. Purposive sampling was used to increase the variation among informants. The interview texts were analyzed using both manifest and latent qualitative content analysis. The present results showed that the young men were strongly negatively affected by being unemployed. They described how they had slowly lost their foothold. They also described feelings of shame and guilt as well as a flight from reality. The present results show that even young men who have only experienced shorter periods of unemployment, in this study periods between 2 and 6 months, are negatively affected, for example, with regard to their identity and emotional life. Further research is needed to describe and elucidate in more detail the effects of unemployment on men of different ages and living in different contexts.
abstract_id: PUBMED:32782471
Youth unemployment and mental health: prevalence and associated factors of depression among unemployed young adults in Gedeo zone, Southern Ethiopia. Background: The high rate of unemployment among young adults in Ethiopia, which was 25.3% in 2018, is a major social and public health concern. The risk of mental health problems such as depression is higher among the unemployed than among the employed. However, no study had been conducted on the prevalence and associated factors of depression among unemployed young adults in Ethiopia. Hence, this study aimed to assess the prevalence and associated factors of depression among unemployed young adults in Gedeo zone, Southern Ethiopia.
Methods: A community-based cross-sectional study was conducted among 1452 unemployed young adults in Gedeo zone, Southern Ethiopia from May to July 2019. Study participants were selected using a systematic random sampling technique. The presence of depression was assessed using the Patient Health Questionnaire-9 (PHQ-9), and data on the socio-demographic characteristics of participants were collected with a structured questionnaire. Data were coded and entered into Epi-Data version 3.1 and analyzed with SPSS version 20. A multivariable logistic regression analysis was carried out to identify factors associated with depression, and variables with p values < 0.05 were considered statistically significant. The strength of the association was presented as an adjusted odds ratio with 95% confidence interval.
Result: The overall prevalence of depression among unemployed young adults in the present study was 30.9% (95% CI: 28.4%, 33.1%). Of the total study participants with depression, 56.7% had mild depression, 36% had moderate depression, and 7.3% had severe depression. Being male (AOR = 1.40, 95% CI: 1.10, 1.80), long duration of unemployment (≥ 1 years) (AOR = 1.56, 95% CI: 1.21, 1.99), low self-esteem (AOR = 1.32, 95% CI: 1.03, 1.68), poor social support (AOR = 1.98, 95% CI: 1.34, 2.93), and current alcohol use (AOR = 1.86, 95% CI: 1.33, 2.59) were significantly associated with depression.
Conclusion: The results of our study indicated that depression is an important public health problem among unemployed young adults in Ethiopia. Therefore, policy makers and program planners should establish an appropriate strategy for the prevention, early detection and management of depression in this population. In addition to addressing the needs of unemployed young people, improving access to care for depression is an important next step. Furthermore, we recommend further studies to understand the nature of depression among unemployed young people and to strengthen the current results.
abstract_id: PUBMED:9456072
Unemployment as a disease and diseases of the unemployed. There is a causal link between unemployment and the deterioration of health status, but there is also a selection effect, such that people with health problems have greater difficulty finding a new job. Unemployed men, especially the young, increase their alcohol consumption compared with employed referents. Unemployed persons are smokers to a greater extent than employed persons, and smokers have a higher risk of becoming unemployed. Psychological indicators have been studied extensively in connection with the health effects of unemployment. Losing, or gaining, employment has clear effects on psychiatric symptoms and on well-being. The death rate is increased among unemployed persons.
abstract_id: PUBMED:16597477
The lesser evil: bad jobs or unemployment? A survey of mid-aged Australians. Paid work is related to health in complex ways, posing both risks and benefits. Unemployment is associated with poor health, but some jobs may still be worse than no job at all. This research investigates that possibility. We used cross-sectional survey data from Australians aged 40-44 (N = 2497). Health measures were depression, physical health, self-rated health, and general practitioner visits. Employees were classified according to their job quality (strain, perceived job insecurity and marketability). Employee health was compared to people who were unemployed, and to people who were not in the labour force. We found that unemployed people reported worse health when compared to all employees. However, distinguishing in terms of employee's job quality revealed a more complex pattern. Poor quality jobs (characterized by insecurity, low marketability and job strain) were associated with worse health when compared to jobs with fewer or no stressors. Furthermore, people in jobs with three or more of the psychosocial stressors report health that is no better than the unemployed. In conclusion, paid work confers health benefits, but poor quality jobs which combine several psychosocial stressors could be as bad for health as being unemployed. Thus, workplace and industrial relations policies that diminish worker autonomy and security may generate short-term economic gains, but place longer-term burdens on the health of employees and the health-care system.
abstract_id: PUBMED:26379589
The impact of anticipated stigma on psychological and physical health problems in the unemployed group. Previous research has demonstrated that the unemployed suffer increased psychological and physical health problems compared to their employed counterparts. Further, unemployment leads to an unwanted new social identity that is stigmatizing, and stigma is known to be a stressor causing psychological and physical health problems. However, it is not yet known whether being stigmatized as an unemployed group member is associated with psychological and physical health in this group. The current study tested the impact of anticipated stigma (AS) on psychological distress (PD) and physical health problems, operationalized as somatic symptoms (SSs), in a volunteer sample of unemployed people. Results revealed that AS had a direct effect on both PD and SSs, such that greater AS significantly predicted higher levels of both. Moreover, the direct effect on SSs became non-significant when PD was taken into account. Thus, to the extent that unemployed participants anticipated experiencing greater stigma, they also reported increased PD, and this PD predicted increased SSs. Our findings complement and extend the existing literature on the relationships between stigmatized identities, PD and physical health problems, particularly in relation to the unemployed group. This group is important to consider both theoretically, given the unwanted and transient nature of the identity compared to other stigmatized identities, but also practically, as the findings indicate a need to orient to the perceived valence of the unemployed identity and its effects on psychological and physical health.
abstract_id: PUBMED:24704770
Public health issues of the Roma and non-Roma unemployed in the Ózd microregion of Hungary. Introduction: In their previous studies, the authors of the present work showed that the public health situation and socioeconomic position of unemployed Roma persons in the Ózd microregion were well below the average of the whole Hungarian population.
Aim: To continue these previous studies, the authors wanted to determine whether the greater proportion of unemployed Roma persons in the Ózd microregion, compared with the country average, could contribute to the worse public health situation of unemployed persons and to a hygienically poor living environment that is hardly, or not at all, suitable for learning.
Method: Data from 400 unemployed Roma (96 males, 97 females) and Caucasian non-Roma subjects (114 males, 93 females), obtained in 2012 and 2013 using self-completed and interview questionnaires, were analysed. In addition, occupational medical examination methods were applied and the results were analysed.
Results: All studied parameters (public health and epidemiological safety, indoor living environmental hygiene, conditions for learning) indicated a significantly disadvantaged situation of the Roma compared with the non-Roma unemployed persons. There was a clear relationship between these parameters and other factors adversely influencing the quality of life of Roma persons (unemployment, deep poverty, lower level of education, shorter life span). Finally, the results showed that the quality of life conditions of the lowest tenth to fifth of the non-Roma unemployed persons were comparable to the average found among Roma unemployed persons.
Conclusions: The authors conclude that i) the greater proportion of unemployed Roma persons in the Ózd microregion contributes to the worse public health, epidemiological safety and living conditions of unemployed persons in this region of the country; and ii) it would be essential to provide Roma persons with conditions appropriate for learning, which could enable them to obtain the qualifications necessary for employment.
Answer: Yes, the health of young unemployed Australians is worse in times of low unemployment. A study comparing the health of young unemployed Australians during a period of low unemployment (April 2007: rate 4.4%) with published Australian norms for 18-24 year olds and unemployed people during a time of higher unemployment (February 1995 to January 1996: rate 8.1% to 8.9%) found that young unemployed people reported poorer health in all areas when compared with age-matched norms and poorer psychological health when compared with the published norms for unemployed people from a time when unemployment rates were higher (PUBMED:19236364). |
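Several of the abstracts above report adjusted odds ratios from multivariable logistic regression (for example, the depression prevalence study PUBMED:32782471). The sketch below shows, on simulated stand-in data, how such adjusted odds ratios and 95% confidence intervals are typically derived; the variable names and values are hypothetical and are not taken from any of the cited studies.

# Illustrative derivation of adjusted odds ratios (AORs) with 95% confidence
# intervals from a multivariable logistic regression. Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "depressed":       rng.integers(0, 2, n),  # 1 = PHQ-9 score above the cut-off
    "male":            rng.integers(0, 2, n),
    "long_unemployed": rng.integers(0, 2, n),  # unemployed for >= 1 year
    "poor_support":    rng.integers(0, 2, n),
})

fit = smf.logit("depressed ~ male + long_unemployed + poor_support", data=df).fit(disp=0)

# Exponentiating the coefficients and their confidence limits yields the AORs
# and 95% CIs in the form abstracts report (e.g., AOR = 1.40, 95% CI: 1.10, 1.80).
aor = np.exp(fit.params).rename("AOR")
ci = np.exp(fit.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})
print(pd.concat([aor, ci], axis=1))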
Instruction: Is sense of coherence helpful in coping with caregiver burden for dementia?
Abstracts:
abstract_id: PUBMED:24954832
Is sense of coherence helpful in coping with caregiver burden for dementia? Background: Sense of coherence (SOC) is associated with a reduced risk of various health problems and is thought to be a major factor related to the ability to cope with stress. In the present study, we examined the association between caregiver burden and SOC among caregivers to persons with dementia.
Methods: Participants included 274 caregivers or family members of community-dwelling elderly dementia patients. To assess the cognitive function of patients, neuropsychological tests (e.g. Mini-Mental State Examination, Clinical Dementia Rating) were conducted by a clinical psychologist who was well trained in interviewing participants; the tests used a semi-structured interview protocol. Senior neurologists and psychiatrists also independently evaluated the dementia status of patients. To assess the SOC and caregiver burden, a social welfare counsellor asked questions from a 13-item version of the SOC scale and the short, eight-item Japanese version of the Zarit Caregiver Burden Interview (ZBI).
Results: Among 78 caregivers of elderly subjects with cognitive impairment due to dementia, the ZBI score was significantly associated with SOC (r = -0.38, P = 0.001). Multiple regression analyses revealed that SOC scores (β = -0.42, P < 0.001) and Mini-Mental State Examination scores (β = -0.28, P = 0.009) were significantly associated with ZBI scores (F(2, 76) = 10.51, P < 0.001). SOC was closely associated with personal strain in the ZBI (β = -0.41, P < 0.001; F(3, 75) = 8.53, P < 0.001).
Conclusion: Caregivers with a strong SOC may be less prone to experiencing personal strain from their burden. These results suggest that reinforcement of SOC would contribute to reducing the personal strain.
abstract_id: PUBMED:31559837
Antonovsky's sense of coherence and resistance resources reduce perception of burden in family carers of people with Alzheimer's disease. Objectives: Taking care of people with dementia (PWD) has been associated with some degree of burden. The variability of the carer's burden can be partially explained by their personal characteristics. Antonovsky's model of health defined the resistance resources (RRs) as essential mechanisms to cope with stressors, and to shape the personal sense of coherence (SOC). This study identifies the RRs related to carers' SOC, and their implications in the perception of burden in family dementia carers. Methods: A sample of 308 participants from the 'SOC & DEM study' (154 carers and 154 PWD) was recruited from two memory clinics. Carers' personal characteristics of burden, SOC, self-efficacy, coping strategies, perceived social support, and depression were evaluated using standardized instruments. PWD's degree of dependence and behavioural and psychological symptoms of dementia (BPSD) were assessed too. A path analysis was used to test the relationship between caregiver burden and SOC, including the personal RRs of the carers and clinical data of the PWD. Results: The path model identified SOC as a major factor related to carers' burden perception (r = -.327). Self-efficacy (r = .285), two coping strategies, 'use of instrumental support' (r = -.235) and 'behavioural disengagement' (r = -.219), and perceived social support (r = .304) were the main carer personal characteristics directly related to SOC. Caring experience (r = -.281) was the main carer factor related to burden, while dependence (r = .156) and BPSD (r = .157) were the dementia factors. Conclusion: SOC has previously been related to carers' burden. The results contributed to identifying relevant and modifiable personal characteristics as RRs that could reduce this burden.
abstract_id: PUBMED:23813690
Coping Strategy and Caregiver Burden Among Caregivers of Patients With Dementia. Background: This study aims to examine whether coping strategies employed by caregivers are related to distinct symptoms of patients with dementia and to investigate the associations between burden and coping among caregivers of patients with dementia.
Methods: A cross-sectional study design was used. A total of 57 caregivers of patients with dementia were enrolled. Coping strategies were assessed using the Ways of Coping Checklist, and burden was assessed using the Chinese version of Caregiver Burden Inventory. Correlations between coping and patients' behavior or memory problems were examined. Severities of behavior and memory problems were adjusted to examine the correlations between caregiver burden and coping strategies.
Results: The patients' disruptive behavior problems were associated with avoidance, and depression problems were associated with avoidance and wishful thinking. After adjusting for severity of behavior problems, coping strategies using avoidance were positively correlated with caregiver burden.
Conclusions: Emotion-focused coping strategies are a marker of caregiver burden.
abstract_id: PUBMED:25114532
Caregiver burden and coping strategies in caregivers of patients with Alzheimer's disease. Background: Alzheimer's disease (AD) causes considerable distress in caregivers who are continuously required to deal with requests from patients. Coping strategies play a fundamental role in modulating the psychologic impact of the disease, although their role is still debated. The present study aims to evaluate the burden and anxiety experienced by caregivers, the effectiveness of adopted coping strategies, and their relationships with burden and anxiety.
Methods: Eighty-six caregivers received the Caregiver Burden Inventory (CBI) and the State-Trait Anxiety Inventory (STAI Y-1 and Y-2). The coping strategies were assessed by means of the Coping Inventory for Stressful Situations (CISS), according to the model proposed by Endler and Parker in 1990.
Results: The CBI scores (overall and single sections) were extremely high and correlated with dementia severity. Women, as well as older caregivers, showed higher scores. The trait anxiety (STAI-Y-2) correlated with the CBI overall score. The CISS showed that caregivers mainly adopted task-focused strategies. Women mainly adopted emotion-focused strategies and this style was related to a higher level of distress.
Conclusion: AD is associated with high distress among caregivers. The burden strongly correlates with dementia severity and is higher in women and in elderly subjects. Chronic anxiety affects caregivers who mainly rely on emotion-oriented coping strategies. The findings suggest providing support to families of patients with AD through tailored strategies aimed to reshape the dysfunctional coping styles.
abstract_id: PUBMED:25515800
Effectiveness of coping strategies intervention on caregiver burden among caregivers of elderly patients with dementia. Background: Coping strategies are a potential way to improve interventions designed to manage the caregiver burden of dementia. The purpose of this study was to develop an intervention targeted towards improving coping strategies and to examine its effectiveness on reducing caregiver burden.
Methods: A controlled study design was used. Fifty-seven caregivers of dementia patients were enrolled. Coping strategies were assessed with the Revised Ways of Coping Checklist (WCCL-R) and caregiver burden was assessed with the Chinese version of the Caregiver Burden Inventory. The participants were randomly divided into two groups. The intervention group was offered a series of five interventions in which problem-solving skills, knowledge of dementia, social resources, and emotional support were taught every 2 weeks, and the control group was telephoned every 2 weeks for the usual clinical management. Two weeks after the end of the intervention, we again administered the WCCL-R and the Caregiver Burden Inventory. A two-way repeated-measures ANOVA was used to evaluate the changes in coping strategies and caregiver burden.
Results: Forty-six participants completed the study. No statistically significant differences were noted in the demographic data between the two groups. On the problem-focused coping subscale on the WCCL-R, the intervention group's mean score increased by 3.8 points, and the control group's decreased by 5.1 points (F = 7.988, P = 0.007). On the seeking social support coping subscale on the WCCL-R, the intervention group's mean score increased by 3.8 points, and the control group's decreased by 3.1 points (F = 4.462, P = 0.04). On the Caregiver Burden Inventory, the intervention group's mean score decreased by 7.2 points, and the control group's increased by 2.2 points (F = 6.155, P = 0.017).
Conclusions: Psychosocial intervention can help caregivers to adopt more problem-focused and social support coping strategies, which are beneficial in terms of reducing the caregiver burden.
abstract_id: PUBMED:29723129
Self-Compassion, Coping Strategies, and Caregiver Burden in Caregivers of People with Dementia. Objective: Caring for someone with dementia can have negative consequences for caregivers, a phenomenon known as caregiver burden. Coping strategies influence the impact of caregiving-related stress. Specifically, using emotion-focused strategies has been associated with lower levels of burden, whereas dysfunctional strategies have been related to increased burden. The concept of self-compassion has been linked to both positive outcomes and the coping strategies that are most advantageous to caregivers. However, as yet, no research has studied self-compassion in caregivers. Therefore, the aim of this study was to explore the relationship between self-compassion, coping strategies and caregiver burden in dementia caregivers.
Method: Cross-sectional survey data was collected from 73 informal caregivers of people with dementia recruited from post-diagnostic support services and caregiver support groups.
Results: Self-compassion was found to be negatively related to caregiver burden and dysfunctional coping strategies and positively related to emotion-focused coping strategies. Dysfunctional strategies mediated the relationship between self-compassion and caregiver burden, whereas emotion-focused strategies did not.
Conclusion: Caregivers with higher levels of self-compassion report lower levels of burden and this is at least partly due to the use of less dysfunctional coping strategies.
Clinical Implications: Interventions that develop self-compassion could represent a useful intervention for struggling caregivers.
abstract_id: PUBMED:25525074
Burden of care, social support, and sense of coherence in elderly caregivers living with individuals with symptoms of dementia. Family members are often the care providers of individuals with dementia, and it is assumed that the need for this will increase. There has been little research into the association between the burden of care and the caregiver's sense of coherence or receipt of social support. This study examined the relationship between the social support subdimensions and sense of coherence and the burden of care among older people giving care to a partner with dementia. The study was a cross-sectional observation study of 97 individuals, ≥65 years old and living with a partner who had symptoms of dementia. We used the Informant Questionnaire on Cognitive Decline in the Elderly, the Relative Stress Scale, the Social Provisions Scale, the Sense of Coherence Scale, and a questionnaire on sociodemographic variables. We used multiple regression analysis in a general linear model procedure. We defined statistical significance as p < 0.05. With adjustments for sociodemographic variables, the association with burden of care was statistically significant for the subdimension attachment (p < 0.01) and for sense of coherence (p < 0.001). The burden of care was associated with attachment and with sense of coherence. Community nurses and other health professionals should take necessary action to strengthen attachment and sense of coherence among the caregivers of people with dementia. Qualitative studies could provide deeper understanding of the variation informal caregivers experience when living together with their partner with dementia.
abstract_id: PUBMED:35932155
Longitudinal effect of dementia carers' sense of coherence on burden. Background: A sense of coherence (SOC) could help us better understand why there are individuals who cope better than others in similar situations. The study aimed to assess the effect of SOC on the course of burden reports in relatives of persons with dementia.
Methods: This was a prospective cohort study of 156 dementia carers. SOC was assessed with the Orientation to Life Questionnaire (OLQ-13), burden with the Burden Interview, and personal and contextual characteristics were collected via ad hoc questions. The main dementia symptoms, including functional difficulties (Disability Assessment for Dementia), neuropsychiatric symptoms (Neuropsychiatric Inventory), and cognitive impairment (Mini-Mental State Examination), were also assessed. A general linear model was fitted to determine the effect of SOC and other covariates on burden throughout the follow-up. Burden differences between baseline and 12 and 24 months were analysed, and the baseline OLQ-13 score was grouped by quartiles.
Results: The global burden reported increased after 24 months (F = 9.98; df = 2; p < 0.001), but not equally for all carers; daughters reported the greatest increase. SOC, functional disability, and neuropsychiatric disorders showed a significant effect on burden, but time did not. Carers with higher SOC at baseline tend to remain with lower burden levels, whereas carers with low SOC reported higher burden at each visit.
Conclusions: This study reports evidence of the effect of SOC on burden at baseline, 12 and 24 months of follow-up. Burden scores differ by carers' SOC; those with higher SOC showed lower burden levels, whereas the low-SOC group reported a greater burden at each visit.
abstract_id: PUBMED:18279282
The relationship between caregiver burden, caregivers' perceived health and their sense of coherence in caring for elders with dementia. Aim: The aim of this study is to examine associations between caregiver burden, perceived health and sense of coherence in family caregivers to persons with dementia living at home.
Background: Most of the studies on family caregivers have focused on burden and morbidity. However, the caregiver's sense of coherence and perceived health have not been studied earlier in relation to caregiver burden.
Design: A cross-sectional investigation design was used.
Methods: A total of 2238 older persons receiving any form of social services were invited to an assessment of cognitive capacity. Those who had cognitive decline (255) were invited for a medical examination, and 130 persons were diagnosed as having dementia. The family caregivers of persons with dementia answered a questionnaire including a caregiver burden scale, the Nottingham health profile scale, the sense of coherence scale and the Euroqol instrument.
Results: The family caregivers experienced moderate burden, and strong associations were noted between burden (especially isolation, disappointment and emotional involvement) and both perceived health and sense of coherence, adjusted for age and relationship. Caregivers with lower burden reported significantly better perceived health and a higher mean sense of coherence score than caregivers with higher burden.
Conclusions: Assessment of status of family caregivers of persons with dementia living at home seems to be gaining considerable importance. The caregiver burden scale and the sense of coherence scale seem to be highly useful for identifying carers at risk of stress, pattern of burden and coping strategies.
Relevance To Clinical Practice: Nurses can help family caregivers to identify their negative experiences about caregiving and can help them reflect upon their coping strategies to find balance in their situation. Risk groups of caregivers may be identified, especially those with low perceived health and sense of coherence, for early interventions to reduce burden.
abstract_id: PUBMED:23472614
Sense of coherence, burden, and affective symptoms in family carers of people with dementia. Background: Caring for a relative with dementia has been associated with high levels of psychological morbidity in carers. Sense of coherence is an important resource of successful coping with caregiving. The purpose of the present study was to examine the relationship between stress associated with caregiving, sense of coherence, and self-reported depression and anxiety in family carers of people with dementia. We hypothesized that carers reporting high levels of anxiety and depression will report low levels of coherence, and that the relationship between caregiver stress and affective symptoms will be mediated by sense of coherence.
Methods: A total of 170 carers of people with dementia took part in the present study. Family carers completed the Sense of Coherence Scale, the Relative Stress Scale, and the Hospital Anxiety and Depression Scale. A series of multiple linear regressions were conducted to examine the relationship between stress related to caregiving, caregiver anxiety and depression, and whether sense of coherence mediated this relationship.
Results: Self-reported anxiety and depression were associated with low levels of sense of coherence. Sense of coherence mediated the relationship between burden and self-reported depressive affect and anxiety symptoms.
Conclusions: Carers reporting high levels of anxiety and depression are more likely to report low levels of sense of coherence. The relationship between stress related to caregiving and depressive symptoms is mediated by carers' self-reported sense of coherence. Future psychotherapeutic intervention studies in family carers of people with dementia may incorporate strategies that specifically target sense of coherence.
Answer: Yes, sense of coherence (SOC) appears to be helpful in coping with caregiver burden for dementia. Studies have found that caregivers with a strong SOC may be less prone to experiencing personal strain from their burden, suggesting that reinforcement of SOC could contribute to reducing personal strain (PUBMED:24954832). Additionally, SOC has been identified as a major factor related to carer's burden perception, with higher SOC associated with lower perceived burden. Personal characteristics such as self-efficacy, coping strategies, and social support that are directly related to SOC can also influence the perception of burden in family dementia carers (PUBMED:31559837). Furthermore, caregivers with higher levels of SOC report lower levels of burden, and this is at least partly due to the use of less dysfunctional coping strategies (PUBMED:29723129). The longitudinal effect of dementia carers' SOC on burden also indicates that those with higher SOC at baseline tend to maintain lower burden levels over time (PUBMED:35932155). Moreover, caregivers with lower burden reported significantly better perceived health and higher mean score of SOC than caregivers with higher burden, highlighting the importance of SOC in the caregiving experience (PUBMED:18279282). Lastly, the relationship between stress related to caregiving and depressive symptoms is mediated by carers' self-reported sense of coherence, indicating that interventions targeting SOC could be beneficial for caregivers of people with dementia (PUBMED:23472614). |
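One of the abstracts above (PUBMED:23472614) reports that sense of coherence mediated the relationship between caregiver stress and affective symptoms. As a rough illustration of the product-of-coefficients logic behind such a mediation claim, the sketch below uses simulated stand-in variables; it is not the analysis code of the cited study, and the effect sizes are invented.

# Product-of-coefficients mediation sketch: stress -> sense of coherence (SOC)
# -> depression. All variables are simulated stand-ins.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 170
stress = rng.normal(size=n)
soc = -0.5 * stress + rng.normal(size=n)                # mediator
depression = 0.3 * stress - 0.4 * soc + rng.normal(size=n)
df = pd.DataFrame({"stress": stress, "soc": soc, "depression": depression})

# Path a: effect of stress on the mediator (SOC).
path_a = smf.ols("soc ~ stress", data=df).fit().params["stress"]
# Path b: effect of SOC on depression, adjusting for stress.
path_b = smf.ols("depression ~ soc + stress", data=df).fit().params["soc"]

# The indirect (mediated) effect of stress on depression via SOC.
print(path_a * path_b)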
Instruction: Is a patient's knowledge of cardiovascular risk factors better after the occurrence of a major ischemic event?
Abstracts:
abstract_id: PUBMED:24211108
Is a patient's knowledge of cardiovascular risk factors better after the occurrence of a major ischemic event? Survey of 135 cases and 260 controls. Aim: We hypothesized that patients (cases) hospitalized for a major ischemic event (myocardial infarction, stroke, decompensation of peripheral arterial disease) acquire better knowledge of cardiovascular risk factors (smoking, hypertension, diabetes, dyslipidemia, obesity) than a control population (atheromatous patients without a major ischemic event, patients consulting for vein disease or a diabetes evaluation, and accompanying persons), and have a better understanding of the usefulness of lifestyle changes (quitting smoking, regular exercise, Mediterranean diet, low-salt diet, weight control, diabetes care).
Methods: A questionnaire was offered during vascular surgery consultations and vascular and cardiac functional investigations at the M Pavillon of the Édouard-Herriot hospital, Lyon, France. Over five months, 395 questionnaires (135 cases and 260 controls) were analyzed.
Results: The global knowledge score was significantly higher for cases than for controls (cases 3.23±1.81; controls 2.77±2.03; P=0.037). Cases did not abide by monitoring and dietary rules better, except as regards the management of diabetes. Regular physical activity was significantly more prevalent among controls than among cases. Cases mainly received their information from their doctors (general practitioner for 59% of controls and 78% of cases, cardiologist for 25% of controls and 57% of cases), while controls more often obtained their information from magazines or advertising.
Conclusion: Our results show that after a major ischemic event, cases' knowledge of risk factors is better than that of the rest of the population, but without improved adherence to lifestyle rules. This suggests the usefulness of evaluating a therapeutic education program for atheromatous disease.
abstract_id: PUBMED:31294030
Karma of Cardiovascular Disease Risk Factors for Prevention and Management of Major Cardiovascular Events in the Context of Acute Exacerbations of Chronic Obstructive Pulmonary Disease. There is compelling epidemiological evidence that airway exposure to cigarette smoke, air pollution particles, as well as bacterial and viral pathogens is strongly related to acute ischemic events. Over the years, there have been important animal and human studies that have provided experimental evidence to support a causal link. Studies show that patients with cardiovascular diseases (CVDs) or risk factors for CVD are more likely to have major adverse cardiovascular events (MACEs) after an acute exacerbation of chronic obstructive pulmonary disease (COPD), and patients with more severe COPD have higher cardiovascular mortality and morbidity than those with less severe COPD. The risk of MACEs in acute exacerbation of COPD is determined by the complex interactions between genetics, behavioral, metabolic, infectious, and environmental risk factors. To date, there are no guidelines regarding the prevention, screening, and management of the modifiable risk factors for MACEs in the context of COPD or COPD exacerbations, and there is insufficient CVD risk control in those with COPD. A deeper insight of the modifiable risk factors shared by CVD, COPD, and acute exacerbations of COPD may improve the strategies for reduction of MACEs in patients with COPD through vaccination, tight control of traditional CV risk factors and modifying lifestyle. This review summarizes the most recent studies regarding the pathophysiology and epidemiology of modifiable risk factors shared by CVD, COPD, and COPD exacerbations that could influence overall morbidity and mortality due to MACEs in patients with acute exacerbations of COPD.
abstract_id: PUBMED:19845740
Cardiovascular risk factors and collateral artery formation. Arterial lumen narrowing and vascular occlusion are the actual causes of morbidity and mortality in atherosclerotic disease. Collateral artery formation (arteriogenesis) refers to an active remodelling of non-functional vascular anastomoses to functional collateral arteries, capable of bypassing the site of obstruction and preserving the tissue jeopardized by ischaemia. Hemodynamic forces such as shear stress and wall stress play a pivotal role in collateral artery formation, accompanied by the expression of various cytokines and invasion of circulating leucocytes. Arteriogenesis hence represents an important compensatory mechanism for atherosclerotic vessel occlusion. As arteriogenesis mostly occurs when lumen narrowing by atherosclerotic plaques takes place, the presence of cardiovascular risk factors (e.g. hypertension, hypercholesterolaemia and diabetes) is highly likely. Risk factors for atherosclerotic disease affect collateral artery growth directly and indirectly by altering hemodynamic forces or influencing cellular function and proliferation. Adequate collateralization varies significantly among atherosclerotic patients: some profit from the presence of extensive collateral networks, whereas others do not. Cardiovascular risk factors could increase the risk of adverse cardiovascular events in certain patients because of the reduced protection through an alternative vascular network. Likewise, drugs primarily thought to control cardiovascular risk factors might contribute to or counteract collateral artery growth. This review summarizes current knowledge on the influence of cardiovascular risk factors and the effects of cardiovascular medication on the development of collateral vessels in experimental and clinical studies.
abstract_id: PUBMED:34068323
Importance of Increased Arterial Resistance in Risk Prediction in Patients with Cardiovascular Risk Factors and Degenerative Aortic Stenosis. Background: Cardiovascular disease is a leading cause of heart failure (HF) and major adverse cardiac and cerebral events (MACCE).
Objective: To evaluate the impact of vascular resistance on HF and MACCE incidence in subjects with cardiovascular risk factors (CRF) and degenerative aortic valve stenosis (DAS).
Methods: From January 2016 to December 2018, in 404 patients with cardiovascular disease, including 267 patients with moderate-to-severe DAS and 137 patients with CRF, mean values of resistive index (RI) and pulsatile index (PI) were obtained from carotid and vertebral arteries. Patients were followed up for 2.5 years for the primary outcome of HF and MACCE episodes.
Results: RI and PI values in patients with DAS compared to CRF were significantly higher, with optimal cut-offs discriminating arterial resistance of ≥0.7 for RI (sensitivity: 80.5%, specificity: 78.8%) and ≥1.3 for PI (sensitivity: 81.3%, specificity: 79.6%). Age, female gender, diabetes, and DAS were all independently associated with increased resistance. During the follow-up period, 68 (16.8%) episodes of HF-MACCE occurred. High RI (odds ratio 1.25, 95% CI 1.13-1.37) and PI (odds ratio 1.21, 95% CI 1.10-1.34) were associated with risk of HF-MACCE.
Conclusions: An accurate assessment of vascular resistance may be used for HF-MACCE risk stratification in patients with DAS.
abstract_id: PUBMED:30741676
Examining the Link Between Cardiovascular Risk Factors and Neuropsychiatric Symptoms in Mild Cognitive Impairment and Major Depressive Disorder in Remission. Background: Cardiovascular risk factors (CVRFs) have been linked to both depression and cognitive decline but their role in neuropsychiatric symptoms (NPS) has yet to be clarified.
Objective: Understanding the role of CVRFs in the etiology of NPS for prospective treatments and preventive strategies to minimize these symptoms.
Methods: We examined the distribution of NPS using the Neuropsychiatric Inventory (NPI) scores in three cohorts from the Prevention of Alzheimer's Dementia with Cognitive Remediation Plus Transcranial Direct Current Stimulation in Mild Cognitive Impairment and Depression (PACt-MD) study: older patients with a lifetime history of major depressive disorder (MDD) in remission, patients with mild cognitive impairment (MCI), and patients with combined MCI and MDD. We also examined the link between individual NPS and CVRFs, Framingham risk score, and Hachinski ischemic score in a combined sample.
Results: Analyses were based on a sample of 140 subjects, 70 with MCI, 38 with MCI plus MDD, and 32 with MDD. There was no effect of age, gender, education, cognition, or CVRFs on the presence (NPI >1) or absence (NPI = 0) of NPS. Depression was the most prevalent affective NPS domain followed by night-time behaviors and appetite changes across all three diagnostic groups. Agitation and aggression correlated negatively while anxiety, disinhibition, night-time behaviors, and irritability correlated positively with CVRFs (all p-values <0.05). Other NPS domains showed no significant association with CVRFs.
Conclusion: CVRFs are significantly associated with individual NPI sub-scores but not with total NPI scores, suggesting that different pathologies may contribute to different NPS domains.
abstract_id: PUBMED:36121209
Traditional and Non-traditional Cardiovascular Risk Factors and Cardiovascular Disease in Women with Psoriasis. Women with cardiovascular disease are underdiagnosed, undertreated and under-represented in research. Even though the increased risk of cardiovascular disease among patients with psoriasis is well established, only a few studies have examined women with psoriasis. This study examined the prevalence of cardiovascular risk factors and cardiovascular disease among women with psoriasis. Using the Copenhagen City Heart Study and the Copenhagen General Population Study, 66,420 women were included in a cross-sectional design. Of these, 374 (0.56%) women had hospital-diagnosed psoriasis. Women with vs without hospital-diagnosed psoriasis had higher odds ratios of having traditional cardiovascular risk factors, including hypertriglyceridaemia, smoking, obesity, type 2 diabetes, and low physical activity, and of having non-traditional cardiovascular risk factors, including low level of education, high level of psychosocial stress, and low-grade inflammation. Compared with women from the general population, the multivariable adjusted odds ratios of heart failure and ischaemic cerebrovascular disease in women with hospital-diagnosed psoriasis were 2.51 (95% confidence interval 1.33-4.73) and 2.06 (1.27-3.35), respectively. In conclusion, women with hospital-diagnosed psoriasis have a higher prevalence of traditional and non-traditional cardiovascular risk factors, and an increased risk of heart failure and ischaemic cerebrovascular disease, even after adjusting for these cardiovascular risk factors.
abstract_id: PUBMED:12243367
The influence of pre-operative electrocardiographic abnormalities and cardiovascular risk factors on patient and graft survival following renal transplantation. Premature cardiovascular disease (CVD) is the leading cause of mortality and of graft loss in renal transplant recipients. However, the pattern of cardiovascular risk factors (specifically modifiable risk factors) is not well established and may be different from the general population. In this study we investigated the importance of electrocardiographic abnormalities and conventional cardiovascular risk factors present at the time of first renal transplantation in a longitudinal follow-up study of 515 patients. Overall, 45.8% were cigarette smokers, 13.0% were diabetic, 75.1% had "hypertension", 12.2% had symptoms of angina pectoris and 9.1% had a past history of myocardial infarction or stroke. Two thirds of ECG tracings were abnormal. 58.7% of men and 37.5% of women had left ventricular hypertrophy (LVH). Overall, 28.2% had simple LVH, 20.5% had LVH with repolarisation changes ('strain'). 434 patients had complete data for multivariate analyses of patient and graft survival. A Cox multivariate analysis of patient survival (patients whose graft failed were censored in the analysis) identified: age (hazard ratio 1.03/year), diabetes (2.72), smoking (1.81) and family history of premature CVD (2.17) as independent risk factors for patient survival. An abnormal ECG was also independently associated with outcome, with the exception of isolated left ventricular hypertrophy. Left ventricular hypertrophy with strain, or ischaemic changes were associated with adverse outcome with a hazard ratio of 1.96 and 3.30 respectively. A similar analysis of the determinants of graft survival (patients who died with a functioning graft were censored in the analysis) identified: acute rejection (hazard ratio 2.38), cigarette smoking (1.48) and age (1.04/year) as independent predictors of graft failure. These data demonstrate a high prevalence of ECG abnormalities and CV risk factors in renal transplant recipients. Moreover, ECG abnormalities and "conventional" cardiovascular risk factors are associated with poor graft and patient outcome and represent potentially remediable risk factors for renal transplant recipients.
abstract_id: PUBMED:20374483
Microvascular responses to cardiovascular risk factors. Hypertension, hypercholesterolemia, diabetes, and obesity are among a growing list of conditions that have been designated as major risk factors for cardiovascular disease (CVD). While CVD risk factors are well known to enhance the development of atherosclerotic lesions in large arteries, there is also evidence that the structure and function of microscopic blood vessels can be profoundly altered by these conditions. The diverse responses of the microvasculature to CVD risk factors include oxidative stress, enhanced leukocyte- and platelet-endothelial cell adhesion, impaired endothelial barrier function, altered capillary proliferation, enhanced thrombosis, and vasomotor dysfunction. Emerging evidence indicates that a low-grade systemic inflammatory response that results from risk factor-induced cell activation and cell-cell interactions may underlie the phenotypic changes induced by risk factor exposure. A consequence of the altered microvascular phenotype and systemic inflammatory response is an enhanced vulnerability of tissues to the deleterious effects of secondary oxidative and inflammatory stresses, such as ischemia and reperfusion. Future efforts to develop therapies that prevent the harmful effects of risk factor-induced inflammation should focus on the microcirculation.
abstract_id: PUBMED:12521001
Fibrinogen: cardiovascular risk factor. This paper demonstrates that plasma fibrinogen is a risk factor for ischaemic cardiovascular disease. Apart from its hemostatic functions, it has an important role in the atherothrombotic process. Prospective studies in a normal population and in patients with pre-existing cardiovascular disease demonstrate that fibrinogen is a predictor of cardiovascular events, either as a first episode or as a recurrence. An epidemiological study being carried out in Venezuela is also reviewed; it serves as a pilot study for Latin America, because that population differs from those in which previous studies have been performed. The factors that influence fibrinogen levels are also discussed; some of them can be modified, which could be useful for prevention of the disease. Further studies are considered necessary to evaluate the benefit of controlling fibrinogen levels.
abstract_id: PUBMED:24849556
Relationship Between Hematocrit Level and Cardiovascular Risk Factors in a Community-Based Population. Background: This study aimed to determine the relationship between hematocrit (HCT) levels and cardiovascular risk factors in a community-based population of middle-aged adults.
Methods: From April 2011 to February 2012, a total of 1,884 middle-aged adults were selected from a community-based population in China. Blood and urine samples were collected for routine blood and urine tests and for measurement of plasma glucose and lipid levels. Baseline information, including traditional cardiovascular risk factors, was obtained by a standard questionnaire. We evaluated the distribution of HCT values for middle-aged adults with or without cardiovascular risk factors. There were 548 males and 1,336 females in this study. The mean age of all subjects was 54.7 ± 6.7 years. There were 1,209 subjects with risk factors and 675 without risk factors.
Results: The HCT levels in subjects with risk factors were higher than those without risk factors (P = 0.005). According to a simplified tool for evaluation of the 10-year risk of ischemic cardiovascular diseases (CVDs) in Chinese populations, all subjects were divided into four groups: the ultralow-risk group (1,367, 72.6%), low-risk group (232, 12.3%), intermediate-risk group (201, 10.7%), and high-risk/ultrahigh-risk group (84, 4.4%). Compared with HCT levels in the ultralow-risk group, significant differences were found in the low-risk, intermediate-risk, and high-risk/ultrahigh-risk groups (all P < 0.05).
Conclusion: Our results indicate that elevated HCT levels may be positively associated with cardiovascular risk factors. Thus, the combination of HCT values and cardiovascular risk factors may enable early diagnosis of CVDs.
Answer: Yes, a patient's knowledge of cardiovascular risk factors is generally better after the occurrence of a major ischemic event. A study that surveyed 135 cases (patients hospitalized for a major ischemic event) and 260 controls (atheromatous patients without a major ischemic event, patients consulting for vein disease or diabetes evaluation, and accompanying persons) found that the global knowledge score was statistically higher for cases than for controls. This suggests that experiencing a major ischemic event may lead to increased awareness and knowledge about cardiovascular risk factors among patients (PUBMED:24211108).
Instruction: Can we afford to do laparoscopic appendectomy in an academic hospital?
Abstracts:
abstract_id: PUBMED:16307952
Can we afford to do laparoscopic appendectomy in an academic hospital? Background: Multiple studies have shown laparoscopic appendectomy to be safe for both acute and perforated appendicitis, but there have been conflicting reports as to whether it is superior from a cost perspective. Our academic surgical group, which performs all operative cases with resident physicians, has been challenged to reduce expenses in this era of cost containment. We recognize that resident training is an expensive commodity that is poorly reimbursed, and we hypothesized that laparoscopic appendectomy was too expensive to justify resident teaching of this procedure. The purpose of this study was to determine if laparoscopic appendectomy is more expensive than open appendectomy.
Methods: From April 2003 to April 2004, all patients undergoing appendectomy for presumed acute appendicitis at our university-affiliated teaching hospital were reviewed; demographic data, equipment charge, minutes in the operating room (OR), hospital length of stay, and total hospital charge were analyzed. OR minute charges were gradated based on equipment use and level of skilled nursing care. Conversions to open appendectomy were included in the laparoscopic group for analysis.
Results: During the study period, 247 patients underwent appendectomy for a preoperative diagnosis of acute appendicitis, with 152 open (113 inflamed, 37 perforated, 2 normal), 88 laparoscopic (69 inflamed, 12 perforated, 7 normal), and 7 converted (2 inflamed, 4 perforated, 1 normal) operations performed. The majority were men (67%) with a mean age of 31.4 ± 2.2 years. Overall, there was a significant difference (P < .05) in intraoperative equipment charge ($125.32 ± $3.99 open versus $1,078.70 ± $24.06 laparoscopic), operative time charge ($3,022.16 ± $57.51 versus $4,065.24 ± $122.64), and total hospital charge ($12,310 ± $772 versus $16,773 ± $1,319), but no significant difference in operative minutes (56.3 ± 1.3 versus 57.4 ± 2.3), operating room minutes (90.5 ± 1.7 versus 95.7 ± 2.5), or hospital days (2.6 versus 2.2). In subgroup analysis of patients with uncomplicated appendicitis, the open and laparoscopic groups had equivalent hospital days (1.47 versus 1.49) but significantly different hospital charges ($9,632.44 versus $14,251.07).
Conclusions: Although operative time was similar between the 2 groups, operative and total hospital charges were significantly higher in the laparoscopic group. Unless patient factors warrant a laparoscopic approach (questionable diagnosis, obesity), we submit open appendectomy remains the most cost-effective procedure in a teaching environment.
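The charge comparison reported in this abstract is, in essence, a two-group contrast of per-patient hospital charges. As an illustration only, the following sketch runs an unequal-variance (Welch) t-test on simulated charges whose means are loosely modeled on the figures above; the spreads, sample generation, and variable names are assumptions, not the study's actual analysis:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical per-patient total hospital charges (USD); group means and SDs are
# assumptions chosen only to resemble the magnitudes reported in the abstract.
open_charges = rng.normal(12_310, 4_000, 152)
lap_charges = rng.normal(16_773, 5_000, 95)   # laparoscopic plus converted cases

t_stat, p_value = stats.ttest_ind(lap_charges, open_charges, equal_var=False)

print(f"open mean  ${open_charges.mean():,.0f}")
print(f"lap mean   ${lap_charges.mean():,.0f}")
print(f"Welch t = {t_stat:.2f}, p = {p_value:.4g}")
```

With group differences of this size relative to the spread, the test returns a very small p-value, consistent with the "P < .05" difference in charges described above.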
abstract_id: PUBMED:24710238
Cost is not a drawback to perform laparoscopic appendectomy in an academic hospital. Appendectomy is the most frequently performed emergent surgical procedure in western countries. There is still controversy about which alternative is clinically and economically superior: open or laparoscopic appendectomy (LA). Our aim was to determine the clinical outcomes and costs of both procedures in our academic institution. A retrospective comparative study was performed including patients undergoing appendectomy from January to December 2011. Demographic data, operating room occupancy time, hospital length of stay, complications, and economic data were obtained. A total of 116 appendectomies were performed during the study period, 23.27% laparoscopic and 76.72% open. Groups were similar in terms of demographics and intraoperative findings. Operating room occupancy time was longer and hospital stay was shorter in the laparoscopic group. No significant differences were found with respect to postoperative complication rates. Cost-minimization analysis showed that LA saved €1561.08 per patient. In our teaching setting, LA may have clinical and economic advantages over open appendectomy.
abstract_id: PUBMED:31523536
Comparison of Open Appendectomy and Laparoscopic Appendectomy in Perforated Appendicitis. Introduction: Laparoscopic appendectomy for nonperforated appendicitis is associated with improved outcomes. This study compares laparoscopic appendectomy and open appendectomy in cases of a perforated appendix by assessing surgical site infection, mean operating time, and length of hospital stay. Materials and methods: This study was a prospective randomized study conducted at the Department of Surgery, Holy Family Hospital, Rawalpindi, Pakistan, from January 2016 to January 2017, by randomly allotting the laparoscopic or the open appendectomy technique to 130 patients by the lottery method. Patients having a perforated appendix were included after they provided informed consent. Data were entered and analyzed using IBM SPSS Statistics for Windows, Version 20.0 (IBM Corp., Armonk, NY, US). Results: The frequency of wound site infection was significantly higher in open appendectomy (27.69%) than in the laparoscopic approach (10.77%; p=0.01). Mean hospital stay was slightly longer in the laparoscopic approach (4.38 ± 1.09 days) than in open appendectomy (4.18 ± 0.77 days; p=0.23). Mean operating time for laparoscopic appendectomy and open appendectomy was 46.98 ± 2.99 minutes and 53.02 ± 2.88 minutes, respectively (p<0.000). Conclusion: Laparoscopic appendectomy was associated with fewer surgical site infections and shorter mean operating time than an open appendectomy.
abstract_id: PUBMED:22148124
Comparison of clinical outcomes and hospital cost between open appendectomy and laparoscopic appendectomy. Purpose: Laparoscopic appendectomy has been recognized to have many advantages, such as better cosmetic results, less postoperative pain and shorter hospital stays. On the other hand, laparoscopic procedures are still more expensive than open procedures in Korea. The aim of this study is to compare clinical outcomes and hospital costs between open appendectomy and laparoscopic appendectomy.
Methods: Between January 1, 2010 and December 31, 2010, 471 patients were diagnosed with acute appendicitis. Of these, 418 patients met the inclusion criteria and were divided into two groups of open appendectomy (OA) group and laparoscopic appendectomy (LA) group. We analyzed the clinical data and hospital costs.
Results: The mean operation time for laparoscopic appendectomy (72.17 minutes) was significantly longer than that of open appendectomy (46.26 minutes) (P = 0.0004). The mean number of intravenous analgesic administrations in the OA group (2.00) was greater than that in the LA group (1.86) (P < 0.0001). The complication rate was similar between the two groups (OA, 6.99% vs. LA, 10.87%; P = 0.3662). The mean length of postoperative hospital stay was shorter in the LA group (OA, 4.55 days vs. LA, 3.60 days; P = 0.0002). The mean total cost covered by the National Health Insurance was higher in the LA group (OA, 1,259,842 won [Korean monetary unit] vs. LA, 1,664,367 won; P = 0.0057).
Conclusion: Clinical outcomes of laparoscopic appendectomy were superior to those of open appendectomy, even though the cost of laparoscopic appendectomy was higher than that of open appendectomy. Whenever surgeons manage a patient with appendicitis, laparoscopic appendectomy should be considered as the procedure of choice.
abstract_id: PUBMED:37489661
Laparoscopic Appendectomy versus Open Appendectomy in Acute Appendicitis. Background: Appendectomy is the most common emergency surgical procedure performed. Appendectomy is performed by either open or laparoscopic methods. However, there is lack of consensus regarding the most appropriate method. This study aimed to compare the outcomes of laparoscopic and open appendectomy in the treatment of acute appendicitis.
Methods: Fifty-two patients undergoing appendectomy were analyzed in this prospective comparative study, with 26 patients each in laparoscopic and open group. The outcomes were measured in terms of operative time, postoperative pain at 4, 6 and 12 hours, length of hospital stay, postoperative complications according to modified Clavien Dindo classification and cost analysis.
Results: The laparoscopic group had a longer time from completion of surgery to exit from the operation theatre (30 min laparoscopic vs. 20 min open, p<0.01) and a significantly higher cost (Nrs. 26295 for laparoscopic and Nrs. 19575 for open, p<0.01) than the open appendectomy group. Operative time, time from entering the operation theatre to being placed on the operating table, time from being placed on the operating table to initiation of anesthesia, postoperative pain at 4, 6 and 12 hours, and postoperative complications did not differ significantly between the two groups.
Conclusions: The results suggest that the laparoscopic appendectomy group had a longer recovery time after the operation and incurred higher costs than the open appendectomy group. Thus, the decision on the operative procedure can be based on the patient's preference.
abstract_id: PUBMED:27582784
Laparoscopic versus open appendectomy: a retrospective cohort study assessing outcomes and cost-effectiveness. Background: Appendectomy is the most common surgical procedure performed in emergency surgery. Because of lack of consensus about the most appropriate technique, appendectomy is still being performed by both open (OA) and laparoscopic (LA) methods. In this retrospective analysis, we aimed to compare the laparoscopic approach and the conventional technique in the treatment of acute appendicitis.
Methods: Retrospectively collected data from 593 consecutive patients with acute appendicitis were studied. These comprised 310 patients who underwent conventional appendectomy and 283 patients treated laparoscopically. The two groups were compared for operative time, length of hospital stay, postoperative pain, complication rate, return to normal activity and cost.
Results: Laparoscopic appendectomy was associated with a shorter hospital stay (2.7 ± 2.5 days in OA and 1.4 ± 0.6 days in LA), with less need for analgesia, and with a faster return to daily activities (11.5 ± 3.1 days in LA and 16.1 ± 3.3 days in OA). Operative time was significantly shorter in the open group (31.36 ± 11.13 min in OA and 54.9 ± 14.2 min in LA). The total number of complications was lower in the LA group, with a significantly lower incidence of wound infection (1.4 % vs 10.6 %, P <0.001). The total cost of treatment was higher by 150 € in the laparoscopic group.
Conclusion: The laparoscopic approach is a safe and efficient operative procedure for appendectomy, and it provides clinically beneficial advantages over the open method (including shorter hospital stay, decreased need for postoperative analgesia, early food tolerance, earlier return to work, and a lower rate of wound infection) at only marginally higher hospital costs.
Trial Registration: NCT02867072 Registered 10 August 2016. Retrospectively registered.
abstract_id: PUBMED:29260130
Comparison of open appendectomy and laparoscopic appendectomy with laparoscopic intracorporeal knotting and glove endobag techniques: A prospective observational study. Objective: Despite the recent increase in the use of laparoscopic appendectomy procedures to treat acute appendicitis, laparoscopic appendectomy is not necessarily the best treatment modality. The aim of this study is to examine the value of laparoscopic intracorporeal knotting and glove endobag in terms of various parameters and in terms of reducing the costs related to laparoscopic appendectomy procedures.
Material And Methods: Seventy-two acute appendicitis patients who underwent laparoscopic appendectomy or open appendectomy were enrolled in the study and evaluated prospectively. The patients were divided into two groups: group 1 was treated with laparoscopic appendectomy using laparoscopic intracorporeal knotting and glove endobag (n=36), and group 2 was treated with open appendectomy (n=36). The two groups were statistically compared in terms of preoperative symptoms and signs, laboratory and imaging findings, operation time and technique, pain score, gas and stool outputs, duration of hospital stay, return to normal activity, and complications.
Results: No statistically significant differences were found between the groups in relation to gender, age, body mass index, or preoperative findings, which included loss of appetite, vomiting, time when pain started, displacement of pain, defense, rebound, imaging methods, and laboratory and pathology examinations (p>0.05). Moreover, there were no differences between the groups with respect to drain usage, hospital stay time, or complications (p>0.05). In contrast, a statistically significant difference between the groups was found in terms of operation time, pain scores, gas-stool outputs, and return to normal activity in the laparoscopic appendectomy group (p=0.001).
Conclusion: Laparoscopic appendectomy can be performed in a facile, safe, and cost-effective manner with laparoscopic intracorporeal knotting and glove endobag. By using these techniques, the use of expensive instruments can be avoided when performing laparoscopic appendectomy.
abstract_id: PUBMED:26770712
Single-incision versus conventional laparoscopic appendectomy: A case-match study. Background: Three-port laparoscopic appendectomy is considered standard in many countries for the surgical treatment of acute appendicitis. Single-incision laparoscopic technique has been recently introduced and is supposed to minimize the aggression induced by surgery. Regarding appendectomy, comparison with standard laparoscopy, benefits and drawbacks of this novel technique remain to be evaluated. The goal of this study was to assess single-incision laparoscopic appendectomy compared to conventional laparoscopic appendectomy in terms of operation time, length of hospital stay, complication rate, and postoperative antibiotherapy rate.
Methods: From February 2011 to December 2011, single-incision laparoscopic appendectomy was proposed to patients admitted to the emergency room of the University Hospital of Lausanne (CHUV, Lausanne, Switzerland), diagnosed with uncomplicated acute appendicitis. Preoperative patients' information, technical difficulties during the operation, and postoperative follow-ups were recorded. Every patient who underwent single-incision laparoscopic appendectomy (n = 20) was matched 1:3 conventional laparoscopic appendectomy (n = 60), controlling for age, gender, body mass index, American Society of Anesthesiologists score, and histopathological findings.
Results: No statistically significant differences for median operation time, length of hospital stay, complication rate, and need for postoperative antibiotherapy were found. In 5 out of 20 single-incision laparoscopic appendectomy patients the Endoloop(®) Ligature was judged difficult to put in place.
Conclusion: This study suggests that single-incision laparoscopic appendectomy is a feasible and effective operative technique for uncomplicated acute appendicitis.
abstract_id: PUBMED:24082738
Suprapubic approach for laparoscopic appendectomy. Objective: To evaluate the results of laparoscopic appendectomy using two suprapubic port incisions placed below the pubic hair line.
Design: Prospective hospital based descriptive study.
Settings: Department of surgery of a tertiary care teaching hospital located in Rohtas district of Bihar. The study was carried out over a period of 11 months, from November 2011 to September 2012.
Participants: Seventy five patients with a diagnosis of acute appendicitis.
Materials And Methods: All patients who underwent laparoscopic appendectomy with three ports (one 10-mm umbilical port for the telescope and two 5-mm suprapubic working ports) were included. Operative time, conversion, complications, hospital stay and cosmetic results were analyzed.
Results: The total number of patients was 75, including 46 (61.33%) females and 29 (38.67%) males. The mean age (± standard deviation, SD) at the time of diagnosis was 30.32 (±8.86) years. Mean operative time was 27.2 (±5.85) min. One (1.33%) patient required conversion to open appendectomy. No patient developed wound infection or any other complication. Mean hospital stay was 22.34 (±12.18) h. Almost all patients were satisfied with their cosmetic results.
Conclusion: A laparoscopic approach using two suprapubic ports yields better cosmetic results and also improves the surgeon's working position during laparoscopic appendectomy. Although this study showed better cosmetic results and a better working position for the surgeon, further comparative studies and randomized controlled trials are needed to confirm our findings.
abstract_id: PUBMED:27182227
Comparison of Laparoscopic Appendectomy with open appendectomy in Treating Children with Appendicitis. Objective: To analyze feasibility and curative effect of laparoscopic appendectomy in the treatment of pediatric appendicitis and compare it with open appendectomy.
Methods: Two hundred and sixty patients were selected for this study and randomly divided into an open appendectomy group (130 cases) and a laparoscopic appendectomy group (130 cases). Patients in the open appendectomy group underwent traditional open appendectomy, while patients in the laparoscopic appendectomy group were treated with laparoscopic appendectomy. Incision length, intraoperative blood loss, duration of operation, time to ambulation, time to first flatus, time to resumption of oral intake, catheter drainage time, urinary catheterization time, duration of antibiotic use, use of pain killers, and the incidence of complications such as incision infection, residual abscess and intestinal obstruction were compared between the two groups.
Results: We found that indicators including incision length, intraoperative blood loss, and duration of operation were better in the laparoscopic appendectomy group than in the open appendectomy group, and the differences were statistically significant (P<0.05). Indicators such as time to ambulation, time to resumption of oral intake, time to first flatus, drainage time, catheterization time, and duration of antibiotic use were also superior in the laparoscopic appendectomy group compared with the open appendectomy group, and the differences were statistically significant (P<0.05). The incidence of complications in the laparoscopic appendectomy group was much lower than in the open appendectomy group, and the difference was statistically significant (P<0.05).
Conclusion: Laparoscopic appendectomy has the advantages of less surgical trauma, a good curative effect, a low incidence of complications and rapid recovery, and it can effectively relieve the pain of children suffering from appendicitis. Hence it is worth promoting and should be preferred.
Answer: The question of whether laparoscopic appendectomy (LA) is affordable in an academic hospital setting has been addressed in several studies with varying conclusions.
One study (PUBMED:16307952) found that while operative time was similar between laparoscopic and open appendectomy, the operative and total hospital charges were significantly higher for the laparoscopic group. The authors concluded that unless patient factors warrant a laparoscopic approach, open appendectomy remains the most cost-effective procedure in a teaching environment.
In contrast, another study (PUBMED:24710238) reported that laparoscopic appendectomy saved money per patient and may have clinical and economic advantages over open appendectomy in their teaching setting.
A prospective randomized study (PUBMED:31523536) found that laparoscopic appendectomy was associated with fewer surgical site infections and shorter mean operating time than open appendectomy, suggesting potential benefits that might offset higher costs.
Another study (PUBMED:22148124) acknowledged that laparoscopic appendectomy was more expensive than open appendectomy but argued that the clinical outcomes of laparoscopic appendectomy were superior, suggesting it should be considered the procedure of choice.
A prospective comparative study (PUBMED:37489661) indicated that laparoscopic appendectomy had longer recovery time after operation and was costlier than open appendectomy, suggesting that the choice of procedure could be based on patient preference.
A retrospective cohort study (PUBMED:27582784) found that laparoscopic appendectomy offered clinically beneficial advantages over the open method, such as shorter hospital stay and lower rate of wound infection, against only marginally higher hospital costs.
A prospective observational study (PUBMED:29260130) concluded that laparoscopic appendectomy can be performed in a cost-effective manner with specific techniques that avoid the use of expensive instruments.
In summary, while some studies suggest that laparoscopic appendectomy is more expensive than open appendectomy, others argue that the clinical benefits and potential cost savings in terms of reduced complications and shorter hospital stays may justify the use of laparoscopic appendectomy in an academic hospital setting. The decision may ultimately depend on balancing the costs with the clinical outcomes and the specific circumstances of the academic institution.
Instruction: Elevated levels of plasma phenylalanine in schizophrenia: a guanosine triphosphate cyclohydrolase-1 metabolic pathway abnormality?
Abstracts:
abstract_id: PUBMED:24465804
Elevated levels of plasma phenylalanine in schizophrenia: a guanosine triphosphate cyclohydrolase-1 metabolic pathway abnormality? Background: Phenylalanine and tyrosine are precursor amino acids required for the synthesis of dopamine, the main neurotransmitter implicated in the neurobiology of schizophrenia. Inflammation, increasingly implicated in schizophrenia, can impair the function of the enzyme phenylalanine hydroxylase (PAH), which catalyzes the conversion of phenylalanine to tyrosine, and thus lead to elevated phenylalanine levels and reduced tyrosine levels. This study aimed to compare phenylalanine, tyrosine, and their ratio (a proxy for PAH function) in a relatively large sample of schizophrenia patients and healthy controls.
Methods: We measured non-fasting plasma phenylalanine and tyrosine in 950 schizophrenia patients and 1000 healthy controls. We carried out multivariate analyses to compare log transformed phenylalanine, tyrosine, and phenylalanine:tyrosine ratio between patients and controls.
Results: Compared to controls, schizophrenia patients had higher phenylalanine (p<0.0001) and a higher phenylalanine:tyrosine ratio (p<0.0001), but tyrosine did not differ between the two groups (p = 0.596).
Conclusions: Elevated phenylalanine and phenylalanine:tyrosine ratio in the blood of schizophrenia patients have to be replicated in longitudinal studies. The results may relate to an abnormal PAH function in schizophrenia that could become a target for novel preventative and interventional approaches.
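The group contrast described in this abstract (log-transformed phenylalanine and phenylalanine:tyrosine ratio compared between patients and controls) can be illustrated with a simple linear model on log-scaled values. The snippet below is a schematic reconstruction only: the data are simulated, the effect sizes and the age covariate are assumptions, and it is not the authors' actual multivariate analysis:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_pat, n_ctl = 950, 1000

# Simulated non-fasting plasma levels (arbitrary units); effect sizes are assumptions.
phe = np.concatenate([rng.lognormal(4.20, 0.25, n_pat),   # patients: slightly higher
                      rng.lognormal(4.10, 0.25, n_ctl)])  # controls
tyr = np.concatenate([rng.lognormal(4.15, 0.25, n_pat),
                      rng.lognormal(4.15, 0.25, n_ctl)])
group = np.array(["patient"] * n_pat + ["control"] * n_ctl)

df = pd.DataFrame({
    "log_phe": np.log(phe),
    "log_ratio": np.log(phe / tyr),            # proxy for PAH function
    "group": group,
    "age": rng.normal(40, 12, n_pat + n_ctl),  # hypothetical covariate
})

# One model per outcome, adjusted for age (the study itself used multivariate analyses).
for outcome in ("log_phe", "log_ratio"):
    fit = smf.ols(f"{outcome} ~ C(group, Treatment('control')) + age", data=df).fit()
    term = "C(group, Treatment('control'))[T.patient]"
    print(outcome, "patient-vs-control coefficient:", round(fit.params[term], 3),
          "p =", f"{fit.pvalues[term]:.3g}")
```

In such a model, a positive patient coefficient for log_phe and log_ratio together with a near-zero coefficient for log tyrosine would mirror the pattern reported above.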
abstract_id: PUBMED:32977055
Acyl-Carnitine plasma levels and their association with metabolic syndrome in individuals with schizophrenia. The metabolic syndrome (MetS) affects individuals with schizophrenia at a higher rate when compared to individuals in the general population. Accumulating evidence indicates that subjects with MetS generally manifest elevated levels of acyl-carnitines, which are important carriers for transporting fatty acyl groups. Abnormalities of acyl-carnitines in individuals with schizophrenia with or without MetS have not been sufficiently characterized. We conducted this post-hoc analysis of our published data to further evaluate the differences in 29 acyl-carnitines between 46 individuals with schizophrenia with MetS and 123 without MetS. The rate of MetS was 27.2% (46/169) in the individuals with schizophrenia. After FDR correction, the individuals with schizophrenia and MetS showed significantly higher levels of 17 plasma acyl-carnitines compared to individuals without MetS. Eight acyl-carnitines (i.e., C3, C4, C5, C6:1, C10:1, C10:2, C14:2-OH, C16:2-OH) were significantly different between the two groups after adjusting for age and sex. The correlation analysis showed that acyl-carnitine concentrations have potential correlations with certain metabolic parameters. Our findings provide valuable new clues for exploring the roles of acyl-carnitines in the diagnosis and treatment of schizophrenia. More data and molecular biology evidence are needed to replicate our findings and elucidate the relevant mechanisms.
abstract_id: PUBMED:34832073
Misregulation of Wnt Signaling Pathways at the Plasma Membrane in Brain and Metabolic Diseases. Wnt signaling pathways constitute a group of signal transduction pathways that direct many physiological processes, such as development, growth, and differentiation. Dysregulation of these pathways is thus associated with many pathological processes, including neurodegenerative diseases, metabolic disorders, and cancer. At the same time, alterations are observed in plasma membrane composition, lipid organization, and ordered membrane domains in brain and metabolic diseases that are associated with Wnt signaling pathway activation. Here, we discuss how plasma membrane components, specifically ligands, (co)receptors, and extracellular or membrane-associated modulators, activate Wnt pathways in several brain and metabolic diseases. Thus, the Wnt-receptor complex can be targeted based on the composition and organization of the plasma membrane, in order to develop effective targeted therapy drugs.
abstract_id: PUBMED:25520919
Metabolic Abnormality and Sleep Disturbance are Associated with Clinical Severity of Patients with Schizophrenia. Patients with schizophrenia suffer from more metabolic and sleep problems, but little is known about the associated risk factors. We recruited 17 patients with chronic schizophrenia from the rehabilitation center of a medical center in Taiwan and measured their demographic data, cognitive performance, physical fitness, metabolic profiles and sleep parameters. They were divided into two groups according to clinical severity and then compared in terms of metabolic and sleep parameters. Those with more severe symptomatology had more metabolic abnormalities and shorter slow wave sleep (SWS). Our findings suggest that clinical symptoms are linked with heavier body weight, wider neck circumference, elevated blood pressure, and shorter SWS. Further studies are warranted to confirm this preliminary finding and to elucidate the underlying mechanism.
abstract_id: PUBMED:35890117
Adverse Drug Reactions in Relation to Clozapine Plasma Levels: A Systematic Review. Clozapine is the gold standard for treatment-resistant schizophrenia. Serious and even life-threatening adverse effects, mostly granulocytopenia, myocarditis, and constipation, are of great clinical concern and constitute a barrier to prescribing clozapine, thus depriving many eligible patients of a lifesaving treatment option. Interestingly, clozapine presents variable pharmacokinetics affected by numerous parameters, leading to significant inter- and intra-individual variation. Therefore, therapeutic drug monitoring of plasma clozapine levels confers a significant benefit in everyday clinical practice by increasing the confidence of the prescribing doctor in the drug and the adherence of the patient to the treatment, mainly by ensuring effective treatment and limiting dose-related side effects. In the present systematic review, we aimed at identifying how a full range of adverse effects relates to plasma clozapine levels, using the Jadad grading system for assessing the quality of the available clinical evidence. Our findings indicate that EEG slowing, obsessive-compulsive symptoms, heart rate variability, hyperinsulinemia, metabolic syndrome, and constipation correlate with plasma clozapine levels, whereas QTc, myocarditis, sudden death, leucopenia, neutropenia, and sialorrhea are rather unrelated. Rapid dose escalation at the initiation of treatment might contribute to the emergence of myocarditis or leucopenia. Strategies for managing adverse effects differ across these conditions and are discussed accordingly.
abstract_id: PUBMED:25610158
The effects of atypical antipsychotic usage duration on serum adiponectin levels and other metabolic parameters. Objective: Although atypical antipsychotics are well-tolerated and effective treatment options for schizophrenia, they have metabolic side effects, including weight gain and increased risk of Type II Diabetes Mellitus (DM). Adiponectin, produced exclusively in adipocytes, is the most abundant serum adipokine. Low levels of adiponectin are correlated with DM, insulin resistance and coronary heart disease. Usage of atypical antipsychotics may create a risk of metabolic syndrome. The aim of this study was to evaluate the effects of antipsychotic usage on parameters related to development of metabolic syndrome.
Materials And Methods: A total of 27 patients (13 women and 14 men) were recruited from our out-patient psychiatry clinic. All patients had been treated with atypical antipsychotics for at least 3 months and were in remission. Patients were evaluated for levels of HDL (High Density Lipoprotein), LDL (Low Density Lipoprotein), TG (Triglyceride), total cholesterol and fasting blood glucose, as well as body weight, BMI (Body Mass Index), waist circumference and serum adiponectin levels.
Results: Serum adiponectin levels were significantly lower (p:0.000) and body weights were significantly higher (p:0.003) in the patients who had been using atypical antipsychotics for longer than a year in comparison to patients who had been using atypical antipsychotics for one year or less.
Conclusion: Our findings supported the hypothesis that the length of administration of atypical antipsychotics has an effect on metabolic changes. They also highlight the fact that when investigating metabolic changes generated by atypical antipsychotic effects, the length of time that the patient has been on the atypical antipsychotics should also be considered.
abstract_id: PUBMED:31081413
Evaluation of plasma agmatine level and its metabolic pathway in patients with bipolar disorder during manic episode and remission period. Objectives: Agmatine is a cationic amine resulting from the decarboxylation of l-arginine. Agmatine has neuroprotective, anti-inflammatory, anti-stress, and anti-depressant properties. In this study, plasma agmatine, arginine decarboxylase, and agmatinase levels were measured during the manic episode and the remission period in patients with bipolar disorder. Methods: Thirty healthy volunteers and 30 patients who met the diagnostic criteria for a manic episode of bipolar disorder were included in the study. Additionally, the changes in the patient group between the manic episode and the remission period were examined. We evaluated the relationship between the levels of l-arginine and arginine decarboxylase in the agmatine synthesis pathway and the level of agmatinase, which degrades agmatine. Results: Levels of agmatine and l-arginine were significantly higher than in the control group during the manic episode (p < .01). All parameters were increased during the manic episode compared to the remission period (p < .05). Agmatinase was significantly decreased both during the manic episode (p < .01) and the remission period (p < .05) in comparison to the control group. Arginine decarboxylase levels did not show a significant difference between the groups (p > .05). Conclusions: This study indicates that there may be a relationship between bipolar disorder and agmatine and its metabolic pathway. Nonetheless, we believe more comprehensive studies are needed in order to reveal the role of agmatine in the etiology of bipolar disorder. Key points: Agmatine, agmatinase, l-arginine and arginine decarboxylase levels in BD have not been explored before. Various neurochemical mechanisms act to increase agmatine in BD; however, agmatine could also have been elevated to compensate for an agmatine deficit prior to the manifestation of the disease, as in schizophrenia. In this way, increased agmatine degradation resulting from excess expression of agmatinase, which has been suggested to play a role in the pathogenesis of mood disorders, would be compensated. Elevated agmatine may be one of the factors that play a role in the development of mania. Elevated agmatine levels have also been suggested to trigger psychosis and may be related to the etiology of the manic episode and lead to BD.
abstract_id: PUBMED:25827962
Plasma adiponectin levels in schizophrenia and role of second-generation antipsychotics: a meta-analysis. Background: People with schizophrenia are more likely than the general population to suffer from metabolic abnormalities, with second-generation antipsychotics (SGAs) increasing the risk. Low plasma adiponectin levels may lead to metabolic dysregulations, but evidence in people with schizophrenia, especially for the role of SGAs, is still inconclusive.
Objective: To compare plasma adiponectin levels between people with schizophrenia and healthy controls, and to estimate the relative effect of schizophrenia and SGAs on adiponectin.
Methods: We performed a systematic review and meta-analysis of observational studies published up to 13 June 2014 in main electronic databases. Pooled standardized mean differences (SMDs) between index and control groups were generated. Appropriate subanalyses and additional subgroup analyses were carried out.
Results: Data from 2735 individuals, 1013 with and 1722 without schizophrenia, respectively, were analysed. Schizophrenia was not associated with lower adiponectin levels (SMD of -0.28, 95%CI: -0.59, 0.04; p=0.09). However, individuals with schizophrenia taking SGAs had plasma levels significantly lower than controls (p=0.002), which was not the case for drug-free/drug-naïve subjects (p=0.52). With regard to single antipsychotic drugs, clozapine (p<0.001) and olanzapine (p=0.04), but not risperidone (p=0.88), were associated with adiponectin levels lower than controls.
Conclusions: People with schizophrenia per se may not have levels of adiponectin lower than controls, though treatment with SGAs is associated with this metabolic abnormality. This bears clinical significance because of hypoadiponectinemia involvement in cardiovascular diseases, even if mechanisms whereby SGAs affect adiponectin remain unexplained. Longitudinal studies evaluating long-term effects of SGAs on adiponectin are needed.
abstract_id: PUBMED:28009525
Effects of Alpha-Lipoic Acid Supplementation on Plasma Adiponectin Levels and Some Metabolic Risk Factors in Patients with Schizophrenia. Adiponectin is an adipocyte-derived plasma protein with insulin-sensitizing and anti-inflammatory properties and is suggested to be a biomarker of metabolic disturbances. The aim of this study was to investigate the effects of alpha-lipoic acid (ALA) on plasma adiponectin and some metabolic risk factors in patients with schizophrenia. The plasma adipokine levels (adiponectin and leptin), routine biochemical and anthropometric parameters, markers of oxidative stress, and the serum phospholipid fatty acid profile were determined in eighteen schizophrenic patients at baseline, in the middle, and at the end of a 3-month supplementation period with ALA (500 mg daily). A significant increase in the plasma adiponectin concentrations, as well as a decrease in fasting glucose and aspartate aminotransferase (AST) activity, was found. Baseline AST activity was independently correlated with the adiponectin concentrations. Our data show that ALA can improve plasma adiponectin levels and may play a potential role in the treatment of metabolic risk factors in patients with schizophrenia. Future randomized controlled trials are needed to confirm these preliminary investigations.
abstract_id: PUBMED:38243017
Soluble terminal complement complex blood levels are elevated in schizophrenia. The role of the complement system in schizophrenia (Sz) is inconclusive due to heterogeneity of the disease and study designs. Here, we assessed the levels of complement activation products and functionality of the classical pathway in acutely ill unmedicated Sz patients at baseline and after 6 weeks of treatment versus matched controls. The study included analyses of the terminal complement complex (sTCC) and C5a in plasma from 96 patients and 96 controls by enzyme-linked immunosorbent assay. Sub-group analysis of serum was conducted for measurement of C4 component and activity of the classical pathway (28 and 24 cases per cohort, respectively). We found no differences in levels of C5a, C4 and classical pathway function in patients versus controls. Plasma sTCC was significantly higher in patients [486 (392-659) ng/mL, n = 96] compared to controls [389 (304-612) ng/mL, n = 96] (p = 0.027, δ = 0.185), but not associated with clinical symptom ratings or treatment. The differences in sTCC between Sz and controls were confirmed using an Aligned Rank Transformation model considering the covariates age and sex (p = 0.040). Additional analysis showed that sTCC was significantly associated with C-reactive protein (CRP; p = 0.006). These findings suggest that sTCC plays a role in Sz as a trait marker of non-specific chronic immune activation, as previously described for CRP. Future longitudinal analyses with more sampling time points from early recognition centres for psychoses may be helpful to better understand the temporal dynamics of innate immune system changes during psychosis development.
Answer: The study reported in PUBMED:24465804 found that compared to healthy controls, schizophrenia patients had higher levels of plasma phenylalanine and an increased phenylalanine:tyrosine ratio, but no difference in tyrosine levels. These findings suggest a potential abnormality in the Phenylalanine hydroxylase (PAH) function, which is responsible for converting phenylalanine to tyrosine. The elevated phenylalanine and phenylalanine:tyrosine ratio in the blood of schizophrenia patients may indicate a guanosine triphosphate cyclohydrolase-1 (GTPCH1) metabolic pathway abnormality, as GTPCH1 is involved in the synthesis of tetrahydrobiopterin (BH4), a cofactor required for PAH activity. However, the study calls for these results to be replicated in longitudinal studies and suggests that this abnormal PAH function could become a target for novel preventative and interventional approaches in schizophrenia. |
Instruction: Multiple primaries in pancreatic cancer patients: indicator of a genetic predisposition?
Abstracts:
abstract_id: PUBMED:11101540
Multiple primaries in pancreatic cancer patients: indicator of a genetic predisposition? Background: The genetic basis of several familial cancers including breast and colon cancers has been identified recently. The occurrence of multiple cancers in one individual is also suggestive of a genetic predisposition. To evaluate inherited predisposition in pancreatic cancer we compared the clinical data of pancreatic cancer patients with and without multiple primaries as well as the frequency of malignancies among their relatives.
Methods: Detailed data on 69 pancreatic cancer patients included survival time and TNM-classification. Index case data were separated into two groups. The first group (group 1) developed only pancreatic cancer during their lifetime, whereas the second group (group 2) developed additional primary tumours. A systematic family history was taken from 59 of these pancreatic cancer patients using a standardized questionnaire. The pancreatic cancers and the multiple primaries of the 59 patients were histologically proven.
Results: Of the 69 pancreatic cancer patients, 13 (18.8%) had multiple primaries. Neither the clinical data nor the survival data of the index cases revealed differences between the two groups (all nominal P-values >0.05). In the family history study blood relatives developed a malignancy in 51% (24 of 47) of the families in group 1 compared to 75% (9 of 12) in group 2. The risk of relatives in group 2 of developing a malignant tumour was significantly higher (P = 0.034) than in group 1 after adjustments for family size and age of disease onset of the index case. The cancer spectrum of the 59 families mainly included tumours of the digestive tract and the reproductive organs.
Conclusions: A multiple primary cancer history is a common condition among pancreatic cancer patients. Relatives of these patients seem to have an increased risk for the development of distinct malignant solid tumours, which might be caused by an inherited predisposition. Clinical and genetic investigation of pancreatic cancer patients with multiple primaries and their families might lead to the identification of predisposing gene defects providing a new goal for the understanding of a shared genetic basis of different solid tumours.
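The family-history comparison in this abstract (malignancy among blood relatives in 51% of group 1 families versus 75% of group 2 families, with the group 2 excess remaining significant after adjustment for family size and age of onset of the index case) is essentially a covariate-adjusted comparison of proportions. The following logistic-regression sketch is purely illustrative: the data are simulated, the covariate distributions and coding are assumptions, and it is not the study's actual model or dataset:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_families = 59  # 47 families in group 1 and 12 in group 2, as in the abstract

df = pd.DataFrame({
    # group 2 = the index case had multiple primaries (hypothetical coding)
    "multiple_primaries": [0] * 47 + [1] * 12,
    "family_size": rng.integers(3, 12, n_families),     # assumed covariate
    "index_age_onset": rng.normal(62, 9, n_families),   # assumed covariate
})
# Simulate whether any blood relative developed a malignancy, with a higher
# baseline probability in group 2 (probabilities chosen only for illustration).
p = np.where(df["multiple_primaries"] == 1, 0.75, 0.51)
df["relative_with_cancer"] = rng.binomial(1, p)

model = smf.logit(
    "relative_with_cancer ~ multiple_primaries + family_size + index_age_onset",
    data=df,
).fit(disp=False)
print(model.summary())
print("adjusted OR for multiple primaries:",
      round(np.exp(model.params["multiple_primaries"]), 2))
```

An adjusted odds ratio above 1 with a small p-value for the multiple_primaries term would correspond to the significantly higher familial cancer risk (P = 0.034) reported for group 2.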
abstract_id: PUBMED:17373213
Pathogenic patterns of genetic predisposition to endocrine tumors. Multiple endocrine neoplasia (MEN) syndromes are major predisposition syndromes for endocrine tumours and are characterised by autosomal dominant inheritance and full penetrance. MEN-1 is a major form of hyperparathyroidism associated with a high prevalence of endocrine tumours of the pancreas, pituitary gland, adrenal cortex and the lymphoid and bronchial endocrine tissues. MEN-2 is the familial syndrome of medullary thyroid carcinoma, associated with pheochromocytoma and hyperparathyroidism. Apart from the clinical expression of their allelic variants, the two syndromes differ in their pathophysiology, in that MEN-2 is related to constitutional activation of the proto-oncogene RET, which encodes a putative tyrosine kinase receptor, while MEN-1 is a tumour suppressor gene model, related to mutations in menin, an adapter protein with multiple intracellular functions. The study of other, rarer forms of predisposition to endocrine tumours, and especially to hyperparathyroidism, has uncovered new genes such as HRPT2, which show that multiple physiological routes, including the close regulation of transcription and genetic stability, may lead to the same clinical outcome. These hereditary models of endocrine cancer contribute as much to further pathophysiological knowledge as to the therapeutic recommendations for managing these syndromes.
abstract_id: PUBMED:27959889
Childhood neuroendocrine tumours: a descriptive study revealing clues for genetic predisposition. Background: Neuroendocrine tumours (NETs) are rare in children and limited data are available. We aimed to specify tumour and patient characteristics and to investigate the role of genetic predisposition in the aetiology of paediatric NETs.
Methods: Using the Dutch Pathology Registry PALGA, we collected patient- and tumour data of paediatric NETs in the Netherlands between 1991 and 2013 (N=483).
Results: The incidence of paediatric NETs in the Netherlands is 5.40 per one million per year. The majority of NETs were appendiceal tumours (N=441; 91.3%). Additional surgery in appendiceal NETs was indicated in 89 patients, but performed in only 27 of these patients. Four out of five patients with pancreatic NETs were diagnosed with Von Hippel-Lindau disease (N=2) and Multiple Endocrine Neoplasia type 1 (N=2). In one patient with an appendiceal NET, Familial Adenomatous Polyposis was diagnosed. On the basis of second primary tumours or other additional diagnoses, involvement of genetic predisposition was suspected in several others.
Conclusions: We identified a significant number of patients with a confirmed or suspected tumour predisposition syndrome and show that paediatric pancreatic NETs in particular are associated with genetic syndromes. In addition, we conclude that treatment guidelines for appendiceal paediatric NETs need revision and improved implementation.
abstract_id: PUBMED:37417291
Geographical, ethnic, and genetic differences in pancreatic cancer predisposition. Pancreatic cancer remains a leading cause of cancer-related mortality worldwide. Treatment outcomes remain largely dismal despite significant medical advancements. This lends urgency to the need to understand its risk factors in order to guide early detection and improve outcomes. There are both modifiable and non-modifiable risk factors, the more established of such being that of age, smoking, obesity, diabetes mellitus (DM), alcohol and certain genetic predisposition syndromes with underlying germline mutations. Some genetic predisposition syndromes such as BRCA1/2, PALB2, ATM, and CDKN2A are well-established, arising from germline mutations that result in carcinogenesis through mechanisms such as cell injury, dysregulation of cell growth, dysfunctional DNA repair, and disruption of cell mobility and adhesion. There is also a significant proportion of familial pancreatic cancer (FPC) for which the underlying predisposing genetic mechanism is not yet understood. Nuances have emerged in the ethnic and geographical differences of pancreatic cancer predisposition, and these may be attributed to differences in lifestyle, standard of living, socioeconomic factors, and genetics. This review describes in detail the factors contributing to pancreatic cancer with focus on ethnic and geographical differences and hereditary genetic syndromes. Greater insight into the interplay of these factors can guide clinicians and healthcare authorities in addressing modifiable risk factors, implementing measures for early detection in high-risk individuals, initiating early treatment of pancreatic cancer, and directing future research towards existing knowledge deficits, in order to improve survival outcomes.
abstract_id: PUBMED:11075991
Multiple primary tumors as an indicator for p16INK4a germline mutations in pancreatic cancer patients? Multiple primary tumors in pancreatic cancer patients might indicate a genetic predisposition to the development of malignancies. In this study we evaluated whether the mutation rate of the TP53 and p16INK4a genes of pancreatic cancers differs in pancreatic cancer patients with and without multiple primaries. Furthermore, we investigated whether pancreatic cancer patients with multiple primaries carry germline mutations in either p16INK4a, TP53, or BRCA2 tumor suppressor genes to detect a genetic alteration that predisposes to the development of different primaries. Fourteen (23%) of 60 pancreatic cancer patients developed histologically verified additional primaries during their lifetimes. Normal constitutional and tumor DNA of the 14 patients with a positive cancer history, but negative family history, were analyzed for p16INK4a, TP53, and BRCA2 mutations by single-strand conformational variant (SSCV) analysis and direct sequencing. Hypermethylation of the p16INK4a promoter region in pancreatic cancers was identified by methylation-specific polymerase chain reaction (PCR; MSP). Four of 14 pancreatic carcinomas carried somatic intragenic p16INK4a mutations, and another four tumors revealed hypermethylation of the p16INK4a promoter region. Somatic intragenic TP53 mutations were identified in six of 14 tumors. None of the pancreatic cancer patients carried TP53 or BRCA2 germline mutations. In contrast, one of 14 pancreatic cancer patients with multiple primaries carried the p16INK4a mutation A68V in his germline. This mutation was localized in the conserved second ankyrin repeat of p16INK4a and did not occur in 100 control patients. The frequency of somatic TP53 and p16INK4a mutations in pancreatic cancer is similar in patients with and without multiple primaries. TP53 and BRCA2 germline mutations seem not to be significantly associated with the occurrence of multiple primaries in pancreatic cancer patients. However, p16INK4a germline mutations might be causative for tumor development in some pancreatic cancer patients with multiple primaries. The genetic investigation of patients with accumulation of different cancers even without a positive family history may be a new approach for the understanding of the relation of different cancers.
abstract_id: PUBMED:25152581
Genetic predisposition to pancreatic cancer. Pancreatic adenocarcinoma (PC) is the most deadly of the common cancers. Owing to its rapid progression and almost certain fatal outcome, identifying individuals at risk and detecting early lesions are crucial to improve outcome. Genetic risk factors are believed to play a major role. Approximately 10% of PC is estimated to have familial inheritance. Several germline mutations have been found to be involved in hereditary forms of PC, including both familial PC (FPC) and PC as one of the manifestations of a hereditary cancer syndrome or other hereditary conditions. Although most of the susceptibility genes for FPC have yet to be identified, next-generation sequencing studies are likely to provide important insights. The risk of PC in FPC is sufficiently high to recommend screening of high-risk individuals; thus, defining such individuals appropriately is the key. Candidate genes have been described and patients considered for screening programs under research protocols should first be tested for presence of germline mutations in the BRCA2, PALB2 and ATM genes. In specific PC populations, including in Italy, hereditary cancer predisposition genes such as CDKN2A also explain a considerable fraction of FPC.
abstract_id: PUBMED:35570472
Genetic testing for hereditary predisposition to breast cancer in the real world: Initial experience. Background: Around 5%-10% of breast cancers are due to hereditary breast and ovarian cancer syndrome. Genetic testing is important to identify these cases, enabling the adoption of specific risk-reducing treatment strategies.
Objective: To analyze the performance of genetic testing and its implications in patients with indication of genetic testing to identify hereditary predisposition to breast cancer.
Methods: This is a retrospective observational cross-sectional study, including 176 patients with clinical indication of genetic testing for pathogenic variants related to breast, ovarian and pancreatic cancers (among others), managed from 1999 to 2021 in an Oncology private clinic located in the city of Teresina (PI), Brazil.
Results: There was a predominance of female patients (98.9%) and of those with a family (91.0%) and personal (64.2%) history of cancer. In the study, 102 patients (57.9%) received genetic testing. BRCA1 and BRCA2 pathogenic variants occurred in 26 cases (90% of the pathogenic variants detected); another three pathogenic variants were found in PALB2 and TP53. Eleven pathogenic variant carriers (38%) underwent risk-reducing surgeries.
Conclusions: BRCA1/BRCA2 pathogenic variants occurred in around 25% of tested patients. Approximately 42.0% of the patients did not undergo genetic testing, despite clinical indication.
abstract_id: PUBMED:15264268
Inherited predisposition to cancer: a historical overview. The hereditary predisposition to cancer dates historically to interest piqued by physicians as well as family members wherein striking phenotypic features were shown to cluster in families, inclusive of the rather grotesque cutaneous findings in von Recklinghausen's neurofibromatosis, which date back to the sixteenth century. The search for the role of primary genetic factors was heralded by studies at the infrahuman level, particularly on laboratory mouse strains with strong susceptibility to carcinogen-induced cancer, and conversely, with resistance to the same carcinogens. These studies, developed in the 19th and 20th centuries, continue today. This article traces the historical aspects of hereditary cancer dealing with identification and ultimate molecular genetic confirmation of commonly occurring cancers, particularly of the colon in the case of familial adenomatous polyposis and its attenuated form, both due to the APC germline mutation; the Lynch syndrome due to mutations in mismatch repair genes, the most common of which were found to be MSH2, MLH1, and MSH6 germline mutations; the hereditary breast-ovarian cancer syndrome with BRCA1 and BRCA2 germline mutations; the Li-Fraumeni (SBLA) syndrome due to the p53 mutation; and the familial atypical multiple mole melanoma in association with pancreatic cancer due to the CDKN2A (p16) germline mutation. These and other hereditary cancer syndromes have been discussed in some detail relevant to their characterization, which, for many conditions, took place in the late 18th century and, in the more modern molecular genetic era, during the past two decades. Emphasis has been placed upon the manner in which improved cancer control will emanate from these discoveries.
abstract_id: PUBMED:32178688
CTLA-4 polymorphisms and predisposition to digestive system malignancies: a meta-analysis of 31 published studies. Background: The results of genetic association studies regarding cytotoxic T lymphocyte-associated antigen 4 (CTLA-4) polymorphisms and digestive system malignancies were controversial. The authors designed this meta-analysis to more precisely estimate relationships between CTLA-4 polymorphisms and digestive system malignancies by pooling the results of related studies.
Methods: The authors searched PubMed, Embase, Web of Science, and CNKI for eligible studies. Thirty-one eligible studies were pooled analyzed in this meta-analysis.
Results: The pooled meta-analysis results showed that genetic distributions of rs231775, rs4553808, and rs733618 polymorphisms among patients with digestive system malignancies and controls differed significantly. Moreover, genotypic distribution differences were also observed for rs231775 polymorphism among patients with colorectal cancer/pancreatic cancer and controls, for rs4553808 and rs5742909 polymorphisms among patients with gastric cancer and controls, for rs3087243 polymorphism among patients with liver cancer and controls, and for rs733618 polymorphism among patients with colorectal cancer and controls in pooled meta-analyses.
Conclusions: This meta-analysis suggested that rs231775 polymorphism was associated with predisposition to colorectal cancer and pancreatic cancer, rs4553808 and rs5742909 polymorphisms were associated with predisposition to gastric cancer, rs3087243 polymorphism was associated with predisposition to liver cancer, and rs733618 polymorphism was associated with predisposition to colorectal cancer.
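The abstract above reports pooled associations without showing the pooling step. As a purely illustrative, hedged sketch of how a fixed-effect meta-analysis combines per-study odds ratios by inverse-variance weighting — the 2x2 counts below are invented placeholders, not data from the CTLA-4 studies — the calculation looks roughly like this:

```python
import math

# Hypothetical per-study 2x2 counts (cases with/without the risk genotype,
# controls with/without). Placeholder numbers only, not values from the
# cited meta-analysis.
studies = [
    (45, 55, 30, 70),
    (60, 90, 40, 110),
    (25, 35, 20, 50),
]

log_ors, weights = [], []
for a, b, c, d in studies:
    log_or = math.log((a * d) / (b * c))     # per-study log odds ratio
    var = 1 / a + 1 / b + 1 / c + 1 / d      # Woolf variance of log(OR)
    log_ors.append(log_or)
    weights.append(1 / var)                  # inverse-variance weight

pooled = sum(w * x for w, x in zip(weights, log_ors)) / sum(weights)
se = math.sqrt(1 / sum(weights))
ci = (math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se))
print(f"Pooled OR = {math.exp(pooled):.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")
```

Random-effects models (e.g., DerSimonian-Laird), which such meta-analyses often also report, add a between-study variance component to these weights.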
abstract_id: PUBMED:28283772
Identification, genetic testing, and management of hereditary melanoma. Several distinct melanoma syndromes have been defined, and genetic tests are available for the associated causative genes. Guidelines for melanoma genetic testing have been published as an informal "rule of twos and threes," but these guidelines apply to CDKN2A testing and are not intended for the more recently described non-CDKN2A melanoma syndromes. In order to develop an approach for the full spectrum of hereditary melanoma patients, we have separated melanoma syndromes into two types: "melanoma dominant" and "melanoma subordinate." Syndromes in which melanoma is a predominant cancer type are considered melanoma dominant, although other cancers, such as mesothelioma or pancreatic cancers, may also be observed. These syndromes are associated with defects in CDKN2A, CDK4, BAP1, MITF, and POT1. Melanoma-subordinate syndromes have an increased but lower risk of melanoma than that of other cancer(s) seen in the syndrome, such as breast and ovarian cancer or Cowden syndrome. Many of these melanoma-subordinate syndromes are associated with well-established predisposition genes (e.g., BRCA1/2, PTEN). It is likely that these predisposition genes are responsible for the increased susceptibility to melanoma as well but with lower penetrance than that observed for the dominant cancer(s) in those syndromes. In this review, we describe our extension of the "rule of twos and threes" for melanoma genetic testing. This algorithm incorporates an understanding of the spectrum of cancers and genes seen in association with melanoma to create a more comprehensive and tailored approach to genetic testing.
Answer: Yes, multiple primaries in pancreatic cancer patients can be an indicator of a genetic predisposition. A study found that 18.8% of pancreatic cancer patients had multiple primaries, and relatives of these patients had a significantly higher risk of developing a malignancy, suggesting an inherited predisposition (PUBMED:11101540). Genetic predisposition syndromes like Multiple Endocrine Neoplasia (MEN) types 1 and 2 are characterized by a high prevalence of endocrine tumors, including those of the pancreas, indicating a genetic basis for these cancers (PUBMED:17373213). In pediatric patients, neuroendocrine tumors (NETs) have been associated with genetic syndromes such as Von Hippel-Lindau disease and Multiple Endocrine Neoplasia type 1, further supporting the role of genetic predisposition (PUBMED:27959889).
Moreover, certain genetic predisposition syndromes, including BRCA1/2, PALB2, ATM, and CDKN2A, have been established as contributing to pancreatic cancer risk through mechanisms like dysfunctional DNA repair and dysregulation of cell growth (PUBMED:37417291). A study evaluating TP53 and p16INK4a genes in pancreatic cancer patients with multiple primaries found that while TP53 and BRCA2 germline mutations were not significantly associated, p16INK4a germline mutations might be causative for tumor development in some patients (PUBMED:11075991).
Approximately 10% of pancreatic cancer is estimated to have familial inheritance, with several germline mutations involved in hereditary forms of the disease. Genetic testing is recommended for high-risk individuals, particularly for mutations in BRCA2, PALB2, ATM, and CDKN2A genes (PUBMED:25152581). Genetic testing for hereditary predisposition to breast cancer, which is related to pancreatic cancer, has identified pathogenic variants in BRCA1/BRCA2 and other genes, highlighting the importance of genetic testing in identifying individuals at risk (PUBMED:35570472).
Instruction: Cerebral aneurysm pulsation: do iterative reconstruction methods improve measurement accuracy in vivo?
Abstracts:
abstract_id: PUBMED:24970550
Cerebral aneurysm pulsation: do iterative reconstruction methods improve measurement accuracy in vivo? Background And Purpose: Electrocardiogram-gated 4D-CTA is a promising technique allowing new insight into aneurysm pathophysiology and possibly improving risk prediction of cerebral aneurysms. Due to the extremely small pulsational excursions (<0.1 mm in diameter), exact segmentation of the aneurysms is of critical importance. In vitro examinations have shown improvement of the accuracy of vessel delineation by iterative reconstruction methods. We hypothesized that this improvement shows a measurable effect on aneurysm pulsations in vivo.
Materials And Methods: Ten patients with cerebral aneurysms underwent 4D-CTA. Images were reconstructed with filtered back-projection and iterative reconstruction. The following parameters were compared between both groups: image noise, absolute aneurysm volumes, pulsatility, and sharpness of aneurysm edges.
Results: In iterative reconstruction images, noise was significantly reduced (mean, 9.8 ± 4.0 Hounsfield units versus 8.0 ± 2.5 Hounsfield units; P = .04), but the sharpness of aneurysm edges just missed statistical significance (mean, 3.50 ± 0.49 mm versus 3.42 ± 0.49 mm; P = .06). Absolute volumes (mean, 456.1 ± 775.2 mm³ versus 461.7 ± 789.9 mm³; P = .31) and pulsatility (mean, 1.099 ± 0.088 mm³ versus 1.095 ± 0.082 mm³; P = .62) did not show a significant difference between iterative reconstruction and filtered back-projection images.
Conclusions: CT images reconstructed with iterative reconstruction methods show a tendency toward shorter vessel edges but do not affect absolute aneurysm volumes or pulsatility measurements in vivo.
abstract_id: PUBMED:7123484
Ex vivo renal artery reconstruction with autotransplantation. Ex vivo renal artery reconstruction with autotransplantation was performed on 34 occasions in 33 patients over the past 10 years. The cause of the renal artery disease was fibromuscular disease in 26 patients, arteriosclerosis or reoperation in 5 patients, acute dissection of the thoracic and abdominal aorta in 1 patient, and renal artery aneurysm in a single kidney in 1 patient. All patients were thought to be inoperable by in situ reconstruction. Many patients were treated with a combination of methods, including bilateral ex vivo reconstruction, unilateral in situ and contralateral ex vivo reconstruction, and unilateral ex vivo reconstruction with contralateral nephrectomy. Arterial autografts were used in all but one patient to replace the diseased segment of renal artery. Follow-up was from 6 months to 10 years. The following results were obtained. One patient died 7 days after surgery from a ruptured berry aneurysm, and one patient required nephrectomy 6 months after reconstruction because of restenosis. There was no morbidity in the remaining patients, whose results were as follows: twenty patients were classified as having excellent results, seven as good, two as fair, and two as poor. Combining the excellent and good groups showed an 86% cure or considerable improvement rate. These results suggest that ex vivo renal artery reconstruction is an effective and safe method of treating renal vascular hypertension when indicated.
abstract_id: PUBMED:15940064
Quantitative evaluation of measurement accuracy for three-dimensional angiography system using various phantoms. Purpose: The purpose of this study was to evaluate the spatial resolution and accuracy of three-dimensional (3D) distance measurements performed with 3D angiography using various phantoms.
Materials And Methods: With a 3D angiography system, digital images with a 512 x 512 matrix were obtained with the C-arm sweep, which rotates at a speed of 30 degrees/second. A 3D comb phantom was designed to assess spatial resolution and artifacts at 3D angiography and consisted of six combs with different pitches: 0.5 mm, 0.6 mm, 0.7 mm, 0.8 mm, 0.9 mm, and 1.0 mm. Frame rate, field of view (FOV) size, reconstruction matrix, and direction of the phantom were changed. In order to investigate the accuracy of 3D distance measurements, aneurysm phantoms and stenosis phantoms were used. Aneurysm phantoms simulated intracranial saccular aneurysms and parent arteries; 2-mm- or 4-mm-inner-diameter cylinder and five different spheres (diameter: 10, 7, 5, 3, 2 mm) were used. Stenosis phantoms were designed to simulate intracranial steno-occlusive diseases; the nonpulsatile phantoms were made of four cylinders (diameter: 3.0, 3.6, 4.0, 5.0 mm) that had areas of 50% and 75% stenosis. The dimensions of the spheres and cylinders were measured on magnified multiplanar reconstruction (MPR) images.
Results: The pitch of the 0.5 mm comb phantom was identified clearly on 3D images reconstructed with a frame rate of 30 frames/sec and the 512³ reconstruction mode. For all reconstruction matrixes and all angles of the phantom, resolution and artifacts worsened when frame rates were decreased. With regard to the angle of the phantom to the axis of rotational angiography, spatial resolution and artifacts worsened with increasing angle. Spatial resolution and artifacts were better with a FOV of 7 x 7 inches than with one of 9 x 9 inches. All spheres on the aneurysm phantom were clearly demonstrated at any angle; measurement error of sphere size was 0.3 mm or less for 512³ reconstruction. In 512³ reconstruction, the error of percent stenosis was 3% or less, except for the 3.0-mm-diameter cylinder, for which it was 5%.
Conclusion: Spatial resolution of the reconstructed 3D images in this system was 0.5 mm or less. Measurement error of sphere size was 0.3 mm or less when 512³ reconstruction was used. When using proper imaging parameters and postprocessing methods, measurements of aneurysm size and percent stenosis on the reconstructed 3D angiograms were substantially reliable.
abstract_id: PUBMED:9862002
Second-generation three-dimensional reconstruction for rotational three-dimensional angiography. Rationale And Objectives: The purpose of this study was to assess the feasibility and accuracy of three-dimensional (3D) reconstruction techniques for digital subtraction angiography (DSA) in planning and evaluation of minimally invasive image-controlled therapy.
Materials And Methods: Using a standard, commercially available system, the authors acquired DSA images and corrected them for inherent distortions. They designed and implemented parallel and multiresolution versions of cone-beam reconstruction techniques to reconstruct high-resolution targeted volumes in a short period of time. Testing was performed on anatomically correct, calibrated in vitro models of a cerebral aneurysm. These models were used with a pulsatile circulation circuit to allow for blood flow simulation during DSA, computed tomographic (CT) angiography, and magnetic resonance (MR) angiography image acquisitions.
Results: The multiresolution DSA-based reconstruction protocol and its implementation allowed the authors to achieve reconstruction times and levels of accuracy for the volume measurement of the aneurysmal cavities that were considered compatible with actual clinical practice. Comparison with data obtained from other imaging modalities shows that, besides vascular tree depiction, the DSA-based true 3D technique provides volume estimates at least as good as those obtained from CT and MR angiography.
Conclusion: The authors demonstrated the feasibility and potential of true 3D reconstruction for angiographic imaging with DSA. On the basis of the model testing, this work addresses both the timing and quantification required to support minimally invasive image-controlled therapy.
abstract_id: PUBMED:37541083
Systematic review of adherence to the standards for reporting of diagnostic accuracy studies (STARD) 2015 reporting guideline in cerebral aneurysm imaging diagnostic accuracy studies. Background: Diagnostic neuroimaging plays an essential role in guiding clinical decision-making in the management of patients with cerebral aneurysms. Imaging technologies for investigating cerebral aneurysms constantly evolve, and clinicians rely on the published literature to remain up to date. Reporting guidelines have been developed to standardise and strengthen the reporting of clinical evidence. Therefore, it is essential that radiological diagnostic accuracy studies adhere to such guidelines to ensure completeness of reporting. Incomplete reporting hampers the reader's ability to detect bias, determine generalisability of study results or replicate investigation parameters, detracting from the credibility and reliability of studies.
Objective: The purpose of this systematic review was to evaluate adherence to the Standards for Reporting of Diagnostic Accuracy Studies (STARD) 2015 reporting guideline amongst imaging diagnostic accuracy studies for cerebral aneurysms.
Methods: A systematic search for cerebral aneurysm imaging diagnostic accuracy studies was conducted. Included studies were cross-examined against the STARD 2015 checklist, and their compliance with each checklist item was recorded.
Results: The search yielded 66 articles. The mean number of STARD items reported was 24.2 ± 2.7 (71.2% ± 7.9%), with a range of 19 to 30 out of a maximum number of 34 items.
Conclusion: Taken together, these results indicate that adherence to the STARD 2015 guideline in cerebral aneurysm imaging diagnostic accuracy studies was moderate. Measures to improve compliance include mandating STARD 2015 adherence in instructions to authors issued by journals.
abstract_id: PUBMED:36337881
High-resolution medical image reconstruction based on residual neural network for diagnosis of cerebral aneurysm. Objective: Cerebral aneurysms are classified as severe cerebrovascular diseases due to hidden and critical onset, which seriously threaten life and health. An effective strategy to control intracranial aneurysms is the regular diagnosis and timely treatment by CT angiography (CTA) imaging technology. However, unpredictable patient movements make it challenging to capture sub-millimeter-level ultra-high resolution images in a CTA scan. In order to improve the doctor's judgment, it is necessary to improve the clarity of the cerebral aneurysm medical image algorithm.
Methods: This paper mainly focuses on researching a three-dimensional medical image super-resolution algorithm applied to cerebral aneurysms. Although some scholars have proposed super-resolution reconstruction methods, there are problems such as poor effect and too much reconstruction time. Therefore, this paper designs a lightweight super-resolution network based on a residual neural network. The residual block structure removes the B.N. layer, which can effectively solve the gradient problem. Considering the high-resolution reconstruction needs to take the complete image as the research object and the fidelity of information, this paper selects the channel domain attention mechanism to improve the performance of the residual neural network.
Results: The new data set of cerebral aneurysms in this paper was obtained by CTA imaging technology of patients in the Department of neurosurgery, the second affiliated of Guizhou Medical University Hospital. The proposed model was evaluated from objective evaluation, model effect, model performance, and detection comparison. On the brain aneurysm data set, we tested the PSNR and SSIM values of 2 and 4 magnification factors, and the scores of our method were 33.01, 28.39, 33.06, and 28.41, respectively, which were better than those of the traditional SRCNN, ESPCN and FSRCNN. Subsequently, the model is applied to practice in this paper, and the effect, performance index and diagnosis of auxiliary doctors are obtained. The experimental results show that the high-resolution image reconstruction model based on the residual neural network designed in this paper plays a more influential role than other image classification methods. This method has higher robustness, accuracy and intuition.
Conclusion: With the wide application of CTA images in the clinical diagnosis of cerebral aneurysms and the increasing number of application samples, this method is expected to become an additional diagnostic tool that can effectively improve the diagnostic accuracy of cerebral aneurysms.
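The methods paragraph above names two architectural choices — residual blocks with the batch normalization layer removed and a channel-domain attention mechanism — without implementation detail. The following is only a generic sketch of what such a building block might look like in PyTorch, assuming 3D convolutions for volumetric CTA data; the channel width, reduction ratio, and squeeze-and-excitation form of the attention are assumptions, not the authors' published configuration.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (assumed form)."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)           # per-channel global average
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c = x.shape[:2]
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1, 1)
        return x * w                                   # re-weight feature channels

class ResidualBlockNoBN(nn.Module):
    """Residual block without batch normalization, followed by channel attention."""
    def __init__(self, channels: int = 32):
        super().__init__()
        self.conv1 = nn.Conv3d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv3d(channels, channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)
        self.attn = ChannelAttention(channels)

    def forward(self, x):
        out = self.conv2(self.relu(self.conv1(x)))     # note: no BN layers
        out = self.attn(out)
        return x + out                                 # identity skip connection

# Example: apply one block to a small random CTA-like feature volume.
block = ResidualBlockNoBN(channels=32)
features = torch.randn(1, 32, 16, 16, 16)              # (batch, channels, D, H, W)
print(block(features).shape)                            # torch.Size([1, 32, 16, 16, 16])
```

A full super-resolution network would stack several such blocks and end with an upsampling stage (for example, transposed convolution or pixel shuffle) sized to the 2x or 4x magnification factors reported above.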
abstract_id: PUBMED:10094344
In vitro and in vivo comparison of three MR measurement methods for calculating vascular shear stress in the internal carotid artery. Background And Purpose: Vascular abnormalities, such as atherosclerosis and the growth and rupture of cerebral aneurysms, result from a derangement in tissue metabolism and injury that are, in part, regulated by hemodynamic stress. The purpose of this study was to establish the feasibility and accuracy of determining wall shear rate in the internal carotid artery from phase-contrast MR data.
Methods: Three algorithms were used to generate shear rate estimates from both ungated and cardiac-gated 2D phase-contrast data. These algorithms were linear extrapolation (LE), linear estimation with correction for wall position (LE*), and quadratic extrapolation (QE). In vitro experiments were conducted by using a phantom under conditions of both nonpulsatile and pulsatile flow. The findings from five healthy volunteers were also studied. MR imaging-derived shear rates were compared with values calculated by solving the fluid flow equations.
Results: Findings of in vitro constant-flow experiments indicated that at one or two excitations, QE has the advantage of good accuracy and low variance. Results of in vitro pulsatile flow experiments showed that neither LE* nor QE differed significantly from the predicted value of wall shear stress, despite errors of 17% and 22%, respectively. In vivo data showed that QE did not differ significantly from the predicted value, whereas LE and LE* did. The percentages of errors for QE, LE, and LE* in vivo measurements were 98.5%, 28.5%, and 36.1%, respectively. The average residual of QE was low because the residuals were both above and below baseline whereas, on average, LE* tended to be a more biased overestimator of the shear rate in volunteers. The average and peak wall shear force in five volunteers were approximately 8.10 dyne/cm² and 13.2 dyne/cm², respectively.
Conclusion: Our findings show that LE consistently underestimates the shear rate. Although LE* and QE may be used to estimate shear rate, errors of up to 36% should be expected because of variance above and below the true value for individual measurements.
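The three estimators compared above (LE, LE*, QE) all fit the near-wall velocity samples and take the fitted profile's derivative at the wall as the shear rate. The snippet below is a hedged numerical illustration on a synthetic parabolic (Poiseuille-like) profile — the radius, velocity, and sample positions are invented, not the study's MR data — showing why a quadratic fit recovers the wall shear rate while a straight-line fit underestimates it:

```python
import numpy as np

# Synthetic near-wall velocities from a parabolic (Poiseuille) profile.
# r is distance from the wall in cm; velocities in cm/s. Placeholder values.
R, v_max = 0.3, 40.0                        # vessel radius (cm), centerline velocity (cm/s)
r = np.array([0.025, 0.075, 0.125])         # sample positions near the wall
v = v_max * (1.0 - ((R - r) / R) ** 2)      # analytic velocity at those positions

# Quadratic extrapolation (QE): fit v(r) = a*r^2 + b*r + c, then the wall
# shear rate is dv/dr at r = 0, i.e. the linear coefficient b.
a, b, c = np.polyfit(r, v, deg=2)
shear_qe = b                                 # units: 1/s

# Linear extrapolation (LE): slope of a straight-line fit to the same samples.
slope, intercept = np.polyfit(r, v, deg=1)
shear_le = slope

shear_true = 2.0 * v_max / R                 # analytic Poiseuille wall shear rate
print(f"QE: {shear_qe:.1f} 1/s, LE: {shear_le:.1f} 1/s, true: {shear_true:.1f} 1/s")
```

On this synthetic profile the quadratic fit returns the analytic wall value (roughly 267 1/s here), while the linear fit comes in low, consistent with the abstract's finding that LE consistently underestimates the shear rate.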
abstract_id: PUBMED:29521585
Accuracy of detecting enlargement of aneurysms using different MRI modalities and measurement protocols. Objective: Aneurysm growth is considered predictive of future rupture of intracranial aneurysms. However, how accurately neuroradiologists can reliably detect incremental aneurysm growth using clinical MRI is still unknown. The purpose of this study was to assess the agreement rate of detecting aneurysm enlargement employing generally used MRI modalities.
Methods: Three silicone flow phantom models, each with 8 aneurysms of various sizes at different sites, were used in this study. The aneurysm models were identical except for an incremental increase in the sizes of the 8 aneurysms, which ranged from 0.4 mm to 2 mm. The phantoms were imaged on 1.5-T and 3-T MRI units with both time-of-flight (TOF) and contrast-enhanced MR angiography. Three independent expert neuroradiologists measured the aneurysms in a blinded manner using different measurement approaches. The individual and agreement detection rates of aneurysm enlargement among the 3 experts were calculated.
Results: The mean detection rate of any increase in any aneurysmal dimension was 95.7%. The detection rates of the 3 observers (observers A, B, and C) were 98.0%, 96.6%, and 92.7%, respectively (p = 0.22). The detection rates of each MRI modality were 91.3% using 1.5-T TOF, 97.2% using 1.5-T with Gd, 95.8% using 3.0-T TOF, and 97.2% using 3.0-T with Gd (p = 0.31). On the other hand, the mean detection rate for aneurysm enlargement was 54.8%. Specifically, the detection rates of observers A, B, and C were 49.0%, 46.1%, and 66.7%, respectively (p = 0.009). As the incremental enlargement value increased, the detection rate for aneurysm enlargement increased. The use of 1.5-T Gd improved the detection rate for small incremental enlargement (e.g., 0.4–1 mm) of the aneurysm (p = 0.04). The location of the aneurysm also affected the detection rate for aneurysm enlargement (p < 0.0001).
Conclusions: The detection rate and interobserver agreement were very high for aneurysm enlargement of 0.4–2 mm. The detection rate for at least 1 increase in any aneurysm dimension did not depend on the choice of MRI modality or measurement protocol. Use of Gd improved the accuracy of measurement. Aneurysm location may influence the accuracy of detecting enlargement.
abstract_id: PUBMED:31751234
4D Flow MRI Pressure Estimation Using Velocity Measurement-Error-Based Weighted Least-Squares. This work introduces a 4D flow magnetic resonance imaging (MRI) pressure reconstruction method which employs weighted least-squares (WLS) for pressure integration. Pressure gradients are calculated from the velocity fields, and velocity errors are estimated from the velocity divergence for incompressible flow. Pressure gradient errors are estimated by propagating the velocity errors through Navier-Stokes momentum equation. A weight matrix is generated based on the pressure gradient errors, then employed for pressure reconstruction. The pressure reconstruction method was demonstrated and analyzed using synthetic velocity fields as well as Poiseuille flow measured using in vitro 4D flow MRI. Performance of the proposed WLS method was compared to the method of solving the pressure Poisson equation which has been the primary method used in the previous studies. Error analysis indicated that the proposed method is more robust to velocity measurement errors. Improvement on pressure results was found to be more significant for the cases with spatially-varying velocity error level, with reductions in error ranging from 50% to over 200%. Finally, the method was applied to flow in patient-specific cerebral aneurysms. Validation was performed with in vitro flow data collected using Particle Tracking Velocimetry (PTV) and in vivo flow measurement obtained using 4D flow MRI. Pressure calculated by WLS, as opposed to the Poisson equation, was more consistent with the flow structures and showed better agreement between the in vivo and in vitro data. These results suggest the utility of WLS method to obtain reliable pressure field from clinical flow measurement data.
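The abstract describes the WLS pipeline — pressure gradients from measured velocities, error estimates propagated into weights, then weighted integration — without showing the linear-algebra step. Below is a hedged one-dimensional toy sketch of that core step: noisy pressure-gradient samples with spatially varying reliability are integrated by solving a weighted finite-difference least-squares system. The grid, noise model, and weights are invented for illustration; the paper's actual method derives its weights from velocity divergence and the Navier-Stokes momentum equation on 3D flow fields.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1D toy problem: a known pressure field p(x) on a uniform grid.
n, dx = 50, 0.01                                     # grid points, spacing (m)
x = np.arange(n) * dx
p_true = 1e3 * np.sin(2 * np.pi * x / (n * dx))      # Pa

# "Measured" pressure gradients with spatially varying noise, mimicking
# velocity-derived gradients whose reliability differs across the field.
g_true = np.gradient(p_true, dx)
noise_sd = 1e3 * (0.05 + 0.20 * (x / x[-1]))         # noisier toward one end
g_meas = g_true + rng.normal(0.0, noise_sd)

# Forward-difference operator G mapping node pressures to cell gradients.
G = (np.eye(n, k=1) - np.eye(n))[:-1] / dx           # shape (n-1, n)
g_cell = 0.5 * (g_meas[:-1] + g_meas[1:])            # gradient per cell
w = 1.0 / (0.5 * (noise_sd[:-1] + noise_sd[1:])) ** 2  # inverse-variance weights

# Pressure is defined only up to a constant, so pin node 0 to zero, then
# solve the weighted system  min_p || W^(1/2) (G p - g) ||^2.
A = np.vstack([G * np.sqrt(w)[:, None], np.eye(1, n)])
rhs = np.concatenate([g_cell * np.sqrt(w), [0.0]])
p_wls, *_ = np.linalg.lstsq(A, rhs, rcond=None)

print(f"max reconstruction error: {np.max(np.abs(p_wls - p_true)):.1f} Pa")
```

This down-weighting of unreliable gradient samples, as opposed to the uniform treatment implicit in a pressure Poisson solve, is the mechanism behind the robustness to velocity errors reported in the abstract.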
abstract_id: PUBMED:30842308
In vitro accuracy and inter-observer reliability of CT angiography in detecting intracranial aneurysm enlargement. Background And Purpose: To evaluate the accuracy and inter-observer variability when CT angiography is used to identify unruptured intracranial aneurysm growth.
Methods: Two silicone phantom models were used in this study. Each phantom had eight aneurysms of variable size. The size and location of aneurysms in phantom 1 were representative of real patient aneurysms who presented to our institution. Phantom 2 contained aneurysms in the same locations, but with enlargement in various directions. Three blinded board-certified neuroradiologists were asked to identify the size of each aneurysm in three dimensions using CT angiography. The individual enlargement detection rates and inter-observer agreement rates of aneurysm enlargement among the three experts were calculated.
Results: The detection rate of aneurysm enlargement in one dimension was 58.3% among the three observers. Accurate detection of enlargement in all dimensions was 12.5% among the three observers. Detection accuracy was not related to the size of enlargement. Significant inter-observer measurement variability was present.
Conclusion: The use of CT angiography was associated with a poor ability to identify aneurysm enlargement accurately. Further human studies are required to confirm our findings.
Answer: The study described in abstract PUBMED:24970550 investigated whether iterative reconstruction methods improve the accuracy of measuring cerebral aneurysm pulsations in vivo. The study involved ten patients with cerebral aneurysms who underwent 4D-CTA imaging. The images were reconstructed using both filtered back-projection and iterative reconstruction methods. The researchers compared image noise, absolute aneurysm volumes, pulsatility, and sharpness of aneurysm edges between the two groups.
The results showed that images reconstructed with iterative reconstruction methods had significantly reduced noise (P = .04). However, the sharpness of aneurysm edges only showed a tendency toward improvement, which just missed statistical significance (P = .06). There were no significant differences between the two reconstruction methods in terms of absolute aneurysm volumes (P = .31) and pulsatility measurements (P = .62).
In conclusion, while iterative reconstruction methods did reduce image noise, they did not significantly affect the sharpness of aneurysm edges, absolute aneurysm volumes, or pulsatility measurements in vivo. Therefore, based on this study, iterative reconstruction methods do not appear to improve the accuracy of measuring cerebral aneurysm pulsations in vivo.
Instruction: Do physicians address other medical problems during preventive gynecologic visits?
Abstracts:
abstract_id: PUBMED:24390881
Do physicians address other medical problems during preventive gynecologic visits? Background: The patient-centered medical home model may be a strategic approach to improve delivery of women's health care and consistently provide women with accessible and comprehensive care. We examined whether primary care physicians (family medicine, internal medicine, and hospital general medicine clinics) and obstetrician-gynecologists differ in scope and the number of medical issues addressed during preventive gynecologic visits.
Methods: We analyzed data from the National Ambulatory Medical Care Survey and National Hospital Ambulatory Medical Care Survey to characterize visits with a primary diagnosis of gynecological examination or routine cervical Papanicolaou test between 1999 and 2008. We compared the number and type of concurrent nongynecologic diagnoses addressed by primary care physicians and obstetrician-gynecologists during visits.
Results: A total of 7882 visits were included, representing 271 million primary visits for Papanicolaou tests. Primary care physicians were 2.41 times more likely to include one or more concurrent medical diagnoses during the preventive gynecologic visit compared with obstetrician-gynecologists (odds ratio, 2.41; 95% confidence interval, 1.63-3.57).
Conclusions: Primary care physicians are significantly more likely to address concurrent medical problems during preventive gynecologic visits compared with obstetrician-gynecologists. These findings demonstrate the vital role of primary care physicians in providing comprehensive health care to women, consistent with principles of the patient-centered medical home model.
abstract_id: PUBMED:8126402
How do family physicians prioritize delivery of multiple preventive services? Background: In spite of the recommendations of experts, little is known about the priority that physicians assign to various preventive services provided to patients within the time pressures and competing demands of the office visit.
Methods: A survey presenting the case of a 53-year-old woman was sent to a national random sample of 480 practicing family physicians. Physicians were asked which items on a list of preventive services they would provide during 5 minutes remaining at the end of an illness visit for sinusitis, and during a visit for a 30-minute physical examination. Descriptive analyses rank ordered the most commonly provided services. Additional analyses using chi-square and analysis of variance were used to characterize physicians who performed high and low levels of services recommended and not recommended by the US Preventive Services Task Force (USPSTF).
Results: Among 268 responding physicians, more than 50% provided smoking cessation advice, blood pressure, height, and weight measurements, and the scheduling of a return visit during the illness visit. During a physical examination visit, many other services, including breast examination, Papanicolaou test, pelvic examination, and ordering a mammogram were also commonly chosen. Physicians performing a high level of USPSTF-recommended preventive services and a low level of not recommended services were characterized by their young age, residency training, not being in solo practice, and greater experience with USPSTF recommendations.
Conclusions: Physicians offer more preventive services during patient visits for physical examinations than during visits for illness. Physician characteristics associated with the delivery of recommended levels of preventive services may be useful in identifying interventions that will direct medical resources toward the most effective preventive services.
abstract_id: PUBMED:19046443
Estimated time spent on preventive services by primary care physicians. Background: Delivery of preventive health services in primary care is lacking. One of the main barriers is lack of time. We estimated the amount of time primary care physicians spend on important preventive health services.
Methods: We analyzed a large dataset of primary care (family and internal medicine) visits using the National Ambulatory Medical Care Survey (2001-4); analyses were conducted 2007-8. Multiple linear regression was used to estimate the amount of time spent delivering each preventive service, controlling for demographic covariates.
Results: Preventive visits were longer than chronic care visits (M = 22.4, SD = 11.8, M = 18.9, SD = 9.2, respectively). New patients required more time from physicians. Services on which physicians spent relatively more time were prostate specific antigen (PSA), cholesterol, Papanicolaou (Pap) smear, mammography, exercise counseling, and blood pressure. Physicians spent less time than recommended on two "A" rated ("good evidence") services, tobacco cessation and Pap smear (in preventive visits), and one "B" rated ("at least fair evidence") service, nutrition counseling. Physicians spent substantial time on two services that have an "I" rating ("inconclusive evidence of effectiveness"), PSA and exercise counseling.
Conclusion: Even with limited time, physicians address many of the "A" rated services adequately. However, they may be spending less time than recommended for important services, especially smoking cessation, Pap smear, and nutrition counseling. Future research is needed to understand how physicians decide how to allocate their time to address preventive health.
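The method sketched in this abstract — regressing visit duration on indicators of which services were delivered, plus covariates, so each coefficient is read as the minutes attributable to that service — can be illustrated with a small, hedged example on synthetic data. The service list, covariate, and all numbers below are invented placeholders, not NAMCS variables or the study's estimates.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Synthetic visit-level indicators for services delivered, plus one covariate.
pap = rng.integers(0, 2, n)            # Pap smear performed (0/1)
tobacco = rng.integers(0, 2, n)        # tobacco cessation counseling (0/1)
nutrition = rng.integers(0, 2, n)      # nutrition counseling (0/1)
new_patient = rng.integers(0, 2, n)    # demographic/visit covariate (0/1)

# Invented "true" minutes per service on top of a 12-minute baseline, plus noise.
duration = (12 + 5 * pap + 2 * tobacco + 4 * nutrition
            + 3 * new_patient + rng.normal(0, 2, n))

# Ordinary least squares: each service coefficient is interpreted as the
# marginal visit time (in minutes) associated with delivering that service.
X = np.column_stack([np.ones(n), pap, tobacco, nutrition, new_patient])
beta, *_ = np.linalg.lstsq(X, duration, rcond=None)
for name, coef in zip(["baseline", "pap", "tobacco", "nutrition", "new_patient"], beta):
    print(f"{name:12s} {coef:5.2f} min")
```

With survey data such as NAMCS, the same regression would additionally need the visit weights and complex survey design when estimating standard errors.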
abstract_id: PUBMED:21380948
Trends in the provision of preventive women's health services by family physicians. Background And Objectives: Family medicine has experienced variations in scope and comprehensiveness of care in recent years. To investigate whether these changes in practice have impacted women's health services, we measured trends in the proportion of preventive women's health visits provided by family physicians nationally.
Methods: We analyzed the National Ambulatory Medical Care Survey to identify the trend in the proportion of preventive women's health visits to family physicians and obstetrician-gynecologists and others between 1995 to 2007.
Results: A total of 6,088 sample records were included in the study, representing 239 million preventive women's health visits. The percentage of preventive women's health visits provided by family physicians remained stable over the 12-year study period from 18.6% in 1995-1996 to 20.3% in 2007. Family physicians provided care for 28% of total preventive women's health visits occurring in non-metropolitan statistical areas.
Conclusions: Family physicians provided a stable amount of preventive women's health services between 1995 and 2007. Family medicine should continue to foster comprehensive residency training in preventive women's health care and inclusion of such services in future scope of practice.
abstract_id: PUBMED:34092371
Differences in the number of services provided by nurse practitioners and physicians during primary care visits. Background: Due to differential training, nurse practitioners (NPs) and physicians may provide different quantities of services to patients.
Purpose: To assess differences in the number of laboratory, imaging, and procedural services provided by primary care NPs and physicians.
Methods: Secondary analysis of 2012-2016 National Ambulatory Medical Care Survey (NAMCS), containing 308 NP-only and 73,099 physician-only patient visits, using multivariable regression and propensity score techniques.
Findings: On average, primary care visits with NPs versus physicians were associated with 0.521 fewer laboratory (95% CI -0.849, -0.192), and 0.078 fewer imaging services (95% CI -0.103,-0.052). Visits for routine and preventive care with NPs versus physicians were associated with 1.345 fewer laboratory (95% CI -2.037,-0.654), and 0.086 fewer imaging services (95% CI -0.118,-0.054) on average. Primary care visits for new problems with NPs versus physicians were associated with 0.051 fewer imaging services (95% CI -0.094,-0.007) on average.
Discussion: NPs provide fewer laboratory and imaging services than physicians during primary care visits.
abstract_id: PUBMED:29454081
Do preventive medicine physicians practice medicine? As some preventive medicine physicians have been denied medical licenses for not engaging in direct patient care, this paper attempts to answer the question, "Do preventive medicine physicians practice medicine?" by exploring the requirements of licensure, the definition of "practice" in the context of modern medicine, and by comparing the specialty of preventive medicine to other specialties which should invite similar scrutiny. The authors could find no explicit licensure requirement for either a certain amount of time in patient care or a number of patients seen. No physicians board certified in Public Health and General Preventive Medicine sit on any state medical boards. The authors propose that state medical boards accept a broad standard of medical practice, which includes the practice of preventive medicine specialists, for licensing purposes.
abstract_id: PUBMED:19273868
How often do physicians address other medical problems while providing prenatal care? Purpose: It is unknown to what extent physicians address multiple problems while providing prenatal care. The objective of this study was to determine the percentage of prenatal encounters with 1 or more secondary and tertiary nonobstetric diagnoses and compare rates between family physicians and obstetricians.
Methods: Using the National Ambulatory Medical Care Survey, 1995-2004, I analyzed prenatal visits to family physicians' and obstetricians' offices. The outcome measure was the percentage of prenatal encounters with 1 or more secondary and tertiary nonobstetric diagnoses seen by family physicians and obstetricians.
Results: There were 6,203 visit records that met study criteria, representing 223 million visits to obstetricians and 21 million visits to family physicians. Of the prenatal encounters with a family physician, 17.6% (95% confidence interval [CI], 12.9%-22.4%) included 1 or more secondary and tertiary nonobstetric diagnoses compared with 7.8% (95% CI, 6.1%-9.6%) of prenatal encounters with an obstetrician (P <.01). After controlling for other variables, being seen by a family physician, compared with being seen by an obstetrician, remained an independent predictor of a prenatal visit with an additional nonobstetric diagnosis (OR = 2.57; 95% CI, 1.82-3.64).
Conclusions: Family physicians diagnose nonobstetric problems frequently and considerably more often than obstetricians while providing prenatal care. This practice style enhances access to comprehensive primary care for women.
abstract_id: PUBMED:9597995
Illuminating the 'black box'. A description of 4454 patient visits to 138 family physicians. Background: The content and context of family practice outpatient visits have never been fully described, leaving many aspects of family practice in a "black box," unseen by policymakers and understood only in isolation. This article describes community family practices, physicians, patients, and outpatient visits.
Methods: Practicing family physicians in northeast Ohio were invited to participate in a multimethod study of the content of primary care practice. Research nurses directly observed consecutive patient visits, and collected additional data using medical record reviews, patient and physician questionnaires, billing data, practice environment checklists, and ethnographic fieldnotes.
Results: Visits by 4454 patients seeing 138 physicians in 84 practices were observed. Outpatient visits to family physicians encompassed a wide variety of patients, problems, and levels of complexity. The average patient paid 4.3 visits to the practice within the past year. The mean visit duration was 10 minutes. Fifty-eight percent of visits were for acute illness, 24% for chronic illness, and 12% for well care. The most common uses of time were history-taking, planning treatment, physical examination, health education, feedback, family information, chatting, structuring the interaction, and patient questions.
Conclusions: Family practice and patient visits are complex, with competing demands and opportunities to address a wide range of problems of individuals and families over time and at various stages of health and illness. Multimethod research in practice settings can identify ways to enhance the competing opportunities of family practice to improve the health of their patients.
abstract_id: PUBMED:7328190
Presidential address: preventive medical practice. Review of national programs in the past decade suggests that there is a developing consensus regarding the need for preventive services, but the proportion of them that physicians provide is decreasing. As teachers of preventive medicine, we should have a particular concern with the physician's performance in providing preventive services. Specialization, practice organizations, and comprehensiveness of payment for medical care appear to be related to the volume of preventive services provided. Organized primary care practice sites, where other health professionals are available, seem especially well-suited to providing preventive services. A review of several effective preventive activities involving physicians (child and adult immunizations, early detection and treatment of PKU infants, and stroke prevention) indicates that current prevention practice is less than desirable. Better performance can be attained through successful national and community programs of consumer and physician education. Implications of these observations for medical undergraduate and graduate education in prevention are discussed.
abstract_id: PUBMED:36369964
Preventive care visits with OB/GYNs and generalist physicians among reproductive-age women with chronic conditions. Objective: To examine services delivered during preventive care visits among reproductive-age women with and without chronic conditions by physician specialty.
Data Sources: National Ambulatory Medical Care Surveys (2011-2018).
Study Design: We examined provision of specific services during preventive care visits by physician specialty among reproductive-age female patients, overall and among women with five common chronic conditions (diabetes, hypertension, depression, hyperlipidemia, and asthma).
Data Collection/extraction Methods: The sample included preventive visits to OB/GYNs or generalist physicians where the patient was female, age 18-44, and not pregnant.
Principal Findings: In OB/GYN preventive visits, reproductive health services were more likely to be provided, while non-reproductive health services were less likely to be provided, both among reproductive-age female patients overall and among those with chronic conditions. For example, pap tests were provided in 44.5% of OB/GYN preventive visits (95% CI: 40.6-48.4) and in 21.4% of generalist preventive visits (95% CI: 17.2-26.6). Lipid testing was provided in 2.8% of OB/GYN preventive visits (95% CI: 1.7-3.9) and in 30.3% of generalist preventive visits (95% CI: 26.1-34.6).
Conclusions: Understanding the full range of care received in preventive visits across settings could guide recommendations to optimize where reproductive-age women with chronic conditions seek care.
Answer: Yes, physicians do address other medical problems during preventive gynecologic visits. A study analyzing data from the National Ambulatory Medical Care Survey and National Hospital Ambulatory Medical Care Survey found that primary care physicians were significantly more likely to include one or more concurrent medical diagnoses during preventive gynecologic visits compared with obstetrician-gynecologists. Specifically, primary care physicians were 2.41 times more likely to address concurrent medical problems during these visits (PUBMED:24390881). This demonstrates the vital role of primary care physicians in providing comprehensive health care to women, consistent with principles of the patient-centered medical home model. |