Instruction: Wrist electrogoniometry: are current mathematical correction procedures effective in reducing crosstalk in functional assessment? Abstracts: abstract_id: PUBMED:23538456 Wrist electrogoniometry: are current mathematical correction procedures effective in reducing crosstalk in functional assessment? Background: The recording of human movement is an essential requirement for biomechanical, clinical, and occupational analysis, allowing assessment of postural variation, occupational risks, and preventive programs in physical therapy and rehabilitation. The flexible electrogoniometer (EGM), considered a reliable and accurate device, is used for dynamic recordings of different joints. Despite these advantages, the EGM is susceptible to measurement errors, known as crosstalk. There are two known types of crosstalk: crosstalk due to sensor rotation and inherent crosstalk. Correction procedures have been proposed to correct these errors; however no study has used both procedures in clinical measures for wrist movements with the aim to optimize the correction. Objective: To evaluate the effects of mathematical correction procedures on: 1) crosstalk due to forearm rotation, 2) inherent sensor crosstalk; and 3) the combination of these two procedures. Method: 43 healthy subjects had their maximum range of motion of wrist flexion/extension and ulnar/radials deviation recorded by EGM. The results were analyzed descriptively, and procedures were compared by differences. Results: There was no significant difference in measurements before and after the application of correction procedures (P<0.05). Furthermore, the differences between the correction procedures were less than 5° in most cases, having little impact on the measurements. Conclusions: Considering the time-consuming data analysis, the specific technical knowledge involved, and the inefficient results, the correction procedures are not recommended for wrist recordings by EGM. abstract_id: PUBMED:25204740 Analysis of crosstalk in the mechanomyographic signals generated by forearm muscles during different wrist postures. Introduction: In this study, we analyzed the crosstalk in mechanomyographic (MMG) signals generated by the extensor digitorum (ED), extensor carpi ulnaris (ECU), and flexor carpi ulnaris (FCU) muscles of the forearm during wrist flexion (WF) and extension (WE) and radial (RD) and ulnar (UD) deviations. Methods: Twenty right-handed men (mean ± SD age=26.7 ± 3.83 years) performed the wrist postures. During each wrist posture, MMG signals were detected using 3 accelerometers. Peak cross-correlations were used to quantify crosstalk. Results: The level of crosstalk ranged from 1.69 to 64.05%. The wrist postures except the RD did not influence the crosstalk significantly between muscle pairs. However, muscles of the forearm compartments influenced the level of crosstalk for each wrist posture significantly. Conclusions: The results may be used to improve our understanding of the mechanics of the forearm muscles during wrist postures. abstract_id: PUBMED:31550976 Functional outcomes after salvage procedures for the destroyed wrist: an overview. The most widely used procedures for salvaging a destroyed wrist are four-corner arthrodesis, radiocarpal arthrodesis, proximal row carpectomy, total wrist arthrodesis, and total wrist replacement or resurfacing. 
The purpose of this article is to give an overview of the functional results obtained with the various salvage procedures and of the common methods for assessing the surgical outcomes. The outcomes are assessed by clinical measurements and scoring methods, but the actual functional status and well-being of the patients should be presented together with patient-reported outcomes. No salvage procedure can restore entirely full wrist function. Understanding indications, risks, and the outcomes of these procedures would favour a better decision for surgery and help choose the proper treatment from among the surgical options discussed with patients. abstract_id: PUBMED:35684906 Crosstalk Correction for Color Filter Array Image Sensors Based on Lp-Regularized Multi-Channel Deconvolution. In this paper, we propose a crosstalk correction method for color filter array (CFA) image sensors based on Lp-regularized multi-channel deconvolution. Most imaging systems with CFA exhibit a crosstalk phenomenon caused by the physical limitations of the image sensor. In general, this phenomenon produces both color degradation and spatial degradation, which are respectively called desaturation and blurring. To improve the color fidelity and the spatial resolution in crosstalk correction, the feasible solution of the ill-posed problem is regularized by image priors. First, the crosstalk problem with complex spatial and spectral degradation is formulated as a multi-channel degradation model. An objective function with a hyper-Laplacian prior is then designed for crosstalk correction. This approach enables the simultaneous improvement of the color fidelity and the sharpness restoration of the details without noise amplification. Furthermore, an efficient solver minimizes the objective function for crosstalk correction consisting of Lp regularization terms. The proposed method was verified on synthetic datasets according to various crosstalk and noise levels. Experimental results demonstrated that the proposed method outperforms the conventional methods in terms of the color peak signal-to-noise ratio and structural similarity index measure. abstract_id: PUBMED:24114225 An assessment of error-correction procedures for learners with autism. Prior research indicates that the relative effectiveness of different error-correction procedures may be idiosyncratic across learners, suggesting the potential benefit of an individualized assessment prior to teaching. In this study, we evaluated the reliability and utility of a rapid error-correction assessment to identify the least intrusive, most effective procedure for teaching discriminations to 5 learners with autism. The initial assessment included 4 commonly used error-correction procedures. We compared the total number of trials required for the subject to reach the mastery criterion under each procedure. Subjects then received additional instruction with the least intrusive procedure associated with the fewest number of trials and 2 less effective procedures from the assessment. Outcomes of the additional instruction were consistent with those from the initial assessment for 4 of 5 subjects. These findings suggest that an initial assessment may be beneficial for identifying the most appropriate error-correction procedure. abstract_id: PUBMED:36379758 Design of multi-axis micromotion system in TERS and its nonlinearity and crosstalk correction method. 
Tip-Enhanced Raman Spectroscopy (TERS) is an advanced analytical measurement technology combining Raman spectroscopy with Scanning Probe Microscopy that can detect molecular structure and chemical composition at the micro- and nano-scale. As an indispensable component, the micromotion system directly determines TERS spatial resolution. Existing multi-axis systems are often built from several single-axis nonlinear systems, treating the whole problem as a superposition of single-axis parts; multi-axis crosstalk is not fully considered at the system level and can leave the axes uncoordinated or even cause oscillation. Therefore, a multi-axis micromotion system for TERS and a corresponding correction method are proposed. An improved Duhem model, offering simple calculation without inversion, accurate matching, and fast response, is built to compensate the nonlinearity, and a feedforward decoupling method is designed for the crosstalk, providing favorable multi-axis coordination, good error tracking, and simplified controllers. Experimental results show that the method can simultaneously correct both the nonlinearity and the crosstalk of the multi-axis system. abstract_id: PUBMED:32938290 A systematic review of outcomes of wrist arthrodesis and wrist arthroplasty in patients with rheumatoid arthritis. Surgical management of end-stage rheumatoid wrists is a contentious topic. The standard surgical treatment has traditionally been wrist arthrodesis. Wrist arthroplasty, however, offers an alternative that preserves some wrist motion. A systematic review of MEDLINE, EMBASE and CENTRAL databases was conducted. Data from 23 studies representing 343 cases of wrist arthrodesis and 618 cases of wrist arthroplasty were included. Complication rates were 17% for arthrodesis and 19% for arthroplasty, and both procedures were effective at alleviating pain and improving grip strength. Functional assessment by Disabilities of the Arm, Shoulder, and Hand and Patient-Related Wrist Evaluation of arthroplasty patients revealed clinically meaningful functional improvement compared with preoperative measurements. In contrast to previously published findings, both procedures demonstrated comparable complication rates. While this can be speculated to be from advancements in prosthetics, robust long-term follow-up data on wrist arthroplasty are not available yet. abstract_id: PUBMED:29675825 Using an abbreviated assessment to identify effective error-correction procedures for individual learners during discrete-trial instruction. Previous research comparing the effectiveness of error-correction procedures has involved lengthy assessments that may not be practical in applied settings. We used an abbreviated assessment to compare the effectiveness of five error-correction procedures for four children with autism spectrum disorder or a developmental delay. During the abbreviated assessment, we sampled participants' responding with each procedure and completed the assessment before participants reached our mastery criterion. Then, we used the results of the abbreviated assessment to predict the most efficient procedure for each participant. Next, we conducted validation assessments, comparing the number of sessions, trials, and time required for participants to master targets with each procedure. Results showed correspondence between the abbreviated assessment and validation assessments for two of four participants and partial correspondence for the other two participants.
Findings suggest that a brief assessment may be a useful tool for identifying the most efficient error-correction procedure for individual learners. abstract_id: PUBMED:19495720 Radiological procedures in the traumatised wrist Injuries of the wrist are difficult to diagnose because of the complex and narrow anatomic structures. On the basis of precise clinical examination, X-rays, CT and MRI are valuable additional tools that can be used. In the case of bone injury, spiral computer tomography with multiplanar reformatting is currently the method of choice. MRI is indicated for the identification of soft tissue or ligamentous injury and avital fragments or necrosis. Other diagnostic tools for the wrist are currently of minor importance. Technical and methodological innovations allow ever better visualisation and classification of lesions, as well as their extent, thus enabling more targeted therapy. However, prerequisites of effective use include differential assessment and precise knowledge of the procedures. abstract_id: PUBMED:16467182 The rheumatoid wrist. Wrist involvement is common in patients with rheumatoid arthritis. Individual patient assessment is important in determining functional deficits and treatment goals. Patients with persistent disease despite aggressive medical management are candidates for surgery. Soft-tissue procedures offer good symptomatic relief and functional improvement in the short term. Extensor and flexor tendons may rupture because of synovial infiltration and bony irritation. When rupture occurs, direct repair usually is not possible. However, when joints that are motored by the ruptured tendon are still functional, tendon transfer or grafting may be considered. Because of the progressive nature of the disease, dislocation and end-stage arthritis often require stabilization with bony procedures. The distal radioulnar joint is usually affected first and is commonly treated with either the Darrach or the Sauvé-Kapandji procedure. Partial wrist fusion offers a compromise between achieving stability of the affected radiocarpal joint and maintaining motion at the midcarpal joint. For pancarpal arthritis, total wrist fusion offers reliable pain relief at the cost of motion. Total wrist arthroplasty is an alternative that preserves motion; however, the outcomes of total wrist replacement are still being evaluated. Answer: Based on the provided abstracts, current mathematical correction procedures for reducing crosstalk in wrist electrogoniometry do not appear to be effective. In a study evaluating the effects of mathematical correction procedures on crosstalk due to forearm rotation, inherent sensor crosstalk, and the combination of these two procedures, no significant difference in measurements was found before and after the application of correction procedures. Furthermore, the differences between the correction procedures were less than 5° in most cases, having little impact on the measurements. Considering the time-consuming data analysis, the specific technical knowledge involved, and the inefficient results, the correction procedures are not recommended for wrist recordings by electrogoniometry (PUBMED:23538456).
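Note on methods: the MMG study above (PUBMED:25204740) quantifies crosstalk with peak cross-correlations between signals recorded over adjacent forearm muscles. As a rough illustration of that general technique only (the abstract does not describe the authors' exact filtering or windowing, so this is not their pipeline), the Python sketch below computes the normalized peak cross-correlation between two signals and reports it as a percentage; the synthetic signals, sampling rate, and noise levels are hypothetical.

```python
import numpy as np

def peak_cross_correlation(x, y):
    """Peak of the normalized cross-correlation between two signals,
    expressed as a percentage (0-100). Higher values mean more of one
    channel's signal is also present on the other, i.e. more crosstalk."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    y = np.asarray(y, dtype=float) - np.mean(y)
    cc = np.correlate(x, y, mode="full")             # correlation at every lag
    norm = np.sqrt(np.sum(x ** 2) * np.sum(y ** 2))  # energy normalization
    return 0.0 if norm == 0 else 100.0 * np.max(np.abs(cc)) / norm

# Hypothetical example: a 1-second "source" signal sampled at 1 kHz and a
# neighboring channel that picks up 30% of it plus independent noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1000)
source = np.sin(2 * np.pi * 12 * t) + 0.2 * rng.standard_normal(t.size)
neighbor = 0.3 * source + 0.5 * rng.standard_normal(t.size)
print(f"estimated crosstalk: {peak_cross_correlation(source, neighbor):.1f}%")
```

Values computed this way fall in the same 0-100% range as the crosstalk levels reported in that abstract, which is consistent with reading its percentages as peak correlation magnitudes.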
Instruction: Do individuals with autism spectrum disorders infer traits from behavior? Abstracts: abstract_id: PUBMED:19298473 Do individuals with autism spectrum disorders infer traits from behavior? Background: Traits and mental states are considered to be inter-related parts of theory of mind. Attribution research demonstrates the influential role played by traits in social cognition. However, there has been little investigation into how individuals with autism spectrum disorders (ASD) understand traits. Method: The ability of individuals with ASD to infer traits from descriptions of behavior was investigated by asking participants to read trait-implying sentences and then to choose one of two words that best related to the sentence. Results: In Experiment 1, individuals with ASD performed similarly to matched controls in being faster at choosing the trait in comparison to the semantic associate of one of the words in the sentence. The results from Experiments 1 and 2 provided converging evidence in suggesting that inferring traits from textual descriptions of behavior occurs with relatively little effort. The results of Experiment 3 suggested that making trait inferences took priority over inferring actions or making semantic connections between words. Conclusions: Individuals with ASD infer traits from descriptions of behavior effortlessly and spontaneously. The possibility of trait inference being a spared socio-cognitive function in autism is discussed. abstract_id: PUBMED:26702351 Is impaired joint attention present in non-clinical individuals with high autistic traits? Background: Joint attention skills are impaired in individuals with autism spectrum disorder (ASD). Recently, varying degrees of autistic social attention deficit have been detected in the general population. We investigated gaze-triggered attention in individuals with high and low levels of autistic traits under visual-auditory cross-modal conditions, which are more sensitive to social attention deficits than unimodal paradigms. Methods: Sixty-six typically developing adults were divided into low- and high-autistic-trait groups according to scores on the autism-spectrum quotient (AQ) questionnaire. We examined gaze-triggered attention under visual-auditory cross-modal conditions. Two sounds (a social voice and a non-social tone) were manipulated as targets to infer the relationship between the cue and the target. Two types of stimulus onset asynchrony (SOA) conditions (a shorter 200-ms SOA and a longer 800-ms SOA) were used to directly test the effect of gaze cues on the detection of a sound target across different temporal intervals. Results: Individuals with high autistic traits (high-AQ group) did not differ from those with low autistic traits (low-AQ group) with respect to gaze-triggered attention when voices or tones were used as targets under the shorter SOA condition. In contrast, under the longer SOA condition, gaze-triggered attention was not observed in response to tonal targets among individuals in the high-AQ group, whereas it was observed among individuals in the low-AQ group. The results demonstrated that cross-modal gaze-triggered attention is short-lived in individuals with high autistic traits. Conclusions: This finding provides insight into the cross-modal joint attention function among individuals along the autism spectrum from low autistic traits to ASD and may further our understanding of social behaviours among individuals at different places along the autistic trait continuum. 
abstract_id: PUBMED:29997661 A Comparison of Autistic-Like Traits in the Relatives of Patients with Autism and Schizophrenia Spectrum Disorder. Objective: This study aimed to identify autistic-like traits in relatives of patients with schizophrenia and autism spectrum disorder. Method: A causal-comparative research design was utilized. Fifty individuals among the first-degree relatives of patients with autism spectrum disorder and 50 individuals among the first-degree relatives of patients with schizophrenia spectrum disorder were selected. Autistic-like traits were evaluated with the Autism-Spectrum Quotient (AQ). Multivariate analysis of variance was used to compare the autistic-like traits in the two groups. Results: First-degree relatives of individuals with autism spectrum disorder scored higher on deficits in social skills, communication, and attention, and on attention to detail, and scored lower on deficits in imagination, compared with relatives of individuals with schizophrenia spectrum disorder. Conclusion: Relatives of individuals with autism spectrum disorder showed higher rates of autistic-like traits than relatives of patients with schizophrenia spectrum disorder, the imagination subscale being the only exception. abstract_id: PUBMED:31260907 Autistic traits in individuals self-defining as transgender or nonbinary. Background: Autism spectrum traits are increasingly being reported in individuals who identify as transgender, and the presence of such traits has implications for clinical support. To date, little is known about autism traits in individuals who identify as nonbinary. Aims: To empirically contribute to current research by examining autistic traits in a self-identifying transgender and nonbinary gender group. Method: One hundred and seventy-seven participants responded to a survey consisting of the Autism Spectrum Quotient (AQ), the Empathy Quotient (EQ), the Systematising Quotient (SQ) and the Reading the Mind in the Eyes Task (RME). Comparisons were made between cisgender, transgender and nonbinary groups. Results: Individuals with autism spectrum disorder (ASD) or meeting the AQ cut-off score for ASD were over-represented in both the transgender and nonbinary groups. The key variables differentiating the transgender and nonbinary groups from the cisgender group were systematising and empathy. Levels of autistic traits and cases of ASD were higher in individuals assigned female at birth than those assigned male at birth. Conclusions: A proportion of individuals seeking help and advice about gender identity will also present autistic traits and in some cases undiagnosed autism. Lower levels of empathy, diminished theory of mind ability and literalness may impede the delivery of effective support. Clinicians treating transgender and nonbinary individuals should also consider whether clients, especially those assigned female at birth, have an undiagnosed ASD. abstract_id: PUBMED:38125863 Lack of action-sentence compatibility effect in non-clinical individuals with high autistic traits. Introduction: Patients with autism spectrum disorder (ASD) exhibit atypical responses to language use and comprehension. Recently, various degrees of primary autistic symptoms have been reported in the general population. We focused on autistic traits and examined the differences in mechanisms related to language comprehension using the action-sentence compatibility effect (ACE).
ACE is a phenomenon in which response is facilitated when the action matches the behavior described in the statement. Methods: In total, 70 non-clinical individuals were divided into low autistic and high autistic groups according to their autism spectrum quotient (AQ) scores. ACEs with adverbs and onomatopoeias were examined using a stimulus set of movement-related sentences. A choice-response task helped determine the correct sentence using antonym adverbs (slow and fast) and onomatopoeia (quick and satto) related to the speed of the movement. Results: The low-AQ group showed ACEs that modulated the reaction time in antonym sentences. The high-AQ group showed less temporal modulation, and their overall reaction time was shorter. The low-AQ group showed faster reaction times for onomatopoeic words; however, the high-AQ group showed a tendency to reverse this trend. In individuals with intermediate autistic traits, the angle effect may be moderated by individual differences in motor skills and experience rather than autistic traits. The stimulus presentation involved a passive paradigm. Discussion: This study provides insight into language comprehension processes in non-clinical individuals ranging from low to high autistic idiosyncrasy and elucidates language and behavior in individuals at different locations on the autistic trait continuum. abstract_id: PUBMED:29101779 Decreased reward value of biological motion among individuals with autistic traits. The Social Motivation Theory posits that a reduced sensitivity to the value of social stimuli, specifically faces, can account for social impairments in Autism Spectrum Disorders (ASD). Research has demonstrated that typically developing (TD) individuals preferentially orient towards another type of salient social stimulus, namely biological motion. Individuals with ASD, however, do not show this preference. While the reward value of faces to both TD and ASD individuals has been well-established, the extent to which individuals from these populations also find human motion to be rewarding remains poorly understood. The present study investigated the value assigned to biological motion by TD participants in an effort task, and further examined whether these values differed among individuals with more autistic traits. The results suggest that TD participants value natural human motion more than rigid, machine-like motion or non-human control motion, but this preference is attenuated among individuals reporting more autistic traits. This study provides the first evidence to suggest that individuals with more autistic traits find a broader conceptualisation of social stimuli less rewarding compared to individuals with fewer autistic traits. By quantifying the social reward value of human motion, the present findings contribute an important piece to our understanding of social motivation in individuals with and without social impairments. abstract_id: PUBMED:37248706 The effect of autistic traits on prosocial behavior: The chain mediating role of received social support and perceived social support. Lay Abstract: Autistic traits are known to be associated with a set of core symptoms of autism spectrum disorder. The impact of autistic traits on prosocial behavior, including a consideration of the role of social support, has never been explored. We investigated whether and how social support mediates the autistic trait-prosocial behavior relationship. 
We found that autistic traits can influence prosocial behavior not only through received social support and perceived social support but also indirectly through the chain mediating effects of received social support and perceived social support. This study contributes to the understanding of how and to what extent prosocial behavior is influenced by autistic traits. Future work is required to further investigate the clinical autism spectrum disorder samples and cross-cultural applicability of the model found in this study. abstract_id: PUBMED:32425859 The Effect of Autistic Traits on Social Orienting in Typically Developing Individuals. Autism spectrum disorder (ASD) is a complex neurodevelopmental disorder characterized by wide ranging and heterogeneous changes in social and cognitive abilities, including deficits in orienting attention during early processing of stimuli. Investigators have found that there is a continuum of autism-like traits in the general population, suggesting that these autistic traits may be examined in the absence of clinically diagnosed autism. To provide evidence for the continuum of autistic traits in terms of social attention and to provide insights into social attention deficits in people with autism, the current study was conducted to examine the effect of autistic traits of typically developing individuals on social orienting using a spatial cueing paradigm. The typically developing individuals who participated in this study were divided into high autistic traits (HA) and low autistic traits groups using the Autism Quotient scale. All participants completed a spatial cueing task in which social cues (gaze) and non-social cues (arrow) were presented under different cue predictability conditions (predictive vs. non-predictive) with different SOAs (100 ms vs. 400 ms). The results showed that compared to low autistic individuals, high autistic individuals had less benefit from non-predictive social cues but greater benefit from non-social ones, providing evidence that such spatial attention impairment in high autistic individuals is specific to the social domain. Interestingly, the smaller benefit from non-predictive social cues in high autistic individuals was shown only in the 400 ms condition, not in the 100 ms condition, suggesting that their difficulties in orienting to non-predictive social cues may be caused by a deficiency in spontaneously effortful control processing. abstract_id: PUBMED:31738945 The sex-specific association between autistic traits and eating behavior in childhood: An exploratory study in the general population. Children with Autism Spectrum Disorder (ASD) often exhibit problematic eating behaviors, an observation mostly based on male dominated, clinical ASD study samples. It is, however, important to evaluate both children with an ASD diagnosis and children with subclinical autistic traits as both often experience difficulties. Moreover, considering the suggestion of a possible girl-specific ASD phenotype, there is a need to determine whether autistic traits are related with problematic eating behaviors in girls as well. This study explores the sex-specific association between autism (both autistic traits and diagnosed ASD) and eating behavior in middle childhood in Generation R, a prospective population-based cohort from fetal life onwards. We collected parental reports of autistic traits at six years (Social Responsiveness Scale) and of eating behavior at ten years (Children's Eating Behaviour Questionnaire). 
In this cohort of 3559 children, autistic traits at six years were associated with more Picky Eating, Emotional Eating and Food Responsiveness in later childhood (e.g. adjusted B for Picky Eating = 0.07; 95% CI: 0.03, 0.11). Stratified analyses showed that in girls, autistic traits were associated with more Emotional Overeating and Emotional Undereating (e.g. adjusted B for Emotional Undereating = 0.12; 95% CI: 0.04, 0.20), while no associations were found for boys. Results comparing children with and without an ASD diagnosis in the cohort largely confirm these associations (e.g. in girls, adjusted B for Emotional Undereating = 0.72; 95% CI: 0.01, 1.42). Our results point to a sex-specific association between autism and eating behavior in middle childhood. Also, our study is the first study to show that autistic traits are associated with emotionally based eating problems in girls and possibly represent part of a girl-specific ASD phenotype. abstract_id: PUBMED:20804247 Executive function in individuals with subthreshold autism traits. Objective: Recent research has documented increased psychosocial difficulties in individuals who report higher-than-typical autistic traits but without an Autism Spectrum Disorder (ASD) diagnosis. Less is known, however, regarding the cognitive profile of individuals with subthreshold autism symptomatology. The objective of the present study was to provide additional insight into this issue and examine whether young adults who report higher degrees of autism traits also report experiencing increased difficulties with executive control. Method: The Behavior Rating Inventory of Executive Function was utilized to evaluate behavioral aspects of executive functioning in 66 and 28 individuals who endorsed high and low subthreshold levels of autism symptomatology, respectively. Results: After accounting for Attention Deficit/Hyperactivity Disorder (ADHD) symptomatology at both the group and individual participant levels, we found that autism traits continued to explain a significant amount of variance in participants' overall level of executive function (Global Executive Composite) as well as within most individual executive domains. Interestingly, the high and low trait groups did not differ on the inhibitory control and organization of materials scales, areas of functioning that appears to be largely spared in individuals with ASD as well. Conclusions: Findings from the present study are consistent with past research linking ASD and executive control impairment. In addition, ASD and ADHD traits were associated with unique contributions to the executive control profile of individuals with subthreshold autism symptomatology. This finding underscores the importance of accounting for ADHD symptomatology in studying ASD. Answer: Yes, individuals with autism spectrum disorders (ASD) do infer traits from behavior. A study investigated this by asking participants with ASD to read sentences that implied certain traits and then choose a word that best related to the sentence. The results showed that individuals with ASD performed similarly to matched controls, being faster at choosing the trait in comparison to a semantic associate of one of the words in the sentence. This suggests that inferring traits from textual descriptions of behavior occurs with relatively little effort for individuals with ASD, and making trait inferences took priority over inferring actions or making semantic connections between words. 
The study concluded that the ability to infer traits from descriptions of behavior is an effortless and spontaneous process for individuals with ASD, indicating that this aspect of socio-cognitive function may be spared in autism (PUBMED:19298473).
Instruction: Can we predict the duration of chemotherapy-induced neutropenia in febrile neutropenic patients, focusing on regimen-specific risk factors? Abstracts: abstract_id: PUBMED:16322116 Can we predict the duration of chemotherapy-induced neutropenia in febrile neutropenic patients, focusing on regimen-specific risk factors? A retrospective analysis. Background: The aim of the study was to elaborate a predictive model for the duration of chemotherapy-induced neutropenia (CIN) allowing the identification of patients with a higher risk of complications, especially complicated febrile neutropenia, who might benefit from preventive measures. Patients And Methods: A score ranging from 0 to 4 on the basis of expected CIN was attributed to each cytotoxic agent given as part of chemotherapy treatment in solid tumours for patients with febrile neutropenia (FN). The individual scores were combined into several overall scores. Results: A total of 203 patients with FN were eligible for this retrospective analysis. We were able to identify two groups of patients with statistically different neutropenia durations with median durations until hematological recovery of ANC > or =0.5 and > or =1.0 x 10(9)/l, being respectively 6 versus 4 days (P = 0.03) and 8 versus 6 days (P = 0.01). Conclusions: The duration of neutropenia is directly influenced by the aggressiveness of the chemotherapy regimen. In this retrospective study, we were able to identify a group of patients who needed two more additional days to recover from grade 3 and grade 4 neutropenia, based on the degree of aggressiveness of the cytotoxic agents used. abstract_id: PUBMED:31667604 Is current initial empirical antibiotherapy appropriate to treat bloodstream infections in short-duration chemo-induced febrile neutropenia? Introduction: Fever of unknown origin is by far the most common diagnosis in low-risk febrile neutropenic patients undergoing chemotherapy. The current empirical regimen combines amoxicillin-clavulanic acid and fluoroquinolones in low-risk neutropenic patients. The aim of this study was to assess the appropriateness of antibiotherapy and the outcome of bloodstream infections (BSI) in patients with expected neutropenia of short duration. Methods: This 2-year monocentric retrospective study included all consecutive neutropenic febrile adult patients with expected duration of neutropenia ≤ 7 days. They were classified into low- and high-risk groups for complications using the MASCC index. Appropriateness of initial empirical antibiotic regimen was assessed for each BSI. Multivariate analysis was performed to identify factors associated with mortality. Results: Over the study period, 189 febrile episodes with positive blood cultures in neutropenic patients were reported, of which 44 occurred during expected duration of neutropenia ≤ 7 days. Patients were classified as high-risk (n = 27) and low-risk (n = 17). Gram-negative bacteria BSI represented 57% of cases, including only two multidrug-resistant bacteria in high-risk patients. Initial empirical antibiotherapy was appropriate in 86% of cases, and inappropriate in the event of coagulase-negative Staphylococcus BSI (14%), although the outcome was always favorable. In low-risk patients, no deaths and only 12% of severe complications were reported, contrasting with mortality and complication rates of 48% (p < 0.001) and 63% in high-risk patients (p < 0.001), respectively. 
Conclusions: Outcome of BSI is favorable in low-risk febrile neutropenic patients, even with inappropriate empirical initial antibiotic regimen for coagulase-negative Staphylococcus BSI. Initial in-hospital assessment and close monitoring of these patients are however mandatory. abstract_id: PUBMED:37664262 Classification of Chemotherapy-Induced Febrile Neutropenic Episodes Into One of the Three Febrile Neutropenic Syndromes. Introduction Febrile neutropenia is a commonly encountered medical emergency in patients undergoing cancer treatment and can delay and modify the course of treatment and even lead to dire outcomes, including death. The cause of fever in a post-chemotherapy-induced neutropenic patient can be confusing to treating physicians. A review of the literature demonstrated that blood culture results could determine the cause of febrile neutropenia in only approximately 10% to 25% of patients. The objective of our study was to measure the incidence of positive blood cultures, urine cultures, and other body fluid cultures resulting in chemotherapy-induced neutropenia and further classify fever episodes into three neutropenic fever syndromes, such as microbiologically documented, clinically suspected, or unknown causes of fever, respectively. Methods We conducted a prospective observational study on 399 chemotherapy-induced neutropenic fever episodes with the aim of classifying them into one of the three neutropenic syndromes. We tried to document the cause of the fever in these patients. We also noted the type of cancer treatment regimen they were on and correlated their clinical profile with their body fluid cultures, including blood cultures, urine cultures, and other body fluid cultures. We then categorized each fever episode into one of three neutropenic syndromes. Results We studied 399 febrile neutropenic episodes. We were able to microbiologically document the cause of fever in 39% of the cases, and we obtained growth in 51 out of 399 blood cultures (13%), which was comparable to the available literature, and urine culture showed growth in 62 out of 399 cultures (16%), while other body cultures such as pus culture, bile culture, and bronchioalveolar lavage cultures collectively showed growth in 42 out of 399 episodes (10%). The most common bacteria isolated in both blood and urine cultures were Escherichia coli. Cumulatively, including blood, urine, and body fluid cultures, we were able to classify 39% (155 out of 399 cases) of febrile neutropenic episodes as microbiologically documented. The cause of fever was clinically suspected by means of careful history taking and an extensive physical examination in 31% (125 out of 399) without growth evidence in blood cultures, urine cultures, or any other body fluid culture. The cause of fever remained unknown in 119 cases (30%) of patients and was classified under the unknown cause of fever. Conclusions We conclude by stating that the study of fever in a neutropenic patient should include a thorough history and clinical evaluation of blood, urine, and other body fluid cultures instead of solely relying on blood culture results. We recommend further classifying patients into one of the three neutropenic fever syndromes, such as those that are microbiologically documented, clinically suspected, or unknown. Our blood cultures were able to give us a 13% positivity rate, whereas microbiologically, we were able to isolate an organism likely causing fever in 39% of patients. 
The cause of fever was suspected clinically in 31% of patients, but we were unsuccessful in microbiologically documenting any culture growth in blood, urine, or any other body fluid culture. The cause of fever remained a mystery and unknown to us without any microbiological or clinical cues in 119 cases (30%) of febrile neutropenic episodes. abstract_id: PUBMED:35893100 Procalcitonin as a Predictive Tool for Death and ICU Admission among Febrile Neutropenic Patients Visiting the Emergency Department. Background and Objectives: Risk stratification tools for febrile neutropenia exist but are infrequently utilized by emergency physicians. Procalcitonin may provide emergency physicians with a more objective tool to identify patients at risk of decompensation. Materials and Methods: We conducted a retrospective cohort study evaluating the use of procalcitonin in cases of febrile neutropenia among adult patients presenting to the Emergency Department compared to a non-neutropenic, febrile control group. Our primary outcome measure was in-hospital mortality with a secondary outcome of ICU admission. Results: Among febrile neutropenic patients, a positive initial procalcitonin value was associated with significantly increased odds of inpatient mortality after adjusting for age, sex, race, and ethnicity (AOR 9.912, p < 0.001), which was similar, though greater than, our non-neutropenic cohort (AOR 2.18, p < 0.001). All febrile neutropenic patients with a positive procalcitonin were admitted to the ICU. Procalcitonin had a higher sensitivity and negative predictive value (NPV) in regard to mortality and ICU admission for our neutropenic group versus our non-neutropenic control. Conclusions: Procalcitonin appears to be a valuable tool when attempting to risk stratify patients with febrile neutropenia presenting to the emergency department. Procalcitonin performed better in the prediction of death and ICU admission among patients with febrile neutropenia than a similar febrile, non-neutropenic control group. abstract_id: PUBMED:30953135 Factors associated with emergent colectomy in patients with neutropenic enterocolitis. Purpose: Neutropenic enterocolitis (NEC) is a severe complication of neutropenia. NEC is characterized by segmental ulceration, intramural inflammation, and necrosis. Factors present in patients who underwent colectomy have never been studied. The present study aimed to describe the clinical factors present in patients who underwent emergent colectomy for the treatment of neutropenic enterocolitis. Methods: Patients admitted with neutropenic enterocolitis from November 2009 to May 2018 were retrospectively analyzed. Logistic regression analysis was used to determine clinical factors associated with emergent colectomy. Results: Thirty-nine patients with NEC were identified. All patients had a hematological disorder. Medical treatment was the only management in 30 (76.9%) patients, and 9 (23.1%) patients underwent colectomy. No differences were found between the treatment groups regarding sex, age, or comorbidities. Patients were more likely to undergo colectomy if they developed abdominal distention (OR = 12, p = 0.027), hemodynamic failure (OR = 6, p = 0.042), respiratory failure (OR = 17.5, p = 0.002), multi-organic failure (OR = 9.6, p = 0.012), and if they required ICU admission (OR = 11.5, p = 0.007). Respiratory failure was the only independent risk factor for colectomy in multivariable analysis. 
In-hospital mortality for the medical and surgical treatment groups was 13.3% (n = 4) and 44.4% (n = 4), respectively (p = 0.043). Conclusions: In our study, most NEC patients were treated conservatively. Patients were more likely to undergo colectomy if they developed organ failures or required ICU admission. Early surgical consultation is suggested in all patients with NEC. abstract_id: PUBMED:31385488 Risk factors for bacteremia in children with febrile neutropenia Background/aim: Bacteremia remains an important cause of morbidity and mortality during febrile neutropenia (FN) episodes. We aimed to define the risk factors for bacteremia in febrile neutropenic children with hemato-oncological malignancies. Materials And Methods: The records of 150 patients aged ≤18 years who developed FN in hematology and oncology clinics were retrospectively evaluated. Patients with bacteremia were compared to patients with negative blood cultures. Results: The mean age of the patients was 7.5 ± 4.8 years. Leukemia was more prevalent than solid tumors (61.3% vs. 38.7%). Bacteremia was present in 23.3% of the patients. Coagulase-negative staphylococci were the most frequently isolated microorganism. Leukopenia, severe neutropenia, positive peripheral blood and central line cultures during the previous 3 months, presence of a central line, previous FN episode(s), hypotension, tachycardia, and tachypnea were found to be risk factors for bacteremia. Positive central line cultures during the previous 3 months and presence of previous FN episode(s) were shown to increase bacteremia risk by 2.4-fold and 2.5-fold, respectively. Conclusion: Presence of a bacterial growth in central line cultures during the previous 3 months and presence of any previous FN episode(s) were shown to increase bacteremia risk by 2.4-fold and 2.5-fold, respectively. These factors can predict bacteremia in children with FN. abstract_id: PUBMED:21629636 Microbial etiology of febrile neutropenia. Bacterial and fungal infections are a major cause of morbidity and mortality among neutropenic patients. The choice of empiric antimicrobial regimen is based on susceptibility pattern of locally prevalent pathogens. From 64 febrile neutropenic patients with clinical sepsis, blood and other appropriate clinical specimens were processed to determine bacterial and fungal spectrum and their antimicrobial susceptibility pattern. Risk factors for developing sepsis were determined by case-control study. 68 organisms were recovered. Fifteen (22.05%) were Gram-positive cocci with predominance of methicillin Sensitive S. aureus (10.29%), 47 (69.11%) were Gram-negative rods with predominance of Klebsiella pneumoniae (30.88%) and four were Non albicans Candida. 81% and 60% of Klebsiella and E. coli were ESBL producers. All species of Candida were sensitive to amphoterecin B and voriconazole. Duration and extent of neutropenia, chemotherapy, immunosuppressive therapy, altered mucosal barriers and presence of central venous lines were statistically significant risk factors for developing sepsis. Gram-negative bacteria were the predominant isolates. The choice of therapy in neutropenic patients should be formulated based on local spectrum of microbes and local and regional resistance patterns. abstract_id: PUBMED:30251730 The relationship between mortality and microbiological parameters in febrile neutropenic patients with hematological malignancies. 
Objectives: To determine risk factors for mortality in febrile neutropenic cases with hematologic malignancy. Patients with hematologic diseases are more prone to infections, which are frequent causes of mortality. Methods: This retrospective study was performed using data from 164 febrile neutropenic cases with hematologic malignancies who were followed up in a hematology clinic of a tertiary health care center between 2011 and 2015. The relationship between descriptive and clinical parameters and rates of mortality on the 7th and the 21st days was investigated. Results: Patients with an absolute neutrophil count less than 100/mm3, duration of neutropenia longer than 7 days, pneumonia or gastrointestinal foci of infection, central catheterization (p=0.025), isolation of Gram-negative bacteria in culture, carbapenem resistance, septic shock, and bacterial growth during intravenous administration of antibiotic treatment were at greater risk of mortality on both the 7th and the 21st days. The final multivariate logistic regression results showed that pneumonia (p<0.0001), septic shock (p=0.004) and isolation of Gram-negative bacteria (p=0.032) were statistically significant risk factors. Conclusion: Early diagnosis and appropriate treatment of serious infections, which are important causes of morbidity and mortality, are crucial in patients with febrile neutropenia. Thus, each center should closely follow up causes of infection and establish its own empirical antibiotherapy protocols to accomplish better results in the management of febrile neutropenia. abstract_id: PUBMED:29456976 Respiratory Viruses in Febrile Neutropenic Patients with Respiratory Symptoms. Background: Respiratory infections are a frequent cause of fever in neutropenic patients, yet respiratory viral infections are not frequently considered as a diagnosis, which contributes to high morbidity and mortality in these patients. Materials And Methods: This prospective study included 36 patients with neutropenia admitted to hospital who were eligible for inclusion with fever (a single temperature of >38.3°C or a sustained temperature of >38°C for more than 1 h) and upper or lower respiratory symptoms. Sampling was performed from the throat of each patient with a sterile swab. All materials were analyzed by quantitative real-time multiplex polymerase chain reaction covering the following viruses: influenza, parainfluenza virus (PIV), rhinovirus (RV), human metapneumovirus, and respiratory syncytial virus (RSV). Results: RV was the most frequently detected virus, followed by RSV. PIV was not present in any of the tested samples. Furthermore, no substantial differences in the distribution of specific viral species were observed based on age, sex, neutropenia duration, hematological disorder, and respiratory tract symptoms and signs (P > 0.05). Conclusion: Our prospective study supports the hypothesis that respiratory viruses play an important role in the development of neutropenic fever, and thus has the potential to individualize infection treatment and to reduce the extensive use of antibiotics in immunocompromised patients with neutropenia. abstract_id: PUBMED:25769601 Febrile neutropenic infection occurred in cancer patients undergoing autologous peripheral blood stem cell transplantation.
Objective: The objective of this study was to investigate the incidence, risk factors, and clinical and prognostic characteristics of febrile infection that occurred during the neutropenic period in cancer patients who underwent autologous peripheral blood stem cell transplantation (APBSCT). Methods: Clinical data of all the APBSCT cases from 2007 to 2009 in Sun Yat-sen University Cancer Center were collected and retrospectively analyzed. Results: Eighty-nine APBSCT subjects were investigated. Neutropenia usually occurred on the 4th day (range, 0-15) after transplantation and lasted 6 (range, 3-27) days. Febrile neutropenia occurred in 78.7% patients and lasted around 3 (range, 1-20) days and no infection-related deaths were observed. For febrile episodes, 12 patients (17.1%) were diagnosed as having microbiologically documented infections (MDI). Initial empirical antimicrobial therapy was given to all cases of post-APBSCT febrile neutropenia, of which 44 cases (62.9%) were effective. Febrile neutropenia occurred in 25/34 (73.5%) patients who were given antifungal drugs for prophylaxis. Conclusions: Neutropenic infection is still the major complication in APBSCT patients and duration of neutropenia is one of the major risk factors. Prophylactic administration of antifungal drugs seems to be invalid to reduce post-APBSCT infection. Sufficient doses of broad-spectrum antibiotics at the early stage lead to better results of initial antimicrobial treatment. Answer: Yes, it is possible to predict the duration of chemotherapy-induced neutropenia (CIN) in febrile neutropenic patients by focusing on regimen-specific risk factors. A retrospective analysis aimed to develop a predictive model for the duration of CIN, which could help identify patients at higher risk of complications, such as complicated febrile neutropenia. This study assigned a score to each cytotoxic agent used in chemotherapy treatment for solid tumors in patients with febrile neutropenia. The scores were based on the expected CIN and were combined into overall scores. The analysis of 203 patients revealed that the duration of neutropenia is directly influenced by the aggressiveness of the chemotherapy regimen. Two groups of patients with statistically different neutropenia durations were identified, with median durations until hematological recovery of ANC ≥0.5 and ≥1.0 x 10^9/l being 6 versus 4 days and 8 versus 6 days, respectively. This suggests that patients who received more aggressive chemotherapy regimens needed approximately two additional days to recover from grade 3 and grade 4 neutropenia (PUBMED:16322116). Therefore, by considering the specific chemotherapy regimen and the aggressiveness of the cytotoxic agents used, healthcare providers can predict the duration of CIN in febrile neutropenic patients and potentially tailor preventive measures accordingly.
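As an aside on how the regimen-based prediction in PUBMED:16322116 could be operationalized: the study assigns each cytotoxic agent a score from 0 to 4 according to its expected CIN and combines the individual scores into an overall regimen score that separates shorter from longer neutropenia. The abstract does not publish the per-agent scores or the cut-off, so every number in the sketch below is a hypothetical placeholder meant only to show the shape of such a rule, not the study's actual values.

```python
# Hypothetical illustration of a regimen-aggressiveness score for predicting
# chemotherapy-induced neutropenia (CIN) duration, loosely following the idea
# in PUBMED:16322116. Per-agent scores and the cut-off are made-up placeholders.

AGENT_SCORES = {          # hypothetical scores on the study's 0-4 scale
    "cisplatin": 2,
    "doxorubicin": 3,
    "docetaxel": 4,
    "gemcitabine": 2,
    "5-fluorouracil": 1,
}
HIGH_RISK_CUTOFF = 5      # hypothetical threshold for the combined score

def regimen_score(agents):
    """Sum the per-agent CIN scores for one chemotherapy regimen."""
    return sum(AGENT_SCORES.get(agent, 0) for agent in agents)

def risk_group(agents):
    """Classify a regimen as expected short vs. prolonged neutropenia."""
    score = regimen_score(agents)
    label = ("prolonged neutropenia expected"
             if score >= HIGH_RISK_CUTOFF
             else "short neutropenia expected")
    return score, label

if __name__ == "__main__":
    for regimen in (["cisplatin", "gemcitabine"], ["doxorubicin", "docetaxel"]):
        score, label = risk_group(regimen)
        print(f"{' + '.join(regimen)}: score={score} -> {label}")
```

In the study itself, patients in the more aggressive score group took roughly two extra days to recover to ANC ≥0.5 and ≥1.0 x 10^9/l; a combined score of this kind is intended to flag that difference before it occurs.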
Instruction: Can type of school be used as an alternative indicator of socioeconomic status in dental caries studies? Abstracts: abstract_id: PUBMED:21457574 Can type of school be used as an alternative indicator of socioeconomic status in dental caries studies? A cross-sectional study. Background: Despite the importance of collecting individual data of socioeconomic status (SES) in epidemiological oral health surveys with children, this procedure relies on the parents as respondents. Therefore, type of school (public or private schools) could be used as an alternative indicator of SES, instead of collecting data individually. The aim of this study was to evaluate the use of the variable type of school as an indicator of socioeconomic status as a substitute for individual data in an epidemiological survey about dental caries in Brazilian preschool children. Methods: This study followed a cross-sectional design, with a random sample of 411 preschool children aged 1 to 5 years, representative of Catalão, Brazil. A calibrated examiner evaluated the prevalence of dental caries and parents or guardians provided information about several individual socioeconomic indicators by means of a semi-structured questionnaire. A multilevel approach was used to investigate the association among individual socioeconomic variables, as well as the type of school, and the outcome. Results: When all significant variables in the univariate analysis were used in the multiple model, only mother's schooling and household income (individual socioeconomic variables) presented significant associations with presence of dental caries, and the type of school was not significantly associated. However, when the type of school was used alone, children from public schools presented a significantly higher prevalence of dental caries than those enrolled in private schools. Conclusions: The type of school used as an alternative indicator for socioeconomic status is a feasible predictor for caries experience in epidemiological dental caries studies involving preschool children in the Brazilian context. abstract_id: PUBMED:25568628 Assessments of the socioeconomic status and diet on the prevalence of dental caries in school children in Central Bosnian Canton. Aim: The main aim of this research was to determine the influence of socioeconomic status and residence/living conditions on the status of oral health (e.g. health of mouth and teeth) in primary school students residing in Canton Central Bosnia. Methods: The study was designed as a cross-sectional study. Our research included a two-phase stratified random sample of 804 participants. A quantitative research method and a newly designed survey instrument were utilized in order to provide data on the oral health of the examined children. The alternative hypothesis was that "there were significant statistical differences between the levels of incidence of dental caries in comparison to the incidence in children of different socioeconomic status". Results: The chi-square value (χ² = 22.814, degrees of freedom (Df) = 8), the coefficient of contingency of 0.163, and the t-test statistic of -0.18334 showed that there were no statistically significant differences at the p < 0.05 level between primary school children from urban and rural areas. The obtained results showed that the caries indexes in elementary schools in Central Bosnia Canton were fairly uniform.
Research showed that there was a difference in attitudes towards regular dental visits, which correlated with the socio-educational structure of the children's families. Conclusion: According to the results, the socioeconomic status of patients had an effect on the occurrence of dental caries and on oral hygiene in both rural and urban areas; in both settings, greater parental unemployment brought with it a host of other factors that were directly or indirectly connected with the development of caries. abstract_id: PUBMED:33238892 Associations of socioeconomic status and lifestyle factors with dental neglect of elementary school children: the MEXT Super Shokuiku School Project. Background: Despite the fact that there are parents who do not take children with untreated dental caries to a dental clinic, few studies have been conducted to identify the responsible underlying social and family factors. The aim of this study was to investigate whether socioeconomic status and lifestyle factors are associated with dental neglect in elementary school children. Methods: This study was conducted in 2016 with 1655 children from the Super Shokuiku School Project in Toyama. Using Breslow's seven health behaviors, the survey assessed: the grade, sex, and lifestyle of the children; parental internet and game use and lifestyle; socioeconomic status. The odds ratios (OR) and 95% confidence intervals (CIs) for having untreated dental caries were calculated using logistic regression analysis. Results: Among the children participating, 152 (3.2%) had untreated dental caries. Among them, 53 (34.9%) had not been taken to a dental clinic despite the school dentist's advice. Dental neglect was significantly associated with children in higher grades (OR, 2.08; 95% CI, 1.14-3.78), father's Internet and game use ≥ 2 h/day (OR, 1.99; 95% CI, 1.02-3.88), not being affluent (OR, 2.78; 95% CI, 1.14-6.81), and non-engagement in afterschool activities (OR, 1.99; 95% CI, 1.10-3.62). Conclusions: Socioeconomic status was the strongest factor associated with dental neglect despite the fact that the children's medical expenses are paid in full by the National Health Insurance in Toyama, Japan. Future studies should investigate what factors prevent parents of non-affluent families from taking their children to dental clinics and how they can be socially supported to access adequate medical care. abstract_id: PUBMED:27433061 Influence of socioeconomic and working status of the parents on the incidence of their children's dental caries. Background And Objective: In the contemporary scenario of both parents employed, there seems to be limited focus on the dietary habits and dental health of their children. Hence, we attempted to correlate the socioeconomic and working status of the parents to the incidence of their children's dental caries. Materials And Methods: One thousand school children aged between 3 and 12 years were enrolled in the study. Socioeconomic and working status of their parents was obtained by a pretested questionnaire following which these children were examined for their dental caries status. The data collected were statistically analyzed using logistic regression analysis and calculation of odds ratio. Results: A significant correlation was observed between working status of the parents and dental caries status of their children.
Though the socioeconomic status and dental caries had a weak correlation, the odds ratio was high, indicating that children of lower socioeconomic status or from families with both parents employed were at a higher risk for dental caries. Conclusion: Efforts are needed to implement programs at the school level to enhance oral and dental health among children, as parental responsibilities toward this may be inadequate due to economic or time constraints. abstract_id: PUBMED:22928394 Relationship between dental status and family, school and socioeconomic level. The aim of this study was to analyze the association between the knowledge, attitudes, practices and formal schooling of parents and the oral health status in schoolchildren enrolled in educational institutions of different socioeconomic levels, using dental caries as the tracer disease. A convenience sample of 300 school children aged 6-14 years old and living in Mar del Plata city, Argentina, was composed according to income characterization in three strata: low, middle and high income. The children were grouped according to age (6-8, 9-11 and 12-14 years old). A validated questionnaire on knowledge, attitudes and oral health practices was administered to parents. Children were examined for dental and gingival status. DMFS, dmfs, plaque and gingival bleeding indexes were determined. Mean and SEM and/or frequency distribution of each variable were determined and differences assessed by ANOVA, chi-squared, Yates chi-squared and Scheffé tests (p < 0.05). Association among variables was tested by chi-squared test. The children from low income families showed significantly higher levels of oral disease in all the studied age groups. These families revealed significantly less healthy practices and attitudes along with a lower formal schooling level. Dental indicators were inversely and significantly associated with parents' knowledge, attitudes and formal schooling and with plaque index. Bleeding on probing was inversely and significantly associated with plaque index, parents' formal schooling and practices. Plaque index was found to be inversely associated with parents' knowledge, attitudes and formal schooling. Parents' knowledge, formal schooling, attitudes and health practices are intervening variables on oral health status of school children and an intervention field with potential impact for the oral component of health. abstract_id: PUBMED:11359204 Relationship among caries, gingivitis and fluorosis and socioeconomic status of school children. Objective: To determine the relationship between the socioeconomic status and dental caries, gingivitis and fluorosis among Brazilian school children. Methods: One thousand students aged 12 from private and public schools were examined. The indexes used were DMFT or S (Decayed, Missing and Filled Teeth or Surfaces Index), BI (Bleeding Index), and TFI (Thylstrup and Feyerskov Index). The socioeconomic level was determined according to family income and parents' educational level. Results: Parents' educational level data revealed a strong Pearson's correlation with income. No correlation was observed between dental caries prevalence, gingivitis and fluorosis and the studied socioeconomic variables. The DMFT in private schools was 1.54+/-2.02, and in public schools was 2.48+/-2.51. BI was 14.7%+/-12.7% in private schools and 21.7%+/-17.9% in public ones. The prevalence of fluorosis was 60.8% and 49.9%, respectively. These differences were statistically significant (p<0.05).
Individuals with a larger number of decayed surfaces and the ones with a larger percent of bleeding surfaces were seen in public schools. Conclusions: The socioeconomic level variables, income and parents' educational level, did not correlate with the events analyzed in the study. Other socioeconomic variables probably contributed to the observed differences between students from private and public schools. abstract_id: PUBMED:21091525 Associations between school deprivation indices and oral health status. Background: Despite an overall improvement in oral health status in several countries over the past decades, chronic oral diseases (COD) remain a public health problem, occurring mostly among children in the lower social strata. The use of publicly available indicators at the school level may be an optimal strategy to identify children at high risk of COD in order to organize oral health promotion and intervention in schools. Objective: To investigate whether school deprivation indices were associated with schoolchildren oral health status. Methods: This ecological study used a sample of 316 elementary public schools in the province of Quebec, Canada. Data from two sources were linked using school identifiers: (i) Two school deprivation indices (in deciles) from the Ministry of Education, a poverty index based on the low income cut-offs established by Statistics Canada and a socioeconomic environment index defined by the proportions of maternal under-schooling and of unemployed parents and (ii) Oral health outcomes from the Quebec Schoolchildren Oral Health Survey 1998-99 aggregated at the school level. These included proportions of children with dental caries and reporting oral pain. The relation between school deprivation indices and oral health outcomes was assessed with linear regression for dental caries experience and logistic regression for oral pain. Results: The mean DMF-S (mean number of decayed, missing and filled permanent teeth surfaces) by school was 0.7 (SD = 0.5); the average proportions of children with dental caries and reporting oral pain were 25.0% and 3.0%, respectively. The poverty index was not associated with oral health outcomes. For the socioeconomic environment index, dental caries experience was 6.9% higher when comparing schools in unfavourable socioeconomic environments to the most favourable ones [95% confidence interval (CI): 2.1, 11.7%]. Furthermore, the most deprived schools, as compared to least deprived ones, were almost three times as likely to have children reporting oral pain in the previous week. Conclusion: The school socioeconomic environment index was associated with oral health outcomes, and should be studied for its potential usefulness in planning school-based oral health promotion and screening strategies. abstract_id: PUBMED:29288402 Long-term effect of intensive prevention on dental health of primary school children by socioeconomic status. Objectives: Children in a German region took part in regular toothbrushing with fluoride gel during their time in primary school after having received a preventive program in kindergarten. The study aimed at determining the dental health of the students as a function of prevention in kindergarten and at school while taking into account their socioeconomic status and other confounders. 
Materials And Methods: The subjects were in six groups: groups 1 and 2, intensive prevention in kindergarten with and without fluoride gel at school; groups 3 and 4, basic prevention in kindergarten with and without fluoride gel at school; groups 5 and 6, no organized prevention in kindergarten with and without fluoride gel at school. Two dental examinations were performed for assessing caries experience and calculating caries increment from second grade (7-year-olds) to fourth grade (9-year-olds). A standardized questionnaire was used to record independent variables. To compare caries scores and preventive measures of various subgroups, non-parametric tests and a binary logistic regression analysis were performed. Results: A significant difference was found in the mean decayed, missing, and filled tooth/teeth (DMFT) depending on socioeconomic status (no prevention in kindergarten, fluoride gel at school in children with low SES: DMFT = 0.47 vs. DMFT = 0.18 in children with high SES; p = 0.023). Class-specific differences were no longer visible among children who had taken part in an intensive preventive program combining daily supervised toothbrushing in kindergarten and application of fluoride gel in school. Conclusions: Early prevention, focusing on professionally supported training of toothbrushing in kindergarten and at school, has a positive effect on dental health and is able to reduce class-specific differences in caries distribution. Clinical Relevance: Early training of toothbrushing and fissure sealing of first permanent molars are the most important factors for the dental health of primary school children. abstract_id: PUBMED:31334165 Effect of socioeconomic status on dental caries during pregnancy. Background And Objective: It is generally agreed that people with low socioeconomic status have a significantly worse oral and general health compared to people with higher socioeconomic status. The aim of the study was to find out the role of socioeconomic status of pregnant women on their oral health by evaluating the dental caries risk factor, the salivary Streptococcus mutans count and DMFT index. Materials And Methods: A total of 50 first time pregnant women were randomly sampled and were divided into different socioeconomic strata following the Kuppuswamy socioeconomic status scale. DMFT index and microbiological assessment of dental caries risk factor parameter, the salivary Streptococcus mutans colony forming units, were carried out during the three trimesters and postpartum period. The observations were compared with 50 non-pregnant women of same age group. Results: Of the randomly sampled subjects, majority of the pregnant women (28/50) of the study group were of middle/lower middle class, 11 pregnant women were of upper middle class, and 11 were of the lower/upper lower class. When followed in their three trimesters and postpartum period, statistically significant increase in the salivary Streptococcus mutans colony forming units was noted in 2nd trimester, 3rd trimester and postpartum period (P value < 0.05) of all pregnant women. Comparison of the caries risk factor between the pregnant women of three socioeconomic strata, however, revealed that no particular class was at a higher risk for dental caries. Conclusion: This study gives an insight into the possible changing trends towards awareness for oral hygiene in various socioeconomic strata of pregnant women. 
abstract_id: PUBMED:30126170 Socioeconomic Factors and Caries in People between 19 and 60 Years of Age: An Update of a Systematic Review and Meta-Analysis of Observational Studies. This study is aimed to perform an update of a systematic review and meta-regression to evaluate the effect modification of the socioeconomic indicators on caries in adults. We included studies that associated social determinants with caries, with no restriction of year and language. The Newcastle-Ottawa Scale was used to evaluate the risk of bias. With regard to the meta-analysis, statistical heterogeneity was evaluated by I², and the random effect model was used when it was high. A subgroup analysis was conducted for socioeconomic indicators, and a meta-regression was performed. Publication bias was assessed through Egger's test. Sixty-one studies were included in the systematic review and 25 were included in the meta-analysis. All of the studies were published between 1975 and 2016. The most frequent socioeconomic indicators were schooling, income, and socioeconomic status (SES). In the quantitative analysis, the DMFT (decayed, missing, filled teeth) variation was attributed to the studies' heterogeneity. The increase of 10.35 units in the proportion of people with lower SES was associated with an increase of one unit in DMFT, p = 0.050. The findings provide evidence that populations with the highest proportions of people with low SES are associated with a greater severity of caries. The results suggest the need for actions to reduce the inequalities in oral health (PROSPERO [CRD42017074434]). Answer: Yes, the type of school can be used as an alternative indicator of socioeconomic status in dental caries studies. A study conducted in Brazil evaluated the use of the variable type of school as an indicator of socioeconomic status as a substitute for individual data in an epidemiological survey about dental caries in preschool children. The results showed that when the type of school was used alone, children of public school presented significantly higher prevalence of dental caries than those enrolled in private schools, suggesting that the type of school is a feasible predictor for caries experience in epidemiological dental caries studies involving preschool children in the Brazilian context (PUBMED:21457574). This finding is consistent with other research that has found associations between socioeconomic status and dental caries. For example, a study in the Central Bosnian Canton found that socioeconomic status had an effect on the occurrence of dental caries and oral hygiene in patients in relation to rural and urban areas (PUBMED:25568628). Another study in Japan found that socioeconomic status was the strongest factor associated with dental neglect, even though children's medical expenses are paid in full by the National Health Insurance (PUBMED:33238892). Similarly, a study correlating the socioeconomic and working status of parents to the incidence of their children's dental caries found that children of lower socioeconomic status or families with both parents employed were at a higher risk for dental caries (PUBMED:27433061). Furthermore, research in Argentina showed that children from low-income families had significantly higher levels of oral disease and that dental indicators were inversely and significantly associated with parents' knowledge, attitudes, and formal schooling (PUBMED:22928394). 
Another study found no correlation between dental caries prevalence, gingivitis, and fluorosis and socioeconomic variables, but observed differences between students from private and public schools, suggesting that other socioeconomic variables may contribute to these differences (PUBMED:11359204). In summary, while individual socioeconomic indicators are important, the type of school can serve as a useful alternative indicator of socioeconomic status in dental caries studies, as it reflects underlying socioeconomic disparities that are associated with dental health outcomes.
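The studies cited in this answer quantify the school-type association mainly through prevalence comparisons and odds ratios. As a minimal illustration of how such an unadjusted comparison might be computed, the Python sketch below derives an odds ratio and a Wald 95% confidence interval from a hypothetical 2x2 table of caries by school type; the counts, the function name, and the choice of a Wald interval are assumptions made for illustration and are not taken from any of the cited studies.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and Wald 95% CI from a 2x2 table.

    Table layout (counts are hypothetical, not taken from the cited studies):
        a = caries present, public school      b = caries absent, public school
        c = caries present, private school     d = caries absent, private school
    """
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, (lo, hi)

# Hypothetical counts for illustration only.
or_, (lo, hi) = odds_ratio_ci(a=120, b=180, c=60, d=190)
print(f"Unadjusted OR (public vs. private) = {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```

In the cited surveys, a crude comparison of this kind would normally be followed by multilevel or multivariable models that adjust for individual indicators such as maternal schooling and household income.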
Instruction: Low family income and food insufficiency in relation to overweight in US children: is there a paradox? Abstracts: abstract_id: PUBMED:11576013 Low family income and food insufficiency in relation to overweight in US children: is there a paradox? Objectives: To investigate associations between family income, food insufficiency, and being overweight in US children aged 2 to 7 and 8 to 16 years, to discuss mechanisms that may explain these associations, and to propose design and data requirements for further research that could effectively examine this issue. Methods: Data from the Third National Health and Nutrition Examination Survey were analyzed. Children were classified as food insufficient if the family respondents reported that their family sometimes or often did not get enough food to eat. The prevalence of overweight was compared by family income category and food sufficiency status within age-, sex-, and race-ethnic-specific groups. Odds ratios for food insufficiency are reported, adjusted for family income and other potential confounding factors. Results: Among older non-Hispanic white children, children in families with low income were significantly more likely to be overweight than children in families with high income. There were no significant differences by family income for younger non-Hispanic white children, non-Hispanic black children, or Mexican American children. After adjusting for confounding variables, there were no differences in overweight by food sufficiency status, except that younger food-insufficient girls were less likely to be overweight, and non-Hispanic white older food-insufficient girls were more likely to be overweight than food-sufficient girls (P<.10). Conclusion: Further research to evaluate whether food insecurity causes overweight in American children requires longitudinal quantitative and in-depth qualitative methods. abstract_id: PUBMED:16777308 Family food insufficiency is related to overweight among preschoolers. This paper studies the relationship between family food insufficiency and being overweight in a population-based cohort of preschool children (n=2103) using data from the Longitudinal Study of Child Development in Québec (1998-2002) (LSCDQ). Family food insufficiency status was derived when children were 1.5 years of age (from birth to 1.5 years) and at 4.5 years of age (from 3.5 to 4.5 years). Children's height and weight were measured at home at 4.5 years. Overweight was defined according to the US CDC sex- and age-specific growth charts and Cole's criteria. Statistical analyses were done with SAS (version 8.2). In multivariate analyses, mean body mass index (BMI) was higher for children from food insufficient families compared to children from food sufficient families, even when important factors associated with BMI, such as child's birth weight, parental BMI, maternal education, and family income sufficiency were considered. We did not report any gender effects in the multivariate analyses. The presence of family food insufficiency at some point during preschool years more than tripled (OR 3.4, 95% CI 1.5-7.6) the odds for obesity using the Cole criteria, and doubled (OR 2.0, 95% CI 1.1-3.6) the odds for overweight at 4.5 years using the CDC growth curves indicator. We observed an interaction between birth weight and family food insufficiency in relation to being overweight at 4.5 years. 
Low-birth-weight children living in a household that experienced food insufficiency during preschool years are at higher risk of overweight at 4.5 years. Given this important finding, supportive interventions targeting low-income and food insufficient families, including pregnant women, are recommended for preventing overweight and obesity among their children. abstract_id: PUBMED:24768937 Associations between family food behaviors, maternal depression, and child weight among low-income children. Although low-income children are at greater risk for overweight and obesity than their higher income counterparts, the majority of poor children are not overweight. The current study examined why such variation exists among diverse young children in poor families. Cross-sectional data were collected on 164 low-income, preschool aged children and their mothers living in two Rhode Island cities. Over half of the sample was Hispanic (55%). Mothers completed measures of family food behaviors and depression while trained assistants collected anthropometric data from children at seven day care centers and a Supplemental Nutrition Assistance Program outreach project. Multivariate analysis of covariance revealed that higher maternal depression scores were associated with lower scores on maternal presence when child eats (P < .05), maternal control of child's eating routines (P < .03), and food resource management skills (P < .01), and with higher scores on child control of snacking (P < .03) and negative mealtime practices (P < .05). Multiple regression results revealed that greater maternal presence whenever the child ate was significantly associated with lower child BMI z scores (β = .166, P < .05). Logistic regression analyses indicated that higher scores on food resource management skills reduced the odds of child overweight (odds ratios = .72-.95, P < .01). Maternal depression did not modify the relationship between family food behaviors and child weight. Overall, caregiver presence whenever a child eats, not just at meals, and better parental food resource management skills may promote healthier weights in low-income preschoolers. Further research is needed to identify the mechanisms that connect caregiver presence and food resource management skills to healthier weights for this age group. abstract_id: PUBMED:34402203 Food insecurity is associated with higher food responsiveness in low-income children: The moderating role of parent stress and family functioning. Background: Food insecurity (FI) may increase the odds for childhood obesity, yet little is known about the mechanism explaining this relationship. Parents experience greater psychosocial stress in the context of FI. In these environments, children from FI households may exhibit different appetitive behaviours. Objectives: To examine associations between FI and appetitive behaviours in children (3-5 years) and to explore whether social, emotional and structural properties of the home environment moderate this relationship. Methods: In a low-income sample of 504 parent-child dyads, parents completed the household food security module and the Children's Eating Behavior Questionnaire. A subsample (n = 361) self-reported perceived stress, depressive symptoms, household chaos and family functioning. Children were categorized as food secure, household FI and child FI. 
Results: Food responsiveness (LSmeans ± SE; child FI: 2.56 ± 0.13; food secure: 2.31 ± 0.10, p < 0.05) and emotional overeating (LSmeans ± SE; child FI: 1.69 ± 0.10; food secure: 1.48 ± 0.08, p < 0.05) were higher among children in the child FI group compared to the food secure group. Child FI was only associated with higher food responsiveness among children of parents reporting high levels of perceived stress (p = 0.04) and low levels of family functioning (p = 0.01). There were no differences in food responsiveness by food security status at mean or low levels of perceived stress or at mean or high levels of family functioning (p > 0.05). Conclusions: Child FI may contribute to obesity risk through differences in appetitive behaviours. For low-income families, stress management and improving family dynamics may be important factors for interventions designed to improve children's appetitive behaviours. abstract_id: PUBMED:24462491 Food preparation supplies predict children's family meal and home-prepared dinner consumption in low-income households. Frequent family meals and home food preparation are considered important for children's nutritional health and weight maintenance. This cross-sectional study tested whether these parent-driven behaviors are related to the availability of food preparation supplies in low-income urban households. Caregivers of children ages 6-13 provided information on family meal frequency, child consumption of home-prepared dinners, household food insecurity, and attitudes towards cooking. Researchers used a newly developed Food Preparation Checklist (FPC) to assess the availability of 41 food preparation supplies during a physical audit of the home environment. Caregivers and children provided anthropometric measurements and jointly reported on child dietary intake. In ordinal logistic regression models, greater home availability of food preparation supplies was associated with more frequent family meals and child consumption of home-prepared dinners. Associations were independent of household financial strain, food insecurity, caregiver attitudes toward cooking, and sociodemographic characteristics. Fewer food preparation supplies were available in households characterized by greater food insecurity, lower income, and negative caregiver attitudes towards cooking, but did not differ by child or caregiver weight status. As in prior studies, more frequent family meals and consumption of home-prepared dinners were associated with healthier child dietary intake in several areas. We conclude that food preparation supplies are often limited in the most socioeconomically disadvantaged households, and their availability is related to the frequency with which children consume family meals and home-prepared dinners. The potential role of food preparation supplies as contributors to socioeconomic disparities in child nutritional health and obesity deserves further study. abstract_id: PUBMED:38182053 Understanding family food purchasing behaviour of low-income urban UK families: An analysis of parent capability, opportunity and motivation. Objective: Family food purchasing decisions have a direct influence on children's food environments and are powerful predictors of obesity and dietary quality. This study explored parents' capability, opportunities, and motivations regarding food purchasing for their families, as well as barriers and facilitators of healthy food purchasing behaviour, in an ethnically diverse, low-income area. 
Design: Semi-structured interviews with parents of under-11-year-old children were conducted to investigate family food purchases, both when eating inside and outside the home. Interviews were analysed using framework analysis mapped against the COM-B model (Michie et al., 2011). Setting: An ethnically diverse, low-income area in Birmingham, UK. Participants: Sixteen parents (13F, 3M) of under-11-year-old children. 75% Pakistani, 12.5% White British, 6.3% White and Black Caribbean, and 6.3% "Other". Results: Four themes were identified: i) I know how to provide healthy meals for my family, ii) Family food purchase decisions are complex, iii) I want what they are eating and iv) Healthy eating is important but eating outside of the home is a treat. The barriers to healthy family food purchasing were predominantly at family and community levels, including time, cost, and both parents' and children's food enjoyment and preferences. Facilitators of healthy family food purchasing were primarily identified at an individual level, with high levels of capability and motivation for healthy food provision. Conclusions: Attempts to enhance parental capability to improve healthy food purchasing through nutrition education are not likely to be a useful intervention target in this group. Emphasis on enjoyment, palatability and value for money could be key to increasing parental motivation to purchase healthy family foods. abstract_id: PUBMED:28034737 Strategies used by overweight and obese low-income mothers to feed their families in urban Brazil. Objective: To describe and compare strategies adopted by overweight and obese low-income mothers living in different vulnerable contexts to deal with food constraints and feed their families. Design: Qualitative in-depth interviews. Data were analyzed with exploratory content analysis and the number of segments per theme was used to compare neighborhoods. Setting: Three low-income neighborhoods in Santos, Brazil. Participants: A purposive sample of 21 overweight or obese mothers. Results: We identified three main types of strategies, namely, food acquisition, cooking, and eating. Food acquisition included social support and food-sourcing strategies. Social support strategies ranged from macro (governmental programs) to micro (family) levels. Food-sourcing strategies involved price research and use of credit to buy foods. Cooking approaches included optimizing food (e.g., adding water to beans), avoiding wastefulness, and substitutions (e.g., using water instead of milk when making cakes). Eating themes ranged from lack of quantity to lack of quality. Strategies to deal with the lack of food were affected by family dynamics, such as prioritizing provision of fruits to children. Food choices (e.g., low consumption of fruits and high consumption of fatty meats) derived from these strategies may help promote overweight and obesity. Furthermore, for participants, financial constraints were perceived as barriers to following nutritionists' recommendations and weight loss. Conclusions: This study highlights the barriers that low-income women face in adopting a healthy diet and sheds light on the importance of the symbolic value of food, even in the context of food insecurity. Finally, it suggests that environmental aspects could increase the accessibility of fruits and vegetables. These findings could be used to inform the planning and implementation of interventions.
abstract_id: PUBMED:21359162 Characteristics of prepared food sources in low-income neighborhoods of Baltimore City. The food environment is associated with obesity risk and diet-related chronic diseases. Despite extensive research conducted on retail food stores, little is known about prepared food sources (PFSs). We conducted an observational assessment of all PFSs (N = 92) in low-income neighborhoods in Baltimore. The most common PFSs were carry-outs, which had the lowest availability of healthy food choices. Only a small proportion of these carry-outs offered healthy sides, whole wheat bread, or entrée salads (21.4%, 7.1%, and 33.9%, respectively). These findings suggest that carry-out-specific interventions are necessary to increase healthy food availability in low-income urban neighborhoods. abstract_id: PUBMED:25288488 Outcome of a food observational study among low-income preschool children participating in a family-style meal setting. Introduction: In the United States, one out of every seven low-income children between the ages of 2 and 5 years is at risk for overweight and obesity. Formative research was conducted to determine if preschool children participating in family-style meals consumed the minimum food servings according to U.S. Department of Agriculture dietary guidelines. Method: Participants were 135 low-income children aged 3 to 4 years who attended an urban child care center. Participants' parents completed a Family Demographic Questionnaire to provide information on race/ethnicity, parents' level of education, and household income. Direct observation of children's food and beverage consumption during school breakfast and lunch was collected over 3 consecutive days. Dietary data were assessed using the Nutrition Data System for Research software. Height and weight measurements were obtained to determine risk for obesity. Descriptive statistics were reported by using the Statistical Package for the Social Sciences Version 16. Results: Among 135 participants, 98% identified as Mexican American, 75% lived at or below the poverty level, and 24% reported a family history of diabetes. Children consumed less than half of the calories provided between breakfast and lunch and did not consume the minimum recommended dietary food servings. Despite the poor dietary intake, physical measurement findings showed 25% obesity prevalence among study participants. Conclusions: Findings support the need for evidence-based early childhood obesity prevention programs that provide behavior change opportunities for children, their families, teachers, and menu planners. Family-style meal settings are ideal opportunities for implementing nutrition education strategies to prevent early childhood obesity. abstract_id: PUBMED:31071477 Family income and nutrition-related health: Evidence from food consumption in China. With increasing family income, the prevalence of overweight has risen and become a serious threat to individual health and a major public health challenge in China. This study attempts to shed light on the mechanism of income impact on the adult health outcomes of BMI and overweight through five potential channels: nutritional intakes, dietary diversity, dietary knowledge, food preference, and dining out. Using the panel data from the China Health and Nutrition Survey (CHNS), we investigate the causal relationship between income and health by considering the changes in the minimum wage as a valid instrument to address the endogeneity problem of income in health estimation.
The results indicate that rising income increases the adults' BMI and the propensity to be overweight; approximately 15.58% and 16.20% of income impact on BMI and overweight could be explained by the five channels considered, respectively. Among the five channels, dietary diversity plays the most significant role in explaining the income impact. We observe significant heterogeneity in income-BMI gradients across various income quantiles and subsamples. Specifically, income-BMI gradients tend to increase along with income percentiles, and income has a significantly positive impact on BMI and overweight for the male sample but it shows no significant impact for the female sample. Answer: The relationship between low family income, food insufficiency, and overweight in US children is complex and may appear paradoxical. Research has shown that among older non-Hispanic white children, those in families with low income were significantly more likely to be overweight than children in families with high income. However, there were no significant differences by family income for younger non-Hispanic white children, non-Hispanic black children, or Mexican American children. After adjusting for confounding variables, there were no differences in overweight by food sufficiency status, except that younger food-insufficient girls were less likely to be overweight, and non-Hispanic white older food-insufficient girls were more likely to be overweight than food-sufficient girls (PUBMED:11576013). Another study found that family food insufficiency was related to overweight among preschoolers. In multivariate analyses, mean body mass index (BMI) was higher for children from food insufficient families compared to children from food sufficient families. The presence of family food insufficiency at some point during preschool years more than tripled the odds for obesity and doubled the odds for overweight at 4.5 years (PUBMED:16777308). These findings suggest that while there is a relationship between low income, food insufficiency, and overweight, it is not straightforward and may vary by age, ethnicity, and other factors. The paradox may arise from the fact that food insufficiency does not always lead to underweight as might be expected; instead, it can be associated with overweight, potentially due to the consumption of cheaper, high-calorie, nutrient-poor foods that contribute to weight gain. Further research is needed to fully understand the mechanisms behind this relationship and to identify effective interventions for preventing overweight and obesity among children in low-income and food-insufficient families.
Instruction: Malignant biliary obstructions: can we predict immediate postprocedural cholangitis after percutaneous biliary drainage? Abstracts: abstract_id: PUBMED:23529668 Malignant biliary obstructions: can we predict immediate postprocedural cholangitis after percutaneous biliary drainage? Purpose: Percutaneous transhepatic biliary drainage (PTBD) is performed for the palliation of malignant biliary obstructions. The purpose of this study was to identify factors related to the occurrence of immediate cholangitis as a complication after PTBD. Methods: We retrospectively assessed 409 apparently stable patients with malignant biliary obstruction who underwent PTBD between January 2008 and December 2010. New-onset cholangitis was defined as fever (>38 °C) that arose within 24 h after the intervention. Variables significantly associated with the occurrence of immediate cholangitis were selected and their odds ratio and 95 % confidence interval were calculated using logistic regression analysis. Results: There were 106 (25.9 %) cases of immediate cholangitis following PTBD, and among those 106 cases, 45 (42.5 %) had sepsis. In multivariate analysis, history of cholangitis (OR 4.7, 95 % CI 2.45-9.18), biliary drainage within 6 months (OR 2.3, 95 % CI 1.26-4.15), CRP ≥ 5 mg/dL (OR 2.2, 95 % CI 1.23-4.03), and serum albumin <3 g/dL (OR 1.9, 95 % CI 1.023-3.40) were predictive of immediate cholangitis after PTBD for malignant biliary obstructions. Conclusions: Cholangitis is a common immediate complication after PTBD. Patients should always be given prophylactic antibiotics before the drainage procedures. The results of this study could highlight the patients who require closer follow-up in order to make PTBD a safer procedure. abstract_id: PUBMED:35310153 Conversion of percutaneous transhepatic biliary drainage to endoscopic ultrasound-guided biliary drainage. Introduction: Percutaneous transhepatic biliary drainage (PTBD) is a useful alternative treatment for malignant biliary obstruction (MBO) when patients have difficulty with endoscopic transpapillary drainage. We examined the feasibility of conversion of PTBD to endoscopic ultrasound-guided biliary drainage (EUS-BD) in patients with MBO unsuited for endoscopic transpapillary biliary drainage. Methods: This retrospective study included patients who underwent conversion of PTBD to EUS-BD between March 2017 and December 2019. Eligible patients had unresectable MBO, required palliative biliary drainage, and were not suited for endoscopic transpapillary drainage. Initial PTBD had been performed for acute cholangitis or obstructive jaundice in all patients. EUS-BD was performed following improvements in cholangitis. Sixteen patients underwent conversion of PTBD to EUS-BD. We evaluated technical success, procedure time, clinical success (defined as subsequent external catheter removal), adverse events (AEs), time to recurrent biliary obstruction (TRBO), and re-intervention rates. Results: Technical success was achieved in all patients (100%). The median procedure time was 45.0 minutes (interquartile range [IQR] 30.0-50.0 minutes). Clinical success was achieved in all patients (100%). There were mild early AEs in two patients (12.5%) (acute cholangitis: 1, bile peritonitis: 1), which improved with antibiotic administration alone. Recurrent biliary obstruction (RBO) occurred in six patients (37.5%). Kaplan-Meier analysis revealed a 50% TRBO of 95 days (IQR 41-246 days).
Endoscopic treatment was possible in all RBO cases, and repeat PTBD was not required. Conclusions: Conversion of PTBD to EUS-BD for the management of MBO is both feasible and safe. This approach is expected to be widely practiced at centers with little experience in EUS-BD. abstract_id: PUBMED:35407477 Critically-Ill Patients with Biliary Obstruction and Cholangitis: Bedside Fluoroscopic-Free Endoscopic Drainage versus Percutaneous Drainage. Severe acute cholangitis is a life-threatening medical emergency. Endoscopic biliary drainage (EBD) or percutaneous transhepatic biliary drainage (PTBD) is usually used for biliary decompression. However, it can be risky to transport a critical patient to the radiology unit. We aimed to compare clinical outcomes between bedside, radiation-free EBD and fluoroscopic-guided PTBD in patients under critical care. Methods: A retrospective study was conducted on critically ill patients admitted to the intensive care unit with biliary obstruction and cholangitis from January 2011 to April 2020. Results: A total of 16 patients receiving EBD and 31 patients receiving PTBD due to severe acute cholangitis were analyzed. In the EBD group, biliary drainage was successfully conducted in 15 (93.8%) patients. Only one patient (6.25%) encountered post-procedure pancreatitis. The 30-day mortality rate showed no difference between the 2 groups (32.72% vs. 31.25%, p = 0.96). Based on multivariate analysis, independent prognostic factors for the 30-day mortality were a medical history of malignancy other than pancreatobiliary origin (HR: 5.27, 95% confidence interval [CI]: 1.01-27.57) and emergent dialysis (HR: 7.30, 95% CI: 2.20-24.24). Conclusions: Bedside EBD is safe and as effective as percutaneous drainage in critically ill patients. It provides lower risks in patient transportation but does require experienced endoscopists to perform the procedure. abstract_id: PUBMED:25040581 Comparison of percutaneous transhepatic biliary drainage and endoscopic biliary drainage in the management of malignant biliary tract obstruction: a meta-analysis. Background And Aim: To compare percutaneous transhepatic biliary drainage (PTBD) and endoscopic biliary drainage (EBD) for management of malignant biliary tract obstruction (MBTO). Methods: PubMed, Google Scholar, and the Cochrane database were searched to 31 December 2013. Main outcome measurements were therapeutic success rate, 30-day mortality rate, overall complications, cholangitis, and pancreatitis. Results: Eight studies (five retrospective and three randomized controlled trials) were included in the meta-analysis with a total of 692 participants. Combined odds ratio (OR) = 2.18 revealed no significant difference in therapeutic success between PTBD and EBD (95% confidence interval [CI] = 0.73-6.47, P = 0.162). However, after excluding two studies that appeared to be outliers, PTBD exhibited a better therapeutic success rate than EBD (pooled OR = 4.45, 95% CI = 2.68-7.40, P < 0.001). Patients who underwent PTBD were 0.55 times as likely to have cholangitis as those who underwent EBD, whereas the overall complication rate, pancreatitis rate, and 30-day mortality were similar between the two procedures. Conclusions: PTBD may be associated with a better therapeutic success rate and lower incidence of cholangitis than EBD, but the overall complication rate, pancreatitis rate, and 30-day mortality of the two procedures are similar. abstract_id: PUBMED:6976708 Percutaneous biliary drainage in the management of biliary sepsis.
Percutaneous biliary drainage was performed in 18 patients with biliary sepsis due to acute obstructive cholangitis and postoperative complications. Internal drainage could be established in 78% of patients, and 22% were managed on external drainage. Nine patients were managed on long-term internal drainage, six underwent uneventful surgery after successful percutaneous decompression, and three died as a result of septic shock. Percutaneous biliary drainage procedures can be lifesaving in biliary sepsis. Once infection and hyperbilirubinemia are controlled, rational therapy plans can be formulated on the basis of the anatomy and natural history of the underlying disease. abstract_id: PUBMED:7360943 Percutaneous transhepatic biliary drainage: technique, results, and applications. Internal catheter drainage was achieved in 46 of 62 consecutive patients (71.4%) undergoing percutaneous transhepatic biliary drainage (PTHBD). External drainage was achieved in 12 patients (19.3%). Thus the overall success rate was 58 of 62 (93.5%). Postprocedural bilirubin levels returned to normal in 14 cases (22.5%), while bilirubin declines greater than 10 mg occurred in half the cases. Complications related to procedures occurred in three patients, although no deaths resulted. Late episodes of cholangitis were common (9/62 or 14.5%). Postprocedural care of the biliary drainage catheter included evaluation and management of acute biliary sepsis, persistent hyperbilirubinemia, electrolyte depletion, as well as catheter occlusion, bleeding, and dislodgement. PTHBD offers an effective new radiological alternative to surgical therapy of biliary obstruction. abstract_id: PUBMED:3771157 Infectious complications of percutaneous biliary drainage. The infectious complications of percutaneous biliary drainage were reviewed in 132 patients with obstructive jaundice. Cholangitic or septic episodes occurred more frequently in patients with malignant (54%) than in those with benign (22%) disease, and frequently were not related to catheter insertions or manipulations. The frequency and mechanisms of bacterial colonization of bile and blood in patients with obstructive jaundice before and after biliary drainage are reviewed. The significant morbidity and mortality related to postdrainage infectious episodes is stressed, and the efficacy of antibiotic prophylaxis is discussed. The significant risks and complications of percutaneous biliary drainage must be considered prior to catheter placement, particularly in the most debilitated patients. abstract_id: PUBMED:30589030 Multifactorial analysis of biliary infection after percutaneous transhepatic biliary drainage treatment of malignant biliary obstruction. Background: The symptoms of patients with malignant biliary obstruction (MBO) could be effectively alleviated with percutaneous transhepatic biliary drainage (PTBD). Postoperative infections were considered challenging issues for clinicians. In this study, the risk factors of biliary infection in patients after PTBD were analyzed. Methods: From July 2003 to September 2010, 694 patients with MBO received PTBD treatment. Bile specimens were also collected during PTBD. All relevant information and results were collected, including gender, age, obstruction time, types of primary tumor, sites of obstruction, drainage style, tumor stage, hemoglobin, phenotype of peripheral blood monocyte (Treg), total bilirubin, direct bilirubin, albumin, Child-Pugh score, and results of bile bacterial culture.
Results: For the 694 patients involved in this study, 485 were male and 209 were female, with a mean age of 62 years (range, 38-78 years). For the bile culture, 57.1% of patients (396/694) were negative and 42.9% (298/694) were positive; 342 strains of microorganisms were identified. The risk factors of biliary system infection after PTBD included: age (χ2 = 4.621, P = 0.032), site of obstruction (χ2 = 17.450, P < 0.001), drainage style (χ2 = 14.452, P < 0.001), tumor stage (χ2 = 4.741, P = 0.029), hemoglobin (χ2 = 3.914, P = 0.048), Child-Pugh score (χ2 = 5.491, P = 0.019), phenotype of peripheral blood monocyte (Treg) (χ2 = 5.015, P = 0.025), and results of bile bacterial culture (χ2 = 65.381, P < 0.001). Multivariate analysis suggested that high-risk factors were drainage style, Child-Pugh score, and results of bile culture. Conclusions: The risk factors of biliary infection after PTBD included: age, site of obstruction, drainage style, tumor stage, hemoglobin, Child-Pugh score, phenotype of peripheral blood monocyte (Treg), and results of bile culture. It was further concluded that drainage style, Child-Pugh score, and results of bile culture were independent risk factors. abstract_id: PUBMED:35535076 Role of Percutaneous Transhepatic Biliary Drainage as an Adjunct to Endoscopic Retrograde Cholangiopancreatography. Background: There is limited literature on the role of percutaneous transhepatic biliary drainage (PTBD) as an adjunct to endoscopic retrograde cholangiopancreatography (ERCP). This study evaluates the role of PTBD in patients with failed ERCP or post-ERCP cholangitis. Methods: Retrospective evaluation of clinical and intervention records of patients with biliary obstruction referred for PTBD following failed ERCP or post-ERCP cholangitis was performed. The cause of biliary obstruction, baseline serum bilirubin, white blood cell (WBC) count, serum creatinine, and procalcitonin were recorded. Technical success and clinical success (resolution of cholangitis, reduction in bilirubin levels, WBC count, creatinine, and procalcitonin) were assessed. Results: Sixty-three patients (35 females, mean age 51.4 years) were included. Indications for ERCP included malignant causes in 47 (74.6%) cases and benign causes in 16 (25.4%) cases. Indications for PTBD were failed ERCP in 21 (33.3%) and post-ERCP cholangitis in 42 (66.7%). PTBD was technically successful in all patients. Clinical success rate was 68.2% in the overall group. Mild hemobilia was noted in five (7.9%) patients. There were no major complications or PTBD-related mortality. Cholangitis and acute kidney injury resolved following PTBD in 63.1% and 80% of the patients, respectively. Total serum bilirubin reduced by 47.8% and 69.4% after one week and one month of the PTBD, respectively. The average fall in procalcitonin was 5.17 ng/mL after one week of the PTBD. Conclusion: PTBD is an important adjunctive drainage procedure in patients with ERCP failure or post-ERCP cholangitis. abstract_id: PUBMED:26178753 Endoscopic ultrasound-guided biliary drainage of hilar biliary obstruction. Only 20-30% of patients with hilar cholangiocarcinoma (CC) are candidates for potentially curative resection. However, even after curative (R0) resection, these patients have a disease recurrence rate of up to 76%. The prognosis of hilar cholangiocarcinoma (CC) is limited by tumor spread along the biliary tree leading to obstructive jaundice, cholangitis, and liver failure.
Therefore, palliative biliary drainage may be a major goal for patients with hilar CC. Endoscopic retrograde cholangiopancreatography (ERCP) with stent placement is an established method for palliation of patients with malignant biliary obstruction. However, there are patients for whom endoscopic stent placement is not possible because of failed biliary cannulation or tumor infiltration that limits transpapillary access. In this situation, percutaneous transhepatic biliary drainage (PTBD) is an alternative method. However, PTBD has a relatively high rate of complications and is frequently associated with patient discomfort related to external drainage. Endoscopic ultrasound-guided biliary drainage has therefore been introduced as an alternative to PTBD in cases of biliary obstruction when ERCP is unsuccessful. In this review, the indications, technical tips, outcomes, and the future role of EUS-guided intrahepatic biliary drainage, such as hepaticogastrostomy or hepaticoduodenostomy, for hilar biliary obstruction will be summarized. Answer: Yes, it is possible to predict immediate postprocedural cholangitis after percutaneous biliary drainage (PTBD) in patients with malignant biliary obstructions. A study identified several factors that were significantly associated with the occurrence of immediate cholangitis following PTBD. These factors included a history of cholangitis, biliary drainage within the previous 6 months, a C-reactive protein (CRP) level of 5 mg/dL or higher, and a serum albumin level of less than 3 g/dL. The odds ratios and 95% confidence intervals for these predictive factors were calculated using logistic regression analysis. Specifically, a history of cholangitis had an odds ratio (OR) of 4.7, biliary drainage within 6 months had an OR of 2.3, CRP ≥ 5 mg/dL had an OR of 2.2, and serum albumin <3 g/dL had an OR of 1.9. These results suggest that patients with these risk factors require closer follow-up to make PTBD a safer procedure, and prophylactic antibiotics should be given before the drainage procedures to mitigate the risk of cholangitis (PUBMED:23529668).
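The answer above reports adjusted odds ratios from a logistic regression model. The hedged Python sketch below shows one way such coefficients could be turned into a rough individual risk estimate by combining the published ORs multiplicatively on the odds scale; the assumed factor-free baseline risk and the no-interaction simplification are illustrative choices of this sketch, not findings of the original study (which reports only the overall 25.9% cholangitis rate).

```python
import math

# Adjusted odds ratios reported in PUBMED:23529668 for immediate post-PTBD cholangitis.
ODDS_RATIOS = {
    "history_of_cholangitis": 4.7,
    "biliary_drainage_within_6_months": 2.3,
    "crp_ge_5_mg_dl": 2.2,
    "albumin_lt_3_g_dl": 1.9,
}

def estimated_risk(risk_factors, baseline_risk=0.10):
    """Rough post-PTBD cholangitis risk for a patient with the given factors.

    baseline_risk is a hypothetical risk for a patient with none of the factors
    (the study reports only the overall 25.9% rate, not a factor-free baseline).
    Multiplying ORs assumes no interaction between factors, which is a
    simplification made purely for illustration.
    """
    odds = baseline_risk / (1 - baseline_risk)
    for factor in risk_factors:
        odds *= ODDS_RATIOS[factor]
    return odds / (1 + odds)

patient = ["history_of_cholangitis", "crp_ge_5_mg_dl"]
print(f"Estimated risk: {estimated_risk(patient):.0%}")
```

A sketch like this is only a back-of-the-envelope aid for thinking about which patients merit closer attention; the study itself supports the qualitative conclusion that patients with these factors warrant prophylactic antibiotics and closer follow-up.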
Instruction: Laparoscopic appendicectomy: an operation for all trainees but does the learning curve continue into consultanthood? Abstracts: abstract_id: PUBMED:33398773 Laparoscopic appendectomy as an index procedure for surgical trainees: clinical outcomes and learning curve. Surgical training is essential to maintain safety standards in healthcare. The aim of this study is to evaluate learning curves and short-term postoperative outcomes of laparoscopic appendectomy (LA) performed by trainees (TRN) and attendings (ATT). The present study included the medical records of patients with acute appendicitis who underwent a fully laparoscopic appendectomy in our department between January 2013 and December 2018. Cases were divided into trainee (TRN) and attending (ATT) groups based on the experience of the operating surgeon. The primary outcome measures were 30-day morbidity and mortality. Preoperative patients' clinical characteristics, intraoperative findings, operative times, and postoperative hospitalization were compared. Operative times were used to extrapolate learning curves and evaluate the effects of changes in faculty using CUSUM charts. A propensity score matching analysis was performed to reduce differences between cohorts regarding both preoperative characteristics and intraoperative findings. A total of 1173 patients undergoing LA for acute appendicitis were included, of whom 521 (45%) were in the TRN group and 652 (55%) in the ATT group. No significant differences were found between the two groups in terms of complication rates, operative times and length of hospital stay. However, CUSUM chart analysis showed decreased operating times in the TRN group. Operative times improved more quickly for advanced cases. The results of this study indicate that LA can be performed by trainees without detrimental effects on clinical outcomes, procedural safety, and operative times. However, the learning curve is longer than previously acknowledged. abstract_id: PUBMED:24943051 Laparoscopic appendicectomy: an operation for all trainees but does the learning curve continue into consultanthood? Background: In public hospitals, the work-up and surgery for patients with appendicitis is predominantly performed by surgical registrars, whereas in private hospitals, it is performed by consultants. This study aims to demonstrate the difference, if any, in the demographics, work-up, management and complication rate of patients in these two groups. Methods: This was a retrospective review of all patients who underwent laparoscopic appendicectomy at a major public hospital and major private hospital over the same 13 months. Data included demographics, admission details, work-up, length of stay, time to surgery, histology and complications. Fisher's exact test and the unpaired t-test were performed to look at the statistical difference between these two groups. Results: Total laparoscopic appendicectomies were 164 (public) and 105 (private). Median waiting times to operation were 13 and 9.5 h, respectively. Histological findings of appendicitis/neoplasia/normal appendix were 83.5/3.0/13.4% and 81.9/1.9/16.2%. Histological findings of gangrene or perforation were 26.2% and 11.6% (P = 0.0081). The proportion of those who had surgery more than 24 h after admission was 12.2% and 4.8% (P = 0.0517). Rates of pelvic collection were 1.2% and 1.9% (P = 0.6448), wound infection rates were 2.4% and 1.9% (P = 1) and overall complication rates were 7.3% and 8.6% (P = 0.8165).
Mean operative time was 49.79 min for consultants and 67.98 min for registrars (P < 0.0001). Conclusion: Consultants are faster at laparoscopic appendicectomies than registrars. A consultant-led service in a private hospital has earlier operation times and fewer patients ending up with gangrenous or perforated appendicitis but does not alter complication rates. abstract_id: PUBMED:33849039 Variable Learning Curve of Basic Rigid Bronchoscopy in Trainees. Background: Despite increased use of rigid bronchoscopy (RB) for therapeutic indications and recommendations from professional societies to use performance-based competency, an assessment tool has not been utilized to measure the competency of trainees to perform RB in clinical settings. Objectives: The aim of the study was to evaluate a previously developed assessment tool - Rigid Bronchoscopy Tool for Assessment of Skills and Competence (RIGID-TASC) - for determining the RB learning curve of interventional pulmonary (IP) trainees in the clinical setting and explore the variability of trainees' learning curves. Methods: IP fellows at 4 institutions were enrolled. After preclinical simulation training, all RBs performed in patients were scored by faculty using RIGID-TASC until the competency threshold was achieved. The competency threshold was defined as unassisted RB intubation and navigation through the central airways on 3 consecutive patients at the first attempt with a minimum score of 89. A regression-based model was devised to construct and compare the learning curves. Results: Twelve IP fellows performed 178 RBs. Trainees reached the competency threshold between 5 and 24 RBs, with a median of 15 RBs (95% CI, 6-21). There were differences among trainees in learning curve parameters including starting point, slope, and inflection point, as demonstrated by the curve-fitting model. Subtasks that required the highest number of procedures (median = 10) to gain competency included the ability to intubate at the first attempt and an intubation time of <60 s. Conclusions: Trainees acquire RB skills at a variable pace, and RIGID-TASC can be used to assess the learning curve of IP trainees in clinical settings. abstract_id: PUBMED:35986222 Outcome and learning curve for laparoscopic intra-corporeal inguinal hernia repair in children. Background: Laparoscopic inguinal hernia repair is one of the procedures most commonly performed by paediatric surgeons. Current research on the learning curve for laparoscopic hernia repair in children is scarce. This study aims to evaluate the clinical outcome and learning curve of laparoscopic intra-corporeal inguinal hernia repair in children. Methods: A retrospective single-centre analysis of all paediatric patients who underwent laparoscopic intra-corporeal inguinal hernia repair between 2010 and 2019 was performed. The clinical outcomes were analysed. The data on the achievement of the learning curve by surgical trainees were evaluated with the CUSUM technique, focusing on operative time. Results: There were 719 patients with laparoscopic intra-corporeal inguinal hernia repair (comprising 1051 sides) performed during the study period. The overall ipsilateral recurrence rate was 1.8% without other complications detected. CUSUM analysis showed that there were 3 phases of training, for which the trainees underwent an initial learning phase (Phase 1) for the first 7 cases.
After mastering the skills and extrapolating them to male patients with smaller body size (Phase 2), they achieved performance comparable to that of the senior surgeons after 18 procedures (Phase 3). Conclusions: Eighteen procedures seem to be the number required to reach the learning curve plateau in terms of operative time by surgical trainees. The clinical outcomes show that laparoscopic intra-corporeal inguinal hernia repair is a safe and transferrable technique, even in the hands of trainees, with adequate supervision and careful case selection. It also provides skill acquisition for minimally invasive surgery. abstract_id: PUBMED:34374479 Laparoscopic totally extraperitoneal hernia repair performed by surgical trainees: overcoming the learning curve. Background: Surgical trainees struggle to obtain experience in laparoscopic inguinal hernia repair (LIHR) due to a perceived steep learning curve. The purpose of this study was to compare outcomes in totally extraperitoneal (TEP) repair performed by surgical consultants and trainees under supervision as part of a standardised training regimen to assess the safety of residency training in this technique. Methods: A retrospective review of patients managed by TEP repair by either a consultant or a supervised trainee was performed. Demographic, perioperative and postoperative data were collected and compared. All trainees underwent a standardised approach to teaching TEP repair. Results: Trainees performed 133 procedures and consultants performed 121 procedures. Estimated blood loss was minimal in both cohorts. A significant difference was noted in mean operating time between consultants and trainees (33 vs. 50 min). However, it was also observed that the trainees' mean operating time decreased significantly with experience (from 61 to 42 min). No statistically significant difference was demonstrated in postoperative complications, recurrence rate or length of stay. All trainees achieved the ability to complete a laparoscopic TEP repair under unscrubbed consultant supervision during a 1-year placement. Conclusion: With senior supervision and in the presence of a structured training regimen, trainees can safely and effectively perform LIHR, progressing to performing the procedure under unscrubbed consultant supervision. This is valuable information that can serve to inform the structure and direction of surgical training programmes as the ability to offer LIHR is increasingly becoming an expectation of consultant surgeons. abstract_id: PUBMED:25392625 Laparoscopic varicocelectomy: virtual reality training and learning curve. Objectives: To explore the role that virtual reality training might play in the learning curve of laparoscopic varicocelectomy. Methods: A total of 1326 laparoscopic varicocelectomy cases performed by 16 participants from July 2005 to June 2012 were retrospectively analyzed. The participants were divided into 2 groups: group A was trained by laparoscopic trainer boxes; group B was trained by a virtual reality training course preoperatively. The operation time curves were drafted, and the learning, improving, and platform stages were divided and statistically confirmed. The operation time and number of cases in the learning and improving stages of both groups were compared. Testicular artery sparing failure and postoperative hydrocele rates were statistically analyzed for the confirmation of the learning curve.
Results: The learning curve of laparoscopic varicocelectomy was 15 cases, and with 14 cases more, it came into the platform stage. The number of cases for the learning stages of both groups showed no statistical difference (P=.49), but the operation time of group B for the learning stage was less than that of group A (P<.00001). The number of cases of group B for the improving stage was significantly less than that of group A (P=.005), but the operation time of both groups in the improving stage showed no difference (P=.30). The difference in testicular artery sparing failure rates among these 3 stages was significant (P<.0001); the postoperative hydrocele rate showed no statistical difference (P=.60). Conclusions: The virtual reality training shortened the operation time in the learning stage and hastened the trainees' steps in the improving stage, but did not shorten the learning curve as expected. abstract_id: PUBMED:34180567 Learning curve for laparoscopic cholecystectomy has not been defined: A systematic review. Background: Laparoscopic cholecystectomy is one of the most performed surgeries worldwide but its learning curve is still unclear. Methods: A systematic review was conducted according to the 2009 Preferred Reporting Items for Systematic Reviews and Meta-analyses guidelines. Two independent reviewers searched the literature in a systematic manner through online databases, including Medline, Scopus, Embase, and Google Scholar. Human studies investigating the learning curve of laparoscopic cholecystectomy were included. The Newcastle-Ottawa scale for cohort studies and the GRADE scale were used for the quality assessment of the selected articles. Results: Nine cohort studies published between 1991 and 2020 were included. All studies showed a great heterogeneity among the considered variables. Seven articles (77.7%) assessed intraoperative variables only, without considering patient's characteristics, operator's experience, and grade of gallbladder inflammation. Only five articles (55%) provided a precise cut-off value for proficiency in the learning curve, ranging from 13 to 200 laparoscopic cholecystectomies. Conclusions: The lack of clear guidelines when evaluating the learning curve in surgery probably contributed to the divergent data and heterogeneous results among the studies. The development of guidelines for the investigation and reporting of a surgical learning curve would be helpful to obtain more objective and reliable data, especially for common operations such as laparoscopic cholecystectomy.
Increased experience has led to laparoscopic anatomic liver resections including laparoscopic major hepatectomy. A steep learning curve of 45-60 cases is evident for laparoscopic hepatic resection. Conclusion: Laparoscopic major hepatectomy is safe and effective in the treatment of benign and malignant liver tumors when performed in specialized centers with dedicated teams. Comparable to other complex laparoscopic surgeries, laparoscopic major hepatectomy has a learning curve of 45-60 cases. abstract_id: PUBMED:25392640 How to reduce the laparoscopic colorectal learning curve. Background: The laparoscopic approach for colorectal pathologies is becoming more widely used, and surgeons have had to learn how to perform this new technique. The purpose of this work is to study the indicators of the learning curve for laparoscopic colectomy in a community hospital and to find when the group begins to improve. Methodology: From January 1, 2005 to December 31, 2012, 313 consecutive laparoscopic colorectal surgeries were performed (105 rectal and 208 colonic) by at least 60% of the same surgical team (6 members) in each operation. We evaluated the learning curve by moving averages and cumulative sums (CUSUM) for different variables related to the surgery outcomes. Results: Moving average curves for postoperative stay, fasting, and second step analgesia show a stabilizing trend toward improvement as we get more experience. However, intensive care unit stay, number of lymph nodes achieved, and operating time did not show a clear decreasing tendency. CUSUM curves of conversion, specimens with <12 lymph nodes, and complications all show a clear turning point marked on all the charts around procedure 60, accumulating a positive trend toward improvement. The CUSUM curve of the "learning variable" shows this improvement point at procedure 70. Conclusions: The laparoscopic colectomy learning curve accelerates with a collective team involvement in each procedure. The CUSUM and moving average curves are useful for initial and ongoing monitoring of new surgical procedures. The markers of the learning curve evidenced in our study are the conversion rate, postoperative surgical morbidity, and the number of patients with a lymph node count <12. WHAT IS NEW IN THIS PAPER?: The significance of this study is the evaluation of the learning curve, in laparoscopic colorectal surgery, of a surgical team in a community hospital, using moving average and CUSUM curves. This study demonstrated that the number of patients needed to achieve skilful practice decreased when there was collective team involvement in each procedure.
Results: A total of 9 papers met the inclusion criteria out of 94 non-duplicate citations. The aim of this systematic review is to provide a multidimensional evaluation by bringing into focus evaluation parameters of surgical performance (operative time, intraoperative complications, conversion rate and blood loss), factors related to the patient's pathology (side, size, adrenal pathology) and surgeon-specific properties. Conclusions: Operative time, intraoperative bleeding, intraoperative complications and conversion rate are the main parameters that have been considered for the achievement of the learning curve, and for each there are discrepancies, mainly due to the relative rarity of adrenal tumors and the resulting difficulties in obtaining a proper analysis that could establish an effective learning curve. Further evaluations in larger series are therefore needed. Answer: The learning curve for laparoscopic appendicectomy does indeed continue into consultanthood. A study comparing the performance of consultants and registrars in public and private hospitals found that consultants were faster at performing laparoscopic appendicectomies than registrars, with mean operative times of 49.79 minutes for consultants and 67.98 minutes for registrars. This suggests that even experienced surgeons can continue to refine their skills and become more efficient over time (PUBMED:24943051). However, it is important to note that the learning curve for surgical procedures can be variable and is influenced by multiple factors, including the complexity of cases, the surgeon's prior experience, and the presence of structured training programs. For instance, in the case of laparoscopic totally extraperitoneal hernia repair, trainees were able to perform the procedure safely and effectively under supervision, with their mean operating time decreasing significantly with experience (PUBMED:34374479). Moreover, the learning curve for laparoscopic procedures can be longer than previously acknowledged, as seen in a study where laparoscopic appendectomy performed by trainees showed no significant differences in complication rates, operative times, and length of hospital stay compared to those performed by attending surgeons, but the learning curve was longer than expected (PUBMED:33398773). In conclusion, while laparoscopic appendicectomy is an operation suitable for trainees, the learning curve does not stop at the end of training but continues into consultanthood, with consultants still able to improve their operative times and efficiency with experience.
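Several of the abstracts above (PUBMED:33398773, PUBMED:35986222, PUBMED:25392640) evaluate learning curves with CUSUM charts of operative time. As a rough illustration of that idea only — the benchmark time and case data below are hypothetical and not taken from any of the cited studies — a CUSUM trace over consecutive cases can be computed like this:

import numpy as np

def cusum_learning_curve(operative_times_min, target_min):
    # Cumulative sum of deviations from a target operative time.
    # A change in slope (the inflection point) marks a learning-curve phase
    # transition; a sustained downward slope means cases are being completed
    # faster than the benchmark.
    deviations = np.asarray(operative_times_min, dtype=float) - target_min
    return np.cumsum(deviations)

# Hypothetical trainee data: 20 consecutive cases against a 50-minute benchmark.
times = [68, 66, 70, 61, 59, 62, 55, 54, 57, 50, 49, 51, 47, 46, 48, 45, 44, 46, 43, 42]
print(cusum_learning_curve(times, target_min=50))

The case number at which the plotted trace turns over approximates the point where proficiency is reached, which is how the cited studies arrive at figures such as "around procedure 60" or "after 18 procedures".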
Instruction: Left ventricular solid body rotation in non-compaction cardiomyopathy: a potential new objective and quantitative functional diagnostic criterion? Abstracts: abstract_id: PUBMED:18815069 Left ventricular solid body rotation in non-compaction cardiomyopathy: a potential new objective and quantitative functional diagnostic criterion? Background: Left ventricular (LV) twist originates from the interaction between myocardial fibre helices that are formed during the formation of compact myocardium in the final stages of the development of myocardial architecture. Since non-compaction cardiomyopathy (NCCM) is probably caused by intrauterine arrest of this final stage, it may be anticipated that LV twist characteristics are altered in NCCM patients, beyond that seen in patients with impaired LV function and normal compaction. Aims: The purpose of this study was to assess LV twist characteristics in NCCM patients compared to patients with non-ischaemic dilated cardiomyopathy (DCM) and normal subjects. Methods And Results: The study population consisted of 10 patients with NCCM, 10 patients with DCM, and 10 healthy controls. LV twist was determined by speckle tracking echocardiography. In all controls and DCM patients, rotation was clockwise at the basal level and counterclockwise at the apical level. In contrast, in all NCCM patients the LV base and apex rotated in the same direction. Conclusions: These findings suggest that 'LV solid body rotation', with near absent LV twist, may be a new sensitive and specific, objective and quantitative, functional diagnostic criterion for NCCM. abstract_id: PUBMED:21345651 Diagnostic value of rigid body rotation in noncompaction cardiomyopathy. Background: The diagnosis of noncompaction cardiomyopathy (NCCM) remains subject to controversy. Because NCCM is probably caused by an intrauterine arrest of the myocardial fiber compaction during embryogenesis, it may be anticipated that the myocardial fiber helices, normally causing left ventricular (LV) twist, will also not develop properly. The resultant LV rigid body rotation (RBR) may strengthen the diagnosis of NCCM. The purpose of the current study was to explore the diagnostic value of RBR in a large group of patients with prominent trabeculations. Methods: The study comprised 15 patients with dilated cardiomyopathy, 52 healthy subjects, and 52 patients with prominent trabeculations, of whom a clinical expert in NCCM defined 34 as having NCCM. LV rotation patterns were determined by speckle-tracking echocardiography and defined as follows: pattern 1A, completely normal rotation (initial counterclockwise basal and clockwise apical rotation, followed by end-systolic clockwise basal and counterclockwise apical rotation); pattern 1B, partly normal rotation (normal end-systolic rotation but absence of initial rotation in the other direction); and pattern 2, RBR (rotation at the basal and apical level predominantly in the same direction). Results: The majority of normal subjects had LV rotation pattern 1A (98%), whereas the 18 subjects with hypertrabeculation not fulfilling diagnostic criteria for NCCM predominantly had pattern 1B (71%), and the 34 patients with NCCM predominantly had pattern 2 (88%). None of the patients with dilated cardiomyopathy showed RBR. Sensitivity and specificity of RBR for differentiating NCCM from "hypertrabeculation" were 88% and 78%, respectively. 
Conclusions: RBR is an objective, quantitative, and reproducible functional criterion with good predictive value for the diagnosis of NCCM as determined by expert opinion. abstract_id: PUBMED:30651913 A New Diagnosis of Left Ventricular Non-Compaction in a Patient Presenting with Acute Heart Failure. Left ventricular non-compaction is an overall rare cardiomyopathy; however, it is increasingly being recognized with advances in imaging technology. We present the case of a 47-year-old man with a new diagnosis of heart failure and left ventricular non-compaction. We review the literature regarding diagnostic imaging criteria and management of this condition. abstract_id: PUBMED:27878729 Features of the Left Ventricular Functional Geometry in Patients with Myocardial Diseases with Varying Degrees of Systolic Dysfunction. We revealed some features of the left ventricular functional geometry in patients with myocardial diseases with different degrees of left ventricular systolic dysfunction. A negative correlation was found between the spatio-temporal heterogeneity of the kinetics of the left ventricular wall during systole and ejection fraction in the normal heart and in systolic dysfunction. The differences in the quantitative characteristics of the functional geometry between patients and normal subjects and between different groups of patients depended on the severity of left-ventricular systolic dysfunction. In particular, the spatial heterogeneity index, which characterizes heterogeneity of systolic movement of the wall segments, and the end-systolic Fourier shape-power index, which characterizes complexity of the left ventricle shape during systole, differed significantly in the examined groups of patients and had the greatest diagnostic power. abstract_id: PUBMED:38201424 Left Ventricular Non-Compaction in Children: Aetiology and Diagnostic Criteria. Left ventricular non-compaction (LVNC) is a heterogeneous myocardial disorder characterized by prominent trabeculae protruding into the left ventricular lumen and deep intertrabecular recesses. LVNC can manifest in isolation or alongside other heart muscle diseases. Its occurrence among children is rising due to advancements in imaging techniques. The origins of LVNC are diverse, involving both genetic and acquired forms. The clinical manifestation varies greatly, with some cases presenting no symptoms, while others typically manifest with heart failure, systemic embolism, and arrhythmias. Diagnosis mainly relies on assessing heart structure using imaging tools like echocardiography and cardiac magnetic resonance. However, the absence of a universally agreed-upon standard and limitations in diagnostic criteria have led to ongoing debates in the scientific community regarding the most reliable methods. Further research is crucial to enhance the diagnosis of LVNC, particularly in early life stages. abstract_id: PUBMED:8269199 Diagnostic value of left ventricular dyssynergy patterns in ischemic and non-ischemic cardiomyopathy. Background: The distinction between ischemic and non-ischemic cardiomyopathy has important clinical implications. The objective of the present study was to investigate whether left ventricular dyssynergy patterns, detected by quantitative analysis of ultrasound images, differed in these two pathological processes.
Methods: Fifty-six consecutive patients with congestive heart failure (New York Heart Association functional class II-IV) secondary to depressed left ventricular systolic function (ejection fraction ≤35% during diagnostic cardiac catheterization) were studied. Twenty patients were eliminated from further analysis because they met one or more exclusion criteria. The remaining 36 were divided into two groups based on the presence (ischemic cardiomyopathy) or absence (non-ischemic cardiomyopathy) of a ≥50% narrowing of the luminal diameter in one or more coronary arteries. In all patients, a standard two-dimensional echocardiographic study was obtained. Apical four- and two-chamber views with optimal endocardial and epicardial resolution were selected for analysis, and the left ventricular contour was divided into six segments of interest. Optimal endocardial and epicardial resolution were defined according to an original internal quality score system. For each of the six segments of interest, regional ejection fraction and regional segmental thickening were estimated. Data analysis was then performed on the average values of regional ejection fraction and regional segmental thickening obtained across the entire left ventricular contour. In each patient, regional ejection fraction range and regional segmental thickening range were calculated by subtracting the minimum from the maximum value of regional ejection fraction and regional segmental thickening obtained across a left ventricular contour. Results: Regional ejection fraction and regional segmental thickening did not differ significantly between the two groups. However, regional ejection fraction range and regional segmental thickening range were significantly greater in patients with ischemic cardiomyopathy than in patients with non-ischemic cardiomyopathy [28.32 ± 11.17 versus 14.74 ± 7.73% (P < 0.001) and 47.80 ± 16.00 versus 24.64 ± 9.39% (P < 0.001), respectively]. Overlap of findings was observed in 20% of the values for regional ejection fraction range but in only 14% of those for regional segmental thickening range. Conclusions: Patients with ischemic cardiomyopathy demonstrate a non-uniform dyssynergy that can be differentiated from a more uniform hypokinesis observed in those with non-ischemic cardiomyopathy. Computerized ultrasonic image analysis can distinguish characteristic dyssynergic patterns in patients with cardiomyopathy. Measurements of segmental wall thickening provide a more accurate assessment of regional function.
The estimated prevalences were 17.5% based on trabeculated LV mass (Jacquier criterion), 7.4% based on trabeculated LV volume (Choi criterion), and 1.3% based on trabeculated LV mass and distribution (Grothoff criterion). Absent longitudinal clinical outcomes data or accepted diagnostic standards, our analysis of the screening data from the Short-Axis Study Set did not definitively differentiate normal from pathologic cases. However, it does suggest that many of the cases might be normal anatomic variants. It also suggests that cases marked by pathologically excessive LV trabeculation, even if asymptomatic, might involve unsustainable physiologic disadvantages that increase the risk of LV dysfunction, pathologic remodeling, arrhythmias, or mural thrombi. These disadvantages may escape detection, particularly in children developing from prepubescence through adolescence. Longitudinal follow-up of suspected LVNC cases to ascertain their natural history and clinical outcome is warranted. abstract_id: PUBMED:27575783 Usefulness of speckle myocardial imaging modalities for differential diagnosis of left ventricular non-compaction of the myocardium. Background/objectives: Current diagnostic criteria for left ventricular non-compaction (LVNC) may result in over-diagnosis of the disease. We evaluate the role of speckle imaging in differential diagnosis of LVNC. Methods And Results: We included all patients who, between January 2012 and May 2015, fulfilled currently accepted criteria for LVNC (28 patients). A control group of 28 healthy individuals and a third group of 13 patients with dilated cardiomyopathy (DCM) were created. Speckle-tracking echocardiography was performed in all groups. Thirteen patients with LVNC had an ejection fraction (EF) <50% (33.5%, SD 10). When compared to controls, patients with LVNC and EF <50% had a larger LV, larger left atrial diameter (LA), reduced e', and reduced global longitudinal strain (GLS). All but one patient with LVNC and EF <50% showed an abnormal LV rotation. This abnormal pattern was observed in 4 LVNC patients (27%) with EF≥50% and in none of the controls. In patients with LVNC, EF ≥50%, and abnormal rotation, GLS was lower than in controls (-17 (SD 3) vs -21 (SD 3)). Rigid body rotation (RBR) was also observed in 2 DCM patients, with significant differences in EF, GLS, LV diameters relative to the rest of the DCM group. Conclusions: In patients who fulfil the morphologic criteria for LVNC, speckle myocardial imaging techniques could be useful in differentiating between healthy individuals (functionally normal LV) and patients with LVNC (with functional abnormalities in the myocardium in spite of a preserved EF).
Those meeting the Jenni criteria for LVNC (end-systolic non-compacted/compacted myocardium ratio >2 in any short axis segment) were considered LVNC+ and the rest LVNC-. Peak systolic LV longitudinal strain (Sl), circumferential strain (Sc), rotation (Rot), corresponding strain rates (SRl/c) and segmental values were calculated and compared using a non-inferiority approach. Results: A total of 417 participants were included, mean age 14.5 ± 1.7 years, of which 6.5% were LVNC+ (n = 27). None of the athletes showed any additional LVNC clinical criteria. All average Sl, SRl, Sc, SRc and Rot values were no worse in the LVNC+ group compared to LVNC- (p values range 0.0003-0.06), apart from apical SRc (p = 0.2). All 54 segmental measurements (Sl/Sc, SRl/SRc and Rot) had numerically comparable means in both LVNC+ and LVNC-, of which 69% were also statistically non-inferior. Conclusions: Among healthy adolescent athletes, 6.5% met the echocardiographic criteria for LVNC, but showed normal LV STE parameters, in contrast to available data on paediatric LVNC describing abnormal myocardial function. STE could better characterise the myocardial mechanics of athletes with LV hypertrabeculation, thus allowing the transition from structural to functional LVNC diagnosis, especially in suspected physiological remodelling. abstract_id: PUBMED:28478416 Clinical examples of left ventricular non-compaction in adults. Introduction: Isolated left ventricular non-compaction (LVNC) is a heart disease with rather distinct morphologic and clinical manifestations. Information available in the literature considering multiple left ventricular abnormal chords (LVAC) as one of the criteria for LVNC motivated us to review the results obtained in the study of young patients with this pathology. The aim of the research was to demonstrate different clinical variants of left ventricular non-compaction course in adult patients and to clarify some pathogenetic aspects of this pathology. Materials and methods: Comprehensive examination of 28 patients with multiple LVAC, 12 patients with LVNC and dilated idiopathic cardiomyopathy aged 16-36 was performed. Results: According to the results of our research, 16 of 28 patients with multiple LVAC of the left ventricle had an ejection fraction of more than 55%; in 6 patients this index range was 50-54%, and in 6 it was 45-49%. Multiple LVAC were found to be associated with significantly greater clinical, phenotypic, structural and hemodynamic changes when compared both to control and solitary LVAC of any location (p<0.05). There were 12 young patients with multiple LVAC and ejection fraction 22-41%. The detailed analysis of echocardiographic data in dynamics revealed other criteria of LVNC in all the patients. The data obtained suggest the relationship between pathogenetic mechanisms of heart pathology development in patients with connective tissue dysplasia syndrome and in patients with LVNC. Echocardiographic evidence of multiple LVAC indicates the need for a targeted search for signs of left ventricular non-compaction and timely detection of left ventricular dilation. Answer: The concept of left ventricular (LV) solid body rotation as a diagnostic criterion for non-compaction cardiomyopathy (NCCM) has been explored in several studies. In NCCM, it is believed that the myocardial fiber helices, which normally cause LV twist, do not develop properly due to an intrauterine arrest of myocardial fiber compaction during embryogenesis.
This altered development may result in LV rigid body rotation (RBR), where the base and apex of the LV rotate predominantly in the same direction, rather than the normal opposite directions (PUBMED:21345651). One study specifically aimed to assess LV twist characteristics in NCCM patients compared to patients with non-ischaemic dilated cardiomyopathy (DCM) and healthy controls. The study found that in NCCM patients, the LV base and apex rotated in the same direction, which is in contrast to the normal clockwise rotation at the basal level and counterclockwise rotation at the apical level observed in controls and DCM patients. This finding suggests that 'LV solid body rotation', with near absent LV twist, may be a new sensitive and specific, objective and quantitative functional diagnostic criterion for NCCM (PUBMED:18815069). Another study explored the diagnostic value of RBR in a large group of patients with prominent trabeculations. The study found that the majority of normal subjects had a completely normal rotation pattern, while patients with NCCM predominantly had RBR. The sensitivity and specificity of RBR for differentiating NCCM from "hypertrabeculation" were 88% and 78%, respectively, indicating that RBR is an objective, quantitative, and reproducible functional criterion with good predictive value for the diagnosis of NCCM (PUBMED:21345651). In summary, LV solid body rotation appears to be a potential new objective and quantitative functional diagnostic criterion for NCCM, with studies showing that it can differentiate NCCM patients from healthy individuals and those with other forms of cardiomyopathy.
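As a purely illustrative sketch of how a rigid-body-rotation criterion could be operationalized and then checked against an expert diagnosis (the function names and thresholds here are hypothetical, not drawn from the cited studies), RBR can be flagged when the peak basal and apical rotations share the same sign, and the flag scored against expert NCCM labels:

def is_rbr(peak_basal_rotation_deg, peak_apical_rotation_deg):
    # Normal twist: basal and apical rotation have opposite signs; RBR: same sign.
    return peak_basal_rotation_deg * peak_apical_rotation_deg > 0

def sensitivity_specificity(rbr_flags, expert_nccm_labels):
    # Compare the RBR flag against the expert diagnosis (True = NCCM);
    # assumes at least one positive and one negative label in the cohort.
    tp = sum(r and e for r, e in zip(rbr_flags, expert_nccm_labels))
    fn = sum((not r) and e for r, e in zip(rbr_flags, expert_nccm_labels))
    tn = sum((not r) and (not e) for r, e in zip(rbr_flags, expert_nccm_labels))
    fp = sum(r and (not e) for r, e in zip(rbr_flags, expert_nccm_labels))
    return tp / (tp + fn), tn / (tn + fp)

Computed this way over a labelled cohort, the two returned values correspond to the 88% sensitivity and 78% specificity reported for RBR in PUBMED:21345651.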
Instruction: Does bereavement-related first episode depression differ from other kinds of first depressions? Abstracts: abstract_id: PUBMED:19693418 Does bereavement-related first episode depression differ from other kinds of first depressions? Background: It has never been investigated whether first depression differs in patients who have experienced bereavement compared to patients who have not. Method: Patients discharged with a diagnosis of a single depressive episode from a psychiatric in- or outpatient hospital setting were consecutively sampled from the Danish Psychiatric Central Research Register. Patients participated in an extensive interview including the Schedules for Clinical Assessment in Neuropsychiatry (SCAN) and the Interview of Recent Life Events (IRLE). Results: Among 301 patients with a first depression, 26 patients (4.7%) had experienced death of a first degree relative (parent, sibling, child) or a near friend, 163 patients (54.2%) had experienced other moderate to severe stressful life events and 112 patients had not experienced stressful life events in a 6 months period prior to the onset of depression. Patients who had experienced bereavement did not differ from patients with other stressful life events or from patients without stressful life events in socio-demographic variables or in the phenomenology of the depression, psychiatric comorbidity, family history or response to antidepressant treatment. Conclusion: Bereavement-related first episode depression does not differ from other kinds of first depression. abstract_id: PUBMED:34629870 Comparison of Residual Depressive Symptoms, Functioning, and Quality of Life Between Patients with Recurrent Depression and First Episode Depression After Acute Treatment in China. Objective: This prospective study aimed to investigate the prognosis and rehabilitation of patients with recurrent depression and first episode depression after acute treatment in China. Methods: A total of 434 patients with first-episode or recurrent depression who received acute treatment respectively from sixteen hospitals in thirteen cities in China were enrolled in this prospective study. All patients were followed up for 6 months after acute treatment. The following data were collected at baseline period and 1, 3, and 6 months after acute treatment: general information of patients, medication information and patient's condition changes, brief 16-item quick inventory of depressive symptomatology self-report (QIDS-SR16), patient health questionnaire-15 (PHQ-15), quality of life enjoyment and satisfaction questionnaire-short form (Q-LES-Q-SF), Sheehan disability scale (SDS) and digit symbol substitution test (DSST). Results: During the baseline period, there was a significant difference in QIDS-SR16 between recurrent patients and first-episode patients (p &lt; 0.05), and there was no significant difference in other indicators (p &gt; 0.05). At one month after acute treatment, there were significant differences in the total QIDS-SR16 score, the total Q-LES-SF score, the social life score, and the family life/home responsibilities score of SDS in patients with recurrent depression and first-episode depression (p &lt; 0.05). At three months after acute treatment, there were significant differences in the total Q-LES-SF score and social life score of SDS in patients with recurrent depression and first-episode depression (p &lt; 0.05). 
At six months after acute treatment, there were significant differences in the total QIDS-SR16 score, the social life score, and the total Q-LES-SF score in patients with recurrent depression and first-episode depression (p < 0.05). Compared with the data during the baseline period, the QIDS-SR16 scores and the SDS scores of all patients decreased, and the Q-LES-SF scores of all patients gradually increased as time went on during the consolidation period. Conclusion: The recurrent patients have more severe social function impairment and depressive symptoms, and lower quality of life, than the first-episode depressed patients. Given the negative impact of depressive symptoms on recurrent patients, more attention should be paid to the treatment of recurrent patients and to recurrence prevention in first-episode patients. abstract_id: PUBMED:36420428 Depression Mediates the Relationships between Hallucinations, Delusions, and Social Isolation in First-Episode Psychosis. Social isolation is common among individuals with schizophrenia spectrum and other psychotic disorders. Research indicates that social isolation relates to poorer mental health outcomes, depression, and negative symptoms, with less known about its relationship with positive symptoms. This study examined depression as a mediator in the relationships between positive symptoms (i.e., hallucinations and delusions) and social isolation among an early treatment phase sample in the United States. Data were obtained from the Recovery After an Initial Schizophrenia Episode project of the National Institute of Mental Health's Early Treatment Program. Participants (N = 404) included adults between ages 15 and 40 in a first episode of psychosis. Data were analyzed using structural equation modeling in Mplus (Version 8). The study showed that delusions (b = .095, SE = 0.04, p < .05) and hallucinations (b = .076, SE = 0.03, p < .01) were directly related to depression, and that both delusions (b = .129, SE = 0.06, p < .05) and depression (b = .254, SE = 0.09, p < .05) were directly related to social isolation. Findings of this study determined that depression functioned as a mediator in the relationships between positive symptoms and social isolation. Targeting psychosis symptomatology and depression in treatment, improving social skills and social support networks, and considering the role of stigma in social isolation are of great importance in the prevention of poorer mental health outcomes. abstract_id: PUBMED:29379888 Factors associated with anxiety and depression in hospitalized patients with first episode of acute myocardial infarction. Introduction: Evaluation of anxiety and depression in cardiac patients is an area of nursing practice that is frequently neglected. The aim of the study was to explore anxiety and depression in hospitalized patients with their first episode of acute myocardial infarction. Material And Methods: The study sample included 148 hospitalized patients who had a first episode of acute myocardial infarction. Data collection was performed by the interview method using a specially designed questionnaire which included socio-demographic, clinical and other patients' characteristics as well as the Hospital Anxiety and Depression Scale (HADS) to assess patients' levels of anxiety and depression. Results: Analysis of data showed that 52% and 38% of participants had high levels of anxiety and depression, respectively.
Furthermore, anxiety levels revealed a statistically significant association with anxiolytics (p = 0.005) and antidepressant medication (p = 0.026) in hospital, the belief that they will face difficulties in relations with the social and family environment (p = 0.009 and p = 0.002, respectively) and whether they considered themselves anxious (p = 0.003). Depression was statistically significantly associated with education level (p = 0.001), profession (p = 0.007), antidepressant medication in hospital (p ≤ 0.001), patients' relations with nursing staff (p = 0.019) and patients' belief that they will face difficulties in relations with the social and family environment (p ≤ 0.001 and p ≤ 0.001, respectively). Conclusions: The results showed that socio-demographic and clinical characteristics should be taken into serious consideration when exploring anxiety and depression in patients with a first episode of acute myocardial infarction in order to implement appropriate interventions. abstract_id: PUBMED:33340295 Clinical and sociodemographic characteristics of patients with the first depressive episode and recurrent depression. Objective: To compare socio-demographic and clinical characteristics of patients with the first depressive episode and recurrent depression. Material And Methods: Three hundred and twenty-one patients with unipolar depression, including 96 patients with a first depressive episode and 225 patients with recurrent depression, were examined using clinical and psychometric methods. Results And Conclusion: There were differences in clinical characteristics between groups, but such factors as gender, marital status, level of education, family history of mental disorders and personality were similar. With each new episode of recurrent depression, the next episode tends to be more severe, with more intense pessimistic and suicidal thoughts but less anxiety and fewer complaints of depressed mood; these differences require further research, especially considering the effect of therapy. abstract_id: PUBMED:34758106 Functional and cognitive impairment in the first episode of depression: A systematic review. Objectives: To describe the cognitive and functional impairment in individuals with the first episode of major depressive disorder (MDD) as compared to controls and individuals with recurrent MDD. Also to describe the functional and cognitive trajectory after the first episode of MDD. Methods: A total of 52 studies were included in our systematic review. 32 studies compared the cognitive performance between first episode of depression (FED) and controls, 11 studies compared the cognitive performance between recurrent depression (RD) and FED, 10 compared global functioning between RD and FED, four studies assessed cognition in FED over time, and two studies assessed global functioning in FED over time. Results: The majority of studies (n = 22/32, 68.8%) found that FED subjects performed significantly worse than controls on cognitive tests, with processing speed (n = 12) and executive/working memory (n = 11) being the most commonly impaired domains. Seven out of 11 studies (63.6%) found that RD performed significantly worse than FED, with verbal learning and memory being the most commonly impaired domain (n = 4). Most studies (n = 7/10, 70%) did not find a significant difference in global functioning between RD and FED.
In three of four longitudinal studies assessing cognition, subgroup analyses were used instead of directly assessing cognition in FED over time while the remaining study found significant cognitive declines over time in FED when compared to controls. The two longitudinal studies assessing functional trajectory found that functioning significantly improved over time, possibly due to the improvement of depressive symptoms. Conclusion: There is strong evidence that cognitive impairment is present during the first episode of depression, and individuals with multiple episodes display greater cognitive impairment than individuals with a single episode. Future studies aimed at identifying predictors of cognitive and functional impairment after the first episode of depression are needed to describe the functional and cognitive trajectory of individuals with the first episode of MDD over time. abstract_id: PUBMED:34908933 Attentional Processing of Facial Expressions and Gaze Direction in Depression and First-Episode Psychosis as Reflected by LPP Modulation. Objective: Facial expressions communicate emotional states and regulate social bonds. An approach or avoidance-based valence might interact with direct or averted gaze to elicit different attentional allocation. These processes might be aberrant in major depression or first-episode psychosis and this requires empirical investigation. Method: This study examined higher order, controlled attentional processing of emotional facial expressions (happy, neutral, angry and fearful), with direct or averted gaze, using electroencephalogram (EEG) measures of the face-elicited Late Positive Potential (LPP), in young people diagnosed with major depression or first-episode psychosis, compared with a healthy control group. Results: In the control group, there was no evidence of increased attentional allocation to emotional facial expressions, or to facial expressions with a matching emotional expression and gaze direction. There was no evidence, in the depression or first-episode psychosis groups, for a threat-based, attentional hypersensitivity to fearful or angry facial expressions, nor for this effect to be potentiated in response to angry direct or fearful averted gaze faces. However, the absence of such effects could not be concluded due to sample size and the absence of stimulus arousal and valence ratings. Importantly, there was significantly increased attentional allocation in the first-episode psychosis group to facial expressions regardless of emotional expression or gaze direction, compared to both the depression and control group. Conclusions: There might be an attentional hypersensitivity to facial expressions regardless of emotional expression or gaze direction in first-episode psychosis. abstract_id: PUBMED:32056388 Changes in inflammation are related to depression and amount of aerobic exercise in first episode schizophrenia. Introduction: Elevated levels of pro-inflammatory cytokines have been reported in meta-analyses of multi-episode schizophrenia patients when compared to controls. However, little is known about whether these same relationships are present in the early course of schizophrenia. Objective: To assess first episode schizophrenia patients for depression and to assay blood samples collected at baseline and at 6 months for interleukin-6 (IL-6). Materials And Methods: Trained raters used the Brief Psychiatric Rating Scale to assess depressive symptoms and a standard lab assay kit to assess for IL-6 levels in plasma. 
Conclusions: Decreases in pro-inflammatory IL-6 levels were significantly related to decreases in depressive symptoms. Within a subset of patients in a 6-month aerobic exercise protocol, the number of exercise sessions completed was significantly correlated with the amount of decrease in IL-6. The reductions observed in IL-6 with aerobic exercise suggest exercise is a promising intervention to reduce brain inflammation effects in schizophrenia patients. abstract_id: PUBMED:7953057 The nosological status of bereavement-related depressions. Background: The validity of excluding bereavement-related depressive episodes which satisfy all the criteria of major depression is examined in this community study. Method: A total of 658 subjects were interviewed in their homes using the Diagnostic Interview Schedule. The length of the episodes of depression, the dysfunction they caused, and the frequency of seeking and receiving treatment were assessed. Results: The risk of recurrence did not differ whether the depressive episodes that satisfy the DSM-III-R criteria of major depression were or were not related to bereavement. Conclusions: Excluding from the repertoire of depressions all those related to external stressors might need more research, but singling out those related to bereavement does not seem to be supported by our community-based data. abstract_id: PUBMED:31849475 Comparative Study of Cognitive Function Between Treatment-Resistant Depressive Patients and First-Episode Depressive Patients. Objective: Despite reports of cognitive dysfunction during the acute phase of depression, there is a lack of studies in patients with treatment-resistant depression (TRD). The aim of this study was to investigate the cognitive function profile of TRD and compare cognitive dysfunction between subjects with TRD and first-episode depression. Patients And Methods: The study included 31 patients with TRD and 53 with first-episode depression. Cognitive function was assessed by a series of neuropsychological tools such as the verbal fluency test, Modified Wisconsin Card Sorting Test (M-WCST), Tower of Hanoi test, Chinese-revision of the Wechsler Adult Intelligence Scale (WAIS-RC), and Trail Making Test A and B. Results: There were no significant demographic differences between the TRD, first-episode depression, and normal control groups (gender, age, years of education). The full-scale, verbal, and performance intelligence quotients measured with the WAIS-RC were also not significantly different (p > 0.05). The normal group scores were all significantly better than TRD and first-episode depression, and the TRD group performed significantly worse than subjects with first-episode depression on Trail Making Test B, two WCST subscales, and the profile score of the Tower of Hanoi test (all p < 0.05). Conclusion: Patients with depression exhibited global impairments in cognitive function, and these were more common in TRD. Poor executive function may play an important role in TRD. Answer: Based on the information provided in the abstracts, bereavement-related first episode depression does not differ from other kinds of first depression.
A study that specifically investigated this question found that among 301 patients with a first depression, those who had experienced bereavement did not differ from patients with other stressful life events or from patients without stressful life events in socio-demographic variables, the phenomenology of the depression, psychiatric comorbidity, family history, or response to antidepressant treatment (PUBMED:19693418). This suggests that the experience of bereavement does not lead to a distinct subtype of first episode depression when compared to depressions triggered by other life events or with no identified stressful life events.
Instruction: Is neuromuscular electrical stimulation effective for improving pain, function and activities of daily living of knee osteoarthritis patients? Abstracts: abstract_id: PUBMED:23657509 Is neuromuscular electrical stimulation effective for improving pain, function and activities of daily living of knee osteoarthritis patients? A randomized clinical trial. Context And Objective: Neuromuscular electrical stimulation (NMES) has been used in rehabilitation protocols for patients suffering from muscle weakness resulting from knee osteoarthritis. The purpose of the present study was to assess the effectiveness of an eight-week treatment program of NMES combined with exercises, for improving pain and function among patients with knee osteoarthritis. Design And Setting: Randomized clinical trial at Interlagos Specialty Ambulatory Clinic, Sao Paulo, Brazil. Methods: One hundred patients were randomized into two groups: NMES group and control group. The following evaluation measurements were used: numerical pain scale from 0 to 10, timed up and go (TUG) test, Lequesne index and activities of daily living (ADL) scale. Results: Eighty-two patients completed the study. From intention-to-treat (ITT) analysis comparing the groups, the NMES group showed a statistically significant improvement in relation to the control group, regarding pain intensity (difference between means: 1.67 [0.31 to 3.02]; P = 0.01), Lequesne index (difference between means: 1.98 [0.15 to 3.79]; P = 0.03) and ADL scale (difference between means: -11.23 [-19.88 to -2.57]; P = 0.01). Conclusion: NMES, within a rehabilitation protocol for patients with knee osteoarthritis, is effective for improving pain, function and activities of daily living, in comparison with a group that received an orientation program. CLINICAL TRIAL REGISTRATION ACTRN012607000357459. abstract_id: PUBMED:29325839 The effect of low-load exercise on joint pain, function, and activities of daily living in patients with knee osteoarthritis. Background: Knee osteoarthritis has a lifetime risk of nearly one in two, with obese individuals being most susceptible. While exercise is universally recognized as a critical component for management, unsafe or ineffective exercise frequently leads to exacerbation of joint symptoms. Aim: Evaluate the effect of a 12-week lower body positive pressure (LBPP) supported low-load treadmill walking program on knee pain, joint function, and performance of daily activities in patients with knee osteoarthritis (OA). Design: Prospective, observational, repeated measures investigation. Setting: Community-based, multidisciplinary musculoskeletal medicine clinic. Patients: Thirty-one patients, aged 50-75, with a BMI ≥25 kg/m2 and radiographically confirmed mild to moderate knee OA. Intervention: Twelve-week LBPP treadmill walking exercise regimen. Outcome Measures: The Knee Injury and Osteoarthritis Outcome Score (KOOS) and the Canadian Occupational Performance Measure (COPM) were used to quantify joint symptoms and patient function; isokinetic thigh muscle strength was evaluated; and a 10-point VAS was used to quantify acute knee pain while walking. Baseline and follow-up data were compared in order to examine the effect of the 12-week exercise intervention. Results: There was a significant difference between baseline and follow-up data: KOOS and COPM scores both improved; thigh muscle strength increased; and acute knee pain during full weight-bearing walking diminished significantly.
Conclusions: Participation in a 12-week LBPP-supported treadmill walking exercise regimen significantly enhanced patient function and quality of life, as well as the ability to perform activities of daily living that patients self-identified as being important, yet difficult to perform. abstract_id: PUBMED:34219332 Gait speed and pain status as discriminatory factors for instrumental activities of daily living disability in older adults with knee osteoarthritis. Aim: Factors related to instrumental activities of daily living disability in older adults with knee osteoarthritis are unclear. This study aimed to examine the discriminatory accuracy for the presence of instrumental activities of daily living disability in older adults with knee osteoarthritis by combining two factors of gait ability and pain status. Methods: A cross-sectional study was conducted on 114 patients with knee osteoarthritis aged ≥ 65 years. Participants were divided into instrumental activities of daily living disabled or non-disabled groups. A logistic regression model was created with usual gait speed and knee injury and osteoarthritis outcome score-pain subscale as independent variables for discriminating the presence of instrumental activities of daily living disability. The area under the receiver operating characteristic curve was inspected to determine discriminatory accuracy of the logistic regression model, usual gait speed, and knee injury and osteoarthritis outcome score-pain subscale. Results: Of the 114 patients, 26 (22.8%) had instrumental activities of daily living disability. The area under the curves was 0.91 (95% confidence interval: 0.85-0.96) for the logistic regression model, 0.78 (95% confidence interval: 0.68-0.89) for usual gait speed, and 0.73 (95% confidence interval: 0.61-0.84) for knee injury and osteoarthritis outcome score-pain subscale. Conclusions: This study showed that gait speed and pain status were independent discriminatory factors and combining these factors to discriminate more accurately the presence or absence of instrumental activities of daily living disability in older adults with knee osteoarthritis was important. Geriatr Gerontol Int 2021; 21: 683-688. abstract_id: PUBMED:37145013 Effectiveness of home-based conventional exercise and cryotherapy on daily living activities in patients with knee osteoarthritis: A randomized controlled clinical trial. Background: Knee osteoarthritis (KOA) is a prevalent joint condition associated with aging that causes pain, disability, loss of function, and a decline in quality of life. This study aimed to evaluate the effectiveness of home-based conventional exercise and cryotherapy on daily living activities in patients with KOA. Methods: In this randomized controlled clinical trial, the patients who were diagnosed with KOA were assigned to 3 groups: an experimental group (n = 18), the control group 1 (n = 16), and the control group 2 (n = 15). Control and experimental groups engaged in a 2-month home-based exercise (HBE) program. The experimental group received cryotherapy along with HBE. In contrast, the patients in the second control group received regular therapeutic and physiotherapeutic services at the center. The patients were recruited from the Specialized Center for Rheumatic and Medical Rehabilitation in Duhok, Iraq.
Results: The patients in the experimental group had significantly better daily activity functions compared to the first and second control groups in pain (2.22 vs 4.81 and 12.7; P < .0001), stiffness (0.39 vs 1.56 and 4.33; P < .0001), physical function (5.72 vs 13.31 and 38.13; P < .0001), and the total score (8.33 vs 19.69 and 55.33; P < .0001) at 2 months. The patients in the experimental and the first control groups had statistically significantly lower balance scores compared to the second control group at 2 months, 8.56 versus 9.30. At 3 months, similar patterns were observed for the daily activity function and balance. Conclusions: This study showed that combining HBE and cryotherapy may be an effective technique to improve function among patients with KOA. Cryotherapy could be suggested as a complementary therapy for KOA patients. abstract_id: PUBMED:29617207 Impact of Balance Confidence on Daily Living Activities of Older People with Knee Osteoarthritis with Regard to Balance, Physical Function, Pain, and Quality of Life - A Preliminary Report. Objectives: The objective of this study was to explore the impact of balance confidence on different activities of daily living (ADL) in older people with knee osteoarthritis (OA). Methods: Forty-seven consecutive participants with knee OA were included in this cross-sectional study. They were divided according to the results of the Activities-specific Balance Confidence (ABC) Scale into a group with a low level of confidence in physical functioning (ABC < 50, n = 22) and a group with moderate and high levels of confidence (ABC ≥ 50, n = 25). Results: In the ABC < 50 group, the effect of pain on ADL, the physician's global assessment of the disease, and the Western Ontario and McMaster Universities Osteoarthritis Index scores were significantly higher, while quality of life (Short form-36) was lower compared to the ABC ≥ 50 group. No significant difference was found between the two groups regarding the static and dynamic balance measurements. Conclusions: Older people with knee OA who were less confident in their daily physical activities had more physical difficulties and a greater effect of pain on ADL, lower quality of life, and a higher physician's global assessment, but no differences were obtained in balance tests. Clinical Implications: In people with knee OA, decreased balance confidence is associated with more physical difficulties, an increased effect of pain on ADL, and lower quality of life. An improved awareness of decreased balance confidence may lead to more effective management of older people with knee OA by improving their mobility and QOL through rehabilitation. Furthermore, future research in that direction is warranted. abstract_id: PUBMED:37674803 The effectiveness of peroneal nerve stimulation combined with neuromuscular electrical stimulation in the management of knee osteoarthritis: A randomized controlled single-blind study. Objectives: This study aimed to compare the effectiveness of neuromuscular electrical stimulation (NMES) combined with peroneal nerve stimulation (PNS) on muscle strength around the knee, proprioception, pain, functional status, and quality of life in patients with knee osteoarthritis (OA). Patients And Methods: The prospective, randomized, single-blinded, controlled trial included 63 patients with clinical and radiological diagnoses of knee OA between December 2019 and March 2020.
The patients were divided into two groups: Group 1 (NMES+PNS, n=31) and Group 2 (NMES, n=32). The patients were followed up at two and six weeks. Main outcome measures were the Visual Analog Scale, Western Ontario and McMaster Universities Arthritis Index, Nottingham Health Profile, and 100-m walking test; quadriceps muscle strength, hamstring muscle strength (HMS), and joint position sense were evaluated using a computer-controlled isokinetic dynamometer at 60°/sec, 90°/sec, and 120°/sec angular velocities. The proprioception was evaluated at 30° and 60° flexion angles using the same device. Results: Two patients from Group 1 and two patients from Group 2 were excluded from the study after they failed to show up for the six-week control. As a result, the study was completed with 59 patients (30 females, 29 males; 55.9±6.1 years; range, 40 to 65 years). There was a significant difference between the two groups in the 100-m walking test parameter at the six-week control in favor of Group 1 (p<0.05). There was a significant difference in favor of Group 1 in the parameters of proprioception (30° and 60°) and HMS (60° and 90°) in both the two-week evaluation and six-week controls (p<0.05). The HMS 120° parameter showed a significant difference in favor of Group 1 at the six-week control (p<0.05). Conclusion: In patients with knee OA, we believe that PNS combined with NMES may be more effective than NMES treatment alone in terms of proprioception, HMS, and functional status. abstract_id: PUBMED:37926438 Effectiveness of neuromuscular electrical stimulation training combined with exercise on patient-reported outcomes measures in people with knee osteoarthritis: A systematic review and meta-analysis. Objective: This study examined the effectiveness of neuromuscular electrical stimulation (NMES) added to the exercise or superimposed on voluntary contractions on patient-reported outcomes measures (PROMs) in people with knee osteoarthritis (OA). Methods: This systematic review was described according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. Randomized controlled trials (RCTs) were obtained from a systematic literature search in five electronic databases (PubMed, PEDro, LILACS, EMBASE, and SPORTDiscus) in April 2022. We described the effects of intervention according to each PROMs (scores for Pain; Self-reported functional ability; Symptoms (hear clicking, swelling, catching, restricted range of motion, and stiffness); Daily living function; Sports function; and Quality of life) and used a random-effect model to examine the impact of NMES plus exercise on pain compared with exercise in people with knee OA. Results: Six RCTs (n = 367) were included. In the qualitative synthesis, the systematic literature analysis showed improvement in pain after NMES plus exercise compared with exercise alone in three studies. The other three studies revealed no difference between groups in pain, although similar improvement after treatments. In the meta-analysis, NMES at a specific joint angle combined with exercise was not superior to exercise alone in pain management (standardized mean difference = -0.33, 95% CI = -1.05 to 0.39, p = 0.37). There was no additional effect of NMES on exercise on self-reported functional ability, stiffness, and physical function compared with exercise alone. In only one study, symptoms, activities of daily living, sports function, and quality of life improved after whole-body electrostimulation combined with exercise.
Conclusion: This review found insufficient evidence for the effectiveness of NMES combined with exercise in treating knee OA considering PROMs. While pain relief was observed in some studies, more high-quality clinical trials are needed to support the use of NMES added to the exercise in clinical practice. Electrical stimulation in a whole-body configuration combined with exercise shows promise as an alternative treatment option. abstract_id: PUBMED:30514113 Short-term effects of neuromuscular electrical stimulation and ultrasound therapies on muscle architecture and functional capacity in knee osteoarthritis: a randomized study. Objective: To determine the effects of ultrasound therapy and neuromuscular electrical stimulation (NMES) application on the muscle architecture and functional capacity in patients with knee osteoarthritis. Design: A randomized study. Subjects: A total of 60 patients with knee osteoarthritis. Interventions: Participants were randomized into one of the following two intervention groups, five days a week, for three weeks: the combination of NMES application, hot pack, and exercise therapy was applied to the NMES group. The combination of therapeutic ultrasound, hot pack and exercise therapy was applied to the ultrasound therapy group. Main Measures: Subjects were evaluated for pain and functional capacity with the use of the visual analog pain scale (VAS), Western Ontario and McMaster Universities Arthritis Index (WOMAC), and 15-meter walking test. The muscle architecture (muscle thickness, pennation angle and fascicle length) was assessed from vastus lateralis and quadriceps femoris muscles bilaterally by ultrasonography. Results: Two groups presented significant improvements in all outcome measures before and after treatment (P < 0.01). There were significant improvements in VAS rest pain (P < 0.05), VAS activity pain (P < 0.05), WOMAC pain (P < 0.05), WOMAC stiffness score (P < 0.05), and WOMAC physical function (P < 0.05) for the ultrasound therapy group in comparison to the NMES group. NMES group exhibited more increases in the muscle thickness and fascicle length values when compared to ultrasound therapy group (P < 0.05). Conclusion: Ultrasound therapy appears to be an effective treatment in reducing pain and improving functional capacity. NMES application has more effects on the muscle architecture. abstract_id: PUBMED:30744063 Current Status and Changes in Pain and Activities of Daily Living in Elderly Patients with Osteoarthritis Before and After Unilateral Total Knee Replacement Surgery. Knee osteoarthritis (OA) is a very common disease in the elderly, and total knee replacement (TKR) surgery is currently considered the most effective treatment. A prospective, observational, repeated measures study was performed to explore the current status and changes in pain and activities of daily living (ADL) in 58 OA elderly patients undergoing unilateral TKR. The Wong-Baker Faces Pain Rating Scale (WBS) for pain and the self-reported Barthel Index for ADL were measured on the day before surgery, 48 hours after surgery, and the day before discharge. Moderate pain was reported before surgery. Pain significantly improved after surgery and before discharge. At all three time points, pain scores were significantly higher in patients who used assistive devices compared to those who did not. Partial independence in ADL was reported before surgery.
The ADL scores reported were highest before surgery, and those reported after surgery were lowest. However, ADL scores gradually increased before discharge. ADL scores were higher in the subjects who lived in detached, single-family homes compared to those who lived in bungalows at all three time points. The results could be used to screen for elderly patients with knee OA at high risk for pain or low ADL and to provide timely intervention strategies as soon as possible. abstract_id: PUBMED:26537511 Femoral quadriceps neuromuscular electrical stimulation after total knee arthroplasty: a systematic review. The purpose of this study was to evaluate the effects of neuromuscular electrical stimulation in patients submitted to total knee arthroplasty. This was a systematic review with no language or publication status restriction. Our search was made in Cochrane Library, MEDLINE, Embase and LILACS. Randomized or quasi-randomized clinical trials evaluating neuromuscular electrical stimulation after total knee arthroplasty were included. Four studies with moderate risk of bias and low statistical power were included, totaling 376 participants. There was no statistically significant difference in knee function, pain and range of motion during 12-month follow-up. This review concluded that neuromuscular electrical stimulation was less effective than traditional rehabilitation in function, muscular strength and range of motion. However, this technique was useful for quadriceps activation during the first days after surgery. Answer: Yes, neuromuscular electrical stimulation (NMES) is effective for improving pain, function, and activities of daily living in patients with knee osteoarthritis. A randomized clinical trial showed that an eight-week treatment program of NMES combined with exercises resulted in statistically significant improvements in pain intensity, Lequesne index (a measure of osteoarthritis severity), and activities of daily living (ADL) scale compared to a control group that received an orientation program (PUBMED:23657509). Additionally, another study found that NMES combined with peroneal nerve stimulation (PNS) was more effective than NMES alone in terms of proprioception, hamstring muscle strength, and functional status in patients with knee osteoarthritis (PUBMED:37674803). However, a systematic review and meta-analysis examining the effectiveness of NMES added to exercise on patient-reported outcomes measures (PROMs) in people with knee osteoarthritis found insufficient evidence for the effectiveness of NMES combined with exercise in treating knee OA considering PROMs. While pain relief was observed in some studies, more high-quality clinical trials are needed to support the use of NMES added to exercise in clinical practice (PUBMED:37926438). Moreover, a study on the short-term effects of NMES and ultrasound therapies on muscle architecture and functional capacity in knee osteoarthritis patients indicated that NMES application has more effects on muscle architecture, while ultrasound therapy appears to be an effective treatment in reducing pain and improving functional capacity (PUBMED:30514113). In summary, NMES can be an effective component of rehabilitation protocols for improving pain, function, and activities of daily living in knee osteoarthritis patients, although the evidence for its effectiveness when combined with exercise is not conclusive and requires further research.
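The meta-analysis abstract above (PUBMED:37926438) summarizes its pooled pain result as a standardized mean difference (SMD = -0.33, 95% CI -1.05 to 0.39). As a brief, hedged illustration of what that statistic represents - not a reproduction of the review's actual calculation - the Python sketch below computes a Cohen's-d-style SMD and an approximate 95% confidence interval from invented group summary statistics.

import math

def smd_with_ci(m1, s1, n1, m2, s2, n2):
    """Standardized mean difference (Cohen's d) with an approximate 95% CI.

    Illustrative only: the inputs used below are hypothetical summary
    statistics, not data from any of the cited trials.
    """
    # Pooled standard deviation of the two groups
    s_pooled = math.sqrt(((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2))
    d = (m1 - m2) / s_pooled
    # Large-sample approximation to the standard error of d
    se = math.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    return d, (d - 1.96 * se, d + 1.96 * se)

# Hypothetical end-of-treatment pain scores (lower = less pain):
# NMES plus exercise vs. exercise alone, 30 participants per arm.
d, (lo, hi) = smd_with_ci(m1=3.1, s1=1.8, n1=30, m2=3.7, s2=1.9, n2=30)
print(f"SMD = {d:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")

A negative SMD favours the NMES-plus-exercise arm in this toy example; a confidence interval that crosses zero, as in the review's pooled estimate, indicates no statistically significant difference.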
Instruction: Does fat intake predict adiposity in healthy children and adolescents aged 2--15 y? Abstracts: abstract_id: PUBMED:11423924 Does fat intake predict adiposity in healthy children and adolescents aged 2--15 y? A longitudinal analysis. Objective: To investigate the relationship between food energy and macronutrient intake and body fatness assessed up to seven times between 2 and 15 y of age. Design: Prospective, observational study. Generalised linear estimating equations were used to evaluate the longitudinal relationship between body fatness and macronutrient intake. Regression analysis was used to assess whether body fatness at a particular age was predicted by intake at any of the previous ages. Setting: Community-based project in Adelaide, South Australia. Subjects: In all 143--243 subjects from a representative birth cohort of healthy children recruited in 1975 and followed over 15 y. Main Outcome Measures: The dependent variables were body mass index (BMI), triceps (TC) and subscapular (SS) skinfolds, expressed as standard deviation (s.d.) scores at each age. The predictor variables were energy-adjusted macronutrient intake and total energy intake, estimated from a 3--4 day diet diary, the previous corresponding measure of body fatness, sex and parental BMI, TC or SS. Results: Across 2--15 y energy-adjusted fat and carbohydrate intakes were respectively directly and inversely related to SS skinfold measures but not to either BMI or TC skinfold. The best predictor of fatness was previous adiposity, with the effect strengthening as the age interval shortened. Parental BMI, maternal SS and paternal TC contributed to the variance of the corresponding measure in children at some but not all ages. Conclusions: The current level of body fatness of the child and parental adiposity are more important predictors than dietary intake variables of risk of children becoming or remaining overweight as they grow. abstract_id: PUBMED:28531133 Urban-Rural Disparities in Energy Intake and Contribution of Fat and Animal Source Foods in Chinese Children Aged 4-17 Years. Objective: Excessive energy intake and poor food choices are major health concerns associated with overweight and obesity risk. This study aims to explore disparities in energy intake and the contributions from fat and animal source foods among Chinese school-aged children and adolescents in different communities based on urbanization levels. Design: Three consecutive 24 h recalls were used to assess dietary intake. Subjects' height and weight were measured using standard equipment. Standardized questionnaires were used to collect household demographic and socioeconomic characteristics by trained interviewers. Setting: The 2011 China Health and Nutrition Survey is part of an ongoing longitudinal household survey across 228 communities in nine provinces and three mega-cities in China. Subjects consisted of children aged 4-17 years (n = 1866; 968 boys and 898 girls). Results: The estimated average energy intake was 1604 kcal/day (1706 kcal/day for boys and 1493 kcal/day for girls). Proportions of energy from fat and animal source foods were 36.8% and 19.8% respectively and did not differ by gender. Total energy intake showed no significant disparity, but the proportion of energy from fat and animal source foods increased with increasing urbanization levels and increasing household income level. 
The largest difference in consumption percentages between children in rural areas and those in highly urban areas was for milk and dairy products (14.8% versus 74.4%) and the smallest difference was seen in percent consuming meat and meat products (83.1% versus 97.1%). Conclusions: Results of this study highlight the need for developing and implementing community-specific strategies to improve Chinese children's diet quality. abstract_id: PUBMED:37049523 Optimal Protein Intake in Healthy Children and Adolescents: Evaluating Current Evidence. High protein intake might elicit beneficial or detrimental effects, depending on life stages and populations. While high protein intake in elder individuals can promote beneficial health effects, elevated protein intakes in infancy are discouraged, since they have been associated with obesity risks later in life. However, in children and adolescents (4-18 years), there is a scarcity of data assessing the effects of high protein intake later in life, despite protein intake being usually two- to three-fold higher than the recommendations in developed countries. This narrative review aimed to revise the available evidence on the long-term effects of protein intake in children and adolescents aged 4-18 years. Additionally, it discusses emerging techniques to assess protein metabolism in children, which suggest a need to reevaluate current recommendations. While the optimal range is yet to be firmly established, available evidence suggests a link between high protein intake and increased Body Mass Index (BMI), which might be driven by an increase in Fat-Free Mass Index (FFMI), as opposed to Fat Mass Index (FMI). abstract_id: PUBMED:34371228 Preliminary Assessment of the Healthy Beverage Index for US Children and Adolescents: A Tool to Quantify the Overall Beverage Intake Quality of 2- to 19-Year Olds. Background: Improving beverage patterns of children and adolescents is recommended for combatting obesity and reducing disease risk. Therefore, it is important to assess beverage intake quality in this population. For adults, the Healthy Beverage Index (HBI) was created to assess beverage intake quality, but a similar tool did not exist for children and adolescents. Objective: The objective was to develop an HBI for US Children and Adolescents (HBI-CA), and then assess the validity and reliability of this tool. Design: Modeled after the adult HBI, age-specific, evidence-based beverage recommendations were compiled. Ten components were included to assess beverage intake quality. Validity and reliability were assessed using cross-sectional data and methods similar to those used for the evaluation of the Healthy Eating Index. Participants: The 2015-2016 National Health and Nutrition Examination Survey provided 24-hour dietary recall data for 2,874 children and adolescents aged 2 to 19 years. Main Outcome Measures: HBI-CA scores were the main outcome measure. Statistical Analyses Performed: To assess validity, independent t tests were used to determine differences in HBI-CA component and total scores among groups, and principal component analysis was completed to examine multidimensionality of the HBI-CA. Pearson bivariate correlations were used to assess reliability. Results: The HBI-CA produced a (mean ± standard error) total score of 69.2 ± 0.8, which is similar to the adult HBI mean total score of 63. 
Principal component analysis identified six factors, indicating the multidimensionality of the HBI-CA, with more than one combination of components contributing to variation in total scores. Most HBI-CA components were significantly correlated to the total score, with met fluid requirements, total beverage energy, sugar-sweetened beverage, and water components demonstrating the strongest correlations (r range = 0.335-0.735; P ≤ 0.01). Conclusions: The results provide preliminary evidence to support the validity and reliability of the HBI-CA. If future research establishes the predictive validity and sensitivity of the HBI-CA, this tool could be useful to quantify the beverage intake quality of children and adolescents. abstract_id: PUBMED:31083548 Associations between Physical Activity and Food Intake among Children and Adolescents: Results of KiGGS Wave 2. A balanced diet and sufficient physical activity are essential for the healthy growth of children and adolescents and for obesity prevention. Data from the second wave of the population-based German Health Interview and Examination Survey for Children and Adolescents (KiGGS Wave 2; 2014-2017) were used to analyse the association between food intake and physical activity among 6- to 17-year-old children and adolescents (n = 9842). Physical exercise (PE) and recommended daily physical activity (RDPA) were assessed with self-administered questionnaires and food intake by a semi-quantitative food frequency questionnaire. Multivariable logistic regression was used to analyse the association between food group intake (dependent variable) and level of PE or RDPA. High levels of physical activity (PE or RDPA) were associated with higher consumption of juice, water, milk, dairy products, fruits, and vegetables among both boys and girls, and among boys with a higher intake of bread, potatoes/pasta/rice, meat, and cereals. Higher PE levels were also less likely to be associated with a high soft drink intake. High levels of RDPA were associated with high intake of energy-dense foods among boys, which was not observed for PE. This study indicates that school-aged children and adolescents with higher levels of physical activity consume more beneficial foods and beverages compared to those with lower physical activity levels. abstract_id: PUBMED:36750328 Double burden of malnutrition among children and adolescents aged 6-17 years in 15 provinces (autonomous regions, municipalities) of China in 1991-2015 Objective: To analyze the status and the trends of the double burden of malnutrition among children and adolescents aged 6-17 years in 15 provinces(autonomous regions and municipalities) of China in 1991, 2000, 2009 and 2015. Methods: The data of China Health and Nutrition Surveys in 1991, 2000, 2009 and 2015 were used, children and adolescents aged 6-17 years were selected as the research objects. After excluding those with missing demographic, dietary data and physical measurement data, 2464, 2094, 929 and 1555 children and adolescents were included in the study in each year. The subjects were divided into lean, normal, overweight and obese groups. The dietary information was collected by 3-day 24-hour dietary recall, and edible oil and condiment intakes were collected by weighing method. The dietary micronutrient intake of children and adolescents was calculated according to the food composition table. 
The estimated average requirement (EAR) was used as the cut-off for dietary micronutrient intake insufficiency to analyze the situation of micronutrient intake deficiency and double burden of malnutrition. Results: The prevalence of underweight of children and adolescents aged 6-17 years in 15 provinces (autonomous regions and municipalities) during 1991-2015 showed a downward trend, while the prevalence of overweight and obesity showed an upward trend (all P<0.05). The prevalence of double burden of malnutrition increased from 6.5% in 1991 to 24.6% in 2015. In 1991, 2000, 2009 and 2015, 94.2%, 92.8%, 97.2% and 93.4% of children and adolescents had insufficient dietary micronutrient intake. In 1991 and 2000, 81.6% and 73.7% of children and adolescents had insufficient intake of 3-7 dietary micronutrients at the same time; in 2009 and 2015, 81.8% and 80.7% of children and adolescents had insufficient intake of 3-9 dietary micronutrients at the same time. Conclusion: The prevalence of overweight and obesity of children and adolescents in 15 provinces (autonomous regions and municipalities) of China was on the rise, the prevalence of insufficient intake of dietary micronutrients is higher, and the double burden of malnutrition was serious. abstract_id: PUBMED:34438567 Dietary Sugar Intake and Its Association with Obesity in Children and Adolescents. Sugar intake has been associated with increased prevalence of childhood overweight/obesity; however, results remain controversial. The aim of this study was to examine the probability of overweight/obesity with higher sugar intakes, accounting for other dietary intakes. Data from 1165 children and adolescents aged ≥2-18 years (66.8% males) enrolled in the Hellenic National Nutrition and Health Survey (HNNHS) were used; specifically, 781 children aged 2-11 years and 384 adolescents 12-18 years. Total and added sugar intake were assessed using two 24 h recalls (24 hR). Foods were categorized into specific food groups to evaluate the main foods contributing to intakes. A significant proportion of children (18.7%) and adolescents (24.5%) exceeded the recommended cut-off of 10% of total energy intake from added sugars. Sweets (29.8%) and processed/refined grains and cereals (19.1%) were the main sources of added sugars in both age groups, while in adolescents, the third main contributor was sugar-sweetened beverages (20.6%). Being overweight or obese was 2.57 (p = 0.002) and 1.77 (p = 0.047) times more likely for intakes ≥10% of total energy from added sugars compared to less than 10%, when accounting for food groups and macronutrient intakes, respectively. The predicted probability of becoming obese was also significant with higher total and added-sugar consumption. We conclude that high consumption of added sugars increased the probability for overweight/obesity among youth, irrespectively of other dietary or macronutrient intakes. abstract_id: PUBMED:20505952 Body fat reference curves for healthy Turkish children and adolescents. Unlabelled: Childhood obesity is a major worldwide health problem. In addition to body mass index (BMI), body fat percentiles may be used to predict future cardiovascular and metabolic health risks. The aim of this study is to define new age- and gender-specific body fat centiles for Turkish children and adolescents. A total of 4,076 (2,276 girls, 1,800 boys) children and adolescents aged 6-18 years were recruited for this study. Total body fat was measured by a bioelectrical impedance noninvasive method.
Body fat percentiles were produced by the LMS method. The body fat percentile curves of boys appear to rise from age 6 to 12 years and then slope downwards to age 15 years and then flatten off. The body fat % percentiles of girls increased until 14 years of age through 75th to 97th percentiles and then slope downwards, but through the third to 50th percentiles, they showed a downward slope after 14 years old. Conclusions: Since BMI may not always reflect body fat content, direct assessment of adiposity by a practical method would be significantly useful for clinical decisions. Therefore, this study provides normative data for body fat percentage in healthy Turkish children and adolescents. To this goal we used a practical and clinically applicable method. These references can be useful for evaluation of overweight and obesity. abstract_id: PUBMED:36432482 Dietary Intakes and Eating Behavior between Metabolically Healthy and Unhealthy Obesity Phenotypes in Asian Children and Adolescents. Diet plays a critical role in the development of obesity and obesity-related morbidities. Our study aimed to evaluate the dietary food groups, nutrient intakes and eating behaviors of metabolically healthy and unhealthy obesity phenotypes in an Asian cohort of children and adolescents. Participants (n = 52) were asked to record their diet using a 3-day food diary and intakes were analyzed using a nutrient software. Eating behavior was assessed using a validated questionnaire. Metabolically healthy obesity (MHO) or metabolically unhealthy obesity (MUO) were defined based on criteria of metabolic syndrome. Children/adolescents with MUO consumed fewer whole grains (median: 0.00 (interquartile range: 0.00-0.00 g) vs. 18.5 g (0.00-69.8 g)) and less polyunsaturated fat (6.26% kcal (5.17-7.45% kcal) vs. 6.92% kcal (5.85-9.02% kcal)), and had lower cognitive dietary restraint (15.0 (13.0-17.0) vs. 16.0 (14.0-19.0)) compared to children/adolescents with MHO. Deep fried food, fast food and processed convenience food were positively associated with both systolic (β: 2.84, 95%CI: 0.95-6.62) and diastolic blood pressure (β: 4.83, 95%CI: 0.61-9.04). Higher polyunsaturated fat intake (OR: 0.529, 95%CI: 0.284-0.986) and cognitive dietary restraint (OR: 0.681, 95%CI: 0.472-0.984) were associated with a lower risk of the MUO phenotype. A healthier diet composition and positive eating behavior may contribute to favorable metabolic outcomes in children and adolescents with obesity. abstract_id: PUBMED:32272828 Assessment of diet quality, nutrient intake, and dietary behaviours in obese children compared to healthy children. Introduction: Obesity has been recognised by the World Health Organisation as the most common chronic disease in the world. An increase in the prevalence of childhood obesity has been seen in Poland. Purpose: The aim of this study was to compare the dietary behaviours, quality of diet, and nutrient intake in a group of obese children compared to children with normal body weight. Material And Methods: A total of 105 children aged between seven and 15 years were surveyed. The test group consisted of 52 obese patients from a Dietary Clinic in the Children's University Hospital in Cracow, without accompanying chronic diseases, and 53 normal weight patients. Children's nutritional behaviours and diet were assessed with a questionnaire (FFQ) and a 24-hour recall. To assess diet quality the Healthy Eating Indicator (HDI) was used.
Results: The diet of the studied obese children was rich in highly processed and high-fat products, and poor in vegetables, fruits, and wholegrain products. Obese children consumed on average 73.42 ±19.60 kcal per body weight whereas normal-weight children consumed 49.89 ±12.20 (p < 0.01) and obtained more energy from fat than from carbohydrates. A low intake of vitamins A and D, folic acid, calcium, iodine, and iron in both groups was observed. The HDI showed a low-quality diet in nearly 40% of obese children. Conclusions: Dietary treatment of obese children should concentrate not only on reduction of calories from food products but also on choices of high-nutrient-density products and on developing healthy eating habits. Answer: No, fat intake does not appear to be a strong predictor of adiposity in healthy children and adolescents aged 2--15 years. A longitudinal analysis conducted on subjects from a representative birth cohort of healthy children recruited in 1975 and followed over 15 years found that across the ages of 2--15 years, energy-adjusted fat and carbohydrate intakes were respectively directly and inversely related to subscapular (SS) skinfold measures but not to either body mass index (BMI) or triceps (TC) skinfold. The study concluded that the current level of body fatness of the child and parental adiposity are more important predictors than dietary intake variables of the risk of children becoming or remaining overweight as they grow (PUBMED:11423924).
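One of the abstracts in this record (PUBMED:20505952) states that the body fat percentile curves were produced with the LMS method. As a hedged sketch of how that method converts an individual measurement into a z-score and percentile - using invented L (skewness), M (median) and S (coefficient of variation) reference values, not the parameters fitted in the cited study - consider the following Python example.

from math import log
from statistics import NormalDist

def lms_z_score(y, L, M, S):
    """LMS (Cole) transformation of a measurement y into a z-score.

    z = ((y / M) ** L - 1) / (L * S)   if L != 0
    z = ln(y / M) / S                  if L == 0
    L, M and S are age- and sex-specific reference parameters.
    """
    if L == 0:
        return log(y / M) / S
    return ((y / M) ** L - 1) / (L * S)

# Invented reference parameters for body fat % at one age/sex stratum
# (illustrative only; not taken from the Turkish reference curves).
z = lms_z_score(y=28.0, L=-0.5, M=22.0, S=0.18)
percentile = NormalDist().cdf(z) * 100
print(f"z-score = {z:.2f}, roughly the {percentile:.0f}th percentile")

Reading a child's measurement off such curves is what allows a body fat value to be expressed as an age- and sex-specific percentile rather than a raw number.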
Instruction: Relapsed actinic keratosis evaluation: an observational Italian multicenter prospective study. Does gender have a role? Abstracts: abstract_id: PUBMED:24819640 Relapsed actinic keratosis evaluation: an observational Italian multicenter prospective study. Does gender have a role? Aim: Relapsed actinic keratoses evaluation study (RAKE) was performed in nine Italian centers of dermatology in order to observe the outcome of the treatments of these common skin neoplasms. Methods: A total of 182 patients were enrolled in 2 cohorts: the first included 144/182 patients (79.1%) evaluated after 6 months from clinical remission, and the second 116/182 (63.7%) evaluated for at least 12 months after clinical remission. Patients were previously treated with topical diclofenac 3% in hyaluronic acid, cryotherapy, photodynamic, curettage or imiquimod cream. Results: Subjects with history of malignant skin diseases showed an increased number of new lesions at 16 months from baseline (12 months from remission) compared to patients without history of cancers (mean 1.58 versus 1.17). Hyperkeratotic lesions healed more rapidly but relapsed at 6 months more frequently than non-hyperkeratotic ones (32.9% versus 20.7%). The results showed gender-related differences: male patients recovered better and independently from the treatment used; in contrast, men showed a higher recurrence (32% at 6 months and 6.6% between 6 and 12 months versus 16% at 6 months and 5.9% between 6 and 12 months for females) and a higher average number of new lesions after 12 months from remission (1.6 versus 0.88 for females). Conclusion: The results may suggest a lower adherence to photoprotection in male patients. Hyperkeratotic lesions recurred mostly at 6 months in comparison to non-hyperkeratotic lesions. abstract_id: PUBMED:32235587 Non-Melanoma Skin Cancer in Outdoor Workers: A Study on Actinic Keratosis in Italian Navy Personnel. Occupational exposure to ultraviolet radiation is one of the main risk factors for non-melanoma skin cancer (NMSC) development. The most common variants of NMSC are basal cell carcinomas, squamous cell carcinomas, and actinic keratosis (AK). The latter is nowadays considered by most authors as an early squamous cell carcinoma rather than a precancerous lesion. Outdoor workers have a higher risk of developing NMSC because they spend most of the working day outside. The aim of this descriptive study was to assess the prevalence of skin lesions, especially AK, in a professional category of individuals exposed to ultraviolet (UV) radiation: the Italian Navy. From January to June 2016, a questionnaire and a total skin examination of 921 military personnel were administered by medical specialists (dermatologists) in seven different Italian Navy centres. AK was detected in 217 of 921 (23.5%) workers. Older age, outdoor occupation, longer working life, and fair skin seem to promote the development of AK. Of the 217 workers with AK, 187 (86.2%) had lesions in chronically sun-exposed skin areas. Italian Navy personnel have a high AK prevalence. Further studies are needed to investigate occupational hazards and their health effects among outdoor workers to promote protective behaviour and raise awareness of skin cancer. abstract_id: PUBMED:25171087 Melanoma density and relationship with the distribution of melanocytic naevi in an Italian population: a GIPMe study--the Italian multidisciplinary group on melanoma. 
The most frequent site for melanoma is the back in men and the lower limbs in women, where intermittent sun exposure has been reported to be an environmental agent, although studies on age-specific incidence have suggested that melanoma in chronically sun-exposed areas, such as the face, increases with age. To identify the preferential development of melanoma in chronically or intermittently sun-exposed areas and the relationship between body site distribution and parameters such as sex, age, distribution of melanocytic naevi, atypical naevi and actinic keratoses, a prospective epidemiological multicentre study was carried out on all the consecutive melanoma cases diagnosed in a 2-year period from 27 Italian GIPMe centres (GIPMe: the Italian Multidisciplinary Group on Melanoma). Both the relative density of melanoma (RDM), defined as the ratio between observed and expected melanoma for a specific body site, and the average nevi density were identified. The most common melanoma site was the back, a factor that was not affected by either age or sex, even if men had higher density values. Statistically significant higher RDM values were observed in women aged more than 50 years for leg lesions and in the anterior thighs for young women (<50 years), whereas the lowest values were observed in the posterior thighs in women of any age. Facial RDM was statistically significantly higher than expected in both male and female patients more than 50 years of age. Melanoma was associated with a significantly higher atypical naevi density only for the back, chest and thighs. Indeed, facial melanoma was related to the presence of more than four actinic keratoses and not naevi density. To the best of our knowledge, the RDM method was applied for the first time together with naevus density calculation to obtain these data, which strongly substantiate the 'divergent pathway' hypothesis for the development of melanoma, but do not find a direct correlation between melanoma and nevi for each anatomical site. abstract_id: PUBMED:29311040 Prevalence and risk factors of actinic keratosis in patients attending Italian dermatology clinics. Actinic keratosis (AK) is a common keratinocyte intra-epidermal neoplasia. To assess AK prevalence and potential risk factors in patients attending Italian general dermatology clinics. This retrospective study was conducted on clinical data from consecutive white outpatients aged ≥30 years, attending 24 general dermatology clinics between December 2014 and February 2015. AK prevalence (entire population) and multivariate risk factor analysis (patients with current/previous AK and complete data) are presented. AK prevalence in 7,284 patients was 27.4% (95% CI: 26.4-28.4%); 34.3% in men and 20.0% in women (p<0.001). Independent AK risk factors in 4,604 patients were: age (OR: 4.8 [95% CI: 3.5-6.5] for 46-60 years, increasing with older age to OR: 41.5 [95% CI: 29.5-58.2] for >70 years), history of other non-melanoma skin cancers (OR: 2.7 [2.2-3.3]), residence in southern Italy/Sardinia (OR: 2.6 [2.1-3.0]), working outdoors >6 hours/day (OR: 1.9 [1.4-2.4]), male gender (OR: 1.7 [1.4-2.0]), facial solar lentigos (OR: 1.6 [1.4-1.9]), light hair colour (OR: 1.5 [1.2-1.8]), prolonged outdoor recreational activities (OR: 1.4 [1.2-1.7]), light eye colour (OR: 1.3 [1.1-1.6]), skin type I/II (OR: 1.3 [1.1-1.6]), and alcohol consumption (OR: 1.2 [1.0-3.3]).
BMI ≥25.0 (OR: 0.6 [0.5-0.7]), regular sunscreen use (OR: 0.7 [0.6-0.8]), and a lower level of education (OR: 0.8 [0.7-1.0]) were independent protective factors. AK prevalence was high in Italian dermatology outpatients. We confirm several well-known AK risk factors and reveal possible novel risk and protective factors. Our results may inform on the design and implementation of AK screening and educational programmes. abstract_id: PUBMED:32301503 The use of ingenol mebutate to treat actinic keratosis in standard clinical practice: a prospective phase IV multicenter observational cohort study. Background: Actinic keratosis (AK) is a chronic, precancerous skin disease. Various treatment options exist, including ingenol mebutate gel. The aim of this study was to compare its effectiveness and tolerability as well as the impact of therapy on patients' quality of life in standard clinical practice. Methods: A multicenter study was carried out involving a 12-month follow-up period. A sample of 440 patients was included. Medical history details were recorded. Effectiveness, compliance to treatment, quality of life (EQ-5D-5L), and treatment satisfaction questionnaire for medication (TSQM-9) at week 8 were assessed. Results: Of the total 440 patients, 428 (97.3%) attended the 8-week assessment. The number of patients with complete clearance was 337 (78.7%). EQ VAS score was significantly increased (P < 0.001). As far as TSQM-9 is concerned, patients with complete clearance reported statistically significantly higher satisfaction in effectiveness, convenience, and global satisfaction. At the 12-month follow-up visit, 323 patients (95.8%) retained their clearance status. Nineteen patients did not apply the ingenol mebutate gel on consecutive days. For these patients, the complete clearance rate was 42.1%, while for those who were treated on consecutive days, the complete clearance rate was 80.6%. None of our patients developed skin cancer. Conclusions: This study supports that ingenol mebutate is effective for the treatment of AK with a good safety profile. It significantly improves quality of life. Limited adherence to treatment might be associated with reduced effectiveness. abstract_id: PUBMED:37211296 Plum-blossom needle tapping enhances the efficacy of ALA photodynamic therapy for facial actinic keratosis in Chinese population: a randomized, multicenter, prospective, and observer-blind study. Background: Photodynamic therapy (PDT) with 5-aminolevulinic acid (ALA) is a reliable treatment for actinic keratosis (AK), but its effect needs to be enhanced in thick lesions. Plum-blossom needle is a traditional Chinese cost-effective instrument for enhancing the transdermal delivery of ALA. However, whether it could improve the efficacy of AK treatment has not yet been investigated. Objective: To compare the efficacy and safety of plum-blossom needle-assisted PDT in facial AK in the Chinese population. Methods: In this multicenter, prospective study, a total of 142 patients with AKs (grades I-III) were randomized into the plum-blossom needle-assisted PDT group (P-PDT) and control PDT group (C-PDT). In the P-PDT group, each AK lesion was tapped vertically by a plum-blossom needle before the application of 10% ALA cream. In the C-PDT group, each lesion was only wiped with regular saline before ALA cream incubation. Then, 3 hours later, all the lesions were irradiated with light-emitting diode (LED) at a wavelength of 630 nm.
PDT was performed once every 2 weeks until all lesions achieved complete remission or patients completed six sessions. The efficacy (lesion response) and safety (pain scale and adverse events) in both groups were evaluated before each treatment and at every follow-up visit at 3-month intervals until 12 months. Results: In the P-PDT and C-PDT groups, the clearance rates for all AK lesions after the first treatment were 57.9% and 48.0%, respectively (P < 0.05). For grade I AK lesions, the clearance rates were 56.5% and 50.4%, respectively (P = 0.34). For grade II AK lesions, the clearance rates were 58.0% and 48.9%, respectively (P = 0.1). For grade III AK lesions, the clearance rates were 59.0% and 44.2%, respectively (P < 0.05). Moreover, grade III AK lesions in the P-PDT group required fewer treatment sessions (P < 0.05). There was no significant difference in the pain score between the two groups (P = 0.752). Conclusion: Plum-blossom needle tapping may enhance the efficacy of ALA-PDT by facilitating ALA delivery in the treatment of AK. abstract_id: PUBMED:29070025 Daylight photodynamic therapy versus cryosurgery for the treatment and prophylaxis of actinic keratoses of the face - protocol of a multicenter, prospective, randomized, controlled, two-armed study. Background: Photodynamic therapy with daylight (DL-PDT) is efficacious in treating actinic keratosis (AK), but the efficacy of field-directed, repetitive DL-PDT for the treatment and prophylaxis of AK in photodamaged facial skin has not yet been investigated. Methods/design: In this multicenter, prospective, randomized, controlled, two-armed, observer-blinded trial, patients with a minimum of 5 mild-to-moderate AK lesions on photodamaged facial skin are randomly allocated to two treatment groups: DL-PDT with methyl aminolevulinate (MAL) and cryosurgery. In the DL-PDT group (experimental group), 5 treatments of the entire face are conducted over the course of 18 months. After preparation of the lesion and within 30 min after MAL application, patients expose themselves to daylight for 2 h. In the control group, lesion-directed cryosurgery is conducted at the first visit and, in the case of uncleared or new AK lesions, also at visits 2 to 5. The efficacy of the treatment is evaluated at visits 2 to 6 by documenting all existing and new AK lesions in the face. Cosmetic results and improvement of photoaging parameters are evaluated by means of a modified Dover scale. Primary outcome parameter is the cumulative number of AK lesions observed between visits 2 and 6. Secondary outcome parameters are complete clearance of AK, new AK lesions since the previous visit, cosmetic results independently evaluated by both patient and physician, patient-reported pain (visual analogue scale), patient and physician satisfaction scores with cosmetic results, and patient-reported quality of life (Dermatology Life Quality Index). Safety parameters are also documented (adverse events and serious adverse events). Discussion: This clinical trial will assess the efficacy of repetitive DL-PDT in preventing AK and investigate possible rejuvenating effects of this treatment. (Trial registration: ClinicalTrials.gov Identifier: NCT02736760). Trial Registration: ClinicalTrials.gov Identifier: NCT02736760. Study Code Daylight_01. EudraCT 2014-005121-13. abstract_id: PUBMED:22626452 Multicenter case-control study of risk factors for cutaneous melanoma in Valencia, Spain.
Introduction: It is important to identify subgroups within the general population that have an elevated risk of developing cutaneous melanoma because preventive and early-detection measures are useful in this setting. The findings of most studies that have evaluated risk factors for cutaneous melanoma are of limited application in Spain because the populations studied have different pigmentary traits and are subject to different environmental factors. Objective: To identify the phenotypic characteristics and amount of exposure to sunlight that constitute risk factors for cutaneous melanoma in the population of the Autonomous Community of Valencia, Spain. Methods: We performed a multicenter observational case-control study. In total, the study included 242 patients with melanoma undergoing treatment in 5 hospitals and 173 controls enrolled from among the companions of the patients between January 2007 and June 2008. The information was collected by means of a standardized, validated questionnaire. The odds ratio (OR) was calculated for each variable and adjusted using a multiple logistic regression model. Results: The risk factors found to be statistically significant were skin phototypes I and II, blond or red hair, light eye color, abundant melanocytic nevi, and a personal history of actinic keratosis or nonmelanoma skin cancer. After the multivariate analysis, only blond or red hair (OR=1.9), multiple melanocytic nevi (OR=3.1), skin phototypes I and II (OR=2.1), and a personal history of actinic keratosis (OR=3.5) or nonmelanoma skin cancer (OR=8.1) maintained significance in the model as independent predictive variables for melanoma. Conclusions: Our study supports the importance of certain factors that indicate genetic predisposition (hair color and skin phototype) and environmental factors associated with exposure to sunlight. Patients with multiple acquired melanocytic nevi and patients with markers of chronic skin sun damage (actinic keratosis and nonmelanoma cancer) presented a significant increase in risk. abstract_id: PUBMED:33179876 Role of occupational and recreational sun exposure as a risk factor for keratinocytic non-melanoma skin cancers: an Italian multicenter case-control study. Background: Sun exposure is the main external risk factor for keratinocytic non-melanoma skin cancer (NMSC). Outdoor workers are at increased risk, but the relationship of NMSC with occupational solar exposure is often confounded by concurrent recreational sun exposure. We compared the percentage of outdoor workers in NMSC patients versus controls without history of NMSC and assessed occupational and recreational sun exposure in both groups, evaluating also other risk factors and use of protective measures. Methods: Adult NMSC patients and controls without history of NMSC or actinic keratoses, matched for sex and age range, were recruited in the Departments of Dermatology of seven Italian University Hospitals, with a 1:2 patient/control ratio whenever possible. Data were collected using specifically designed questionnaires. Results: Eight hundred thirty-four patients and 1563 controls were enrolled. History of outdoor work was significantly (P=0.033) more frequent in patients. Patients were more sun exposed from outdoor leisure activities (P=0.012) and sunbathed for longer periods (P=0.13) and between 12 pm and 3.30 pm (P=0.011). Cumulative sun exposure during hobbies was similar between patients and controls in outdoor workers, higher (P<0.05) in patients among indoor workers.
Patients and controls with history of outdoor work were more sun exposed at work than during leisure activities (P<0.001). Use of sunscreens by outdoor workers was very low, particularly at work (19.9%). Patients used sunscreens more than controls (P=0.002). Conclusions: Occupational and recreational sun exposure are relevant risk factors for outdoor and indoor workers, respectively. Sunscreens are alarmingly underused, particularly at work, and are used mainly by patients. abstract_id: PUBMED:14730235 Study design and preliminary results from the pilot phase of the PraKtis study: self-reported diagnoses of selected skin diseases in a representative sample of the Italian population. Background: Few data are available on the prevalence of common skin disorders like actinic keratoses in the general population. Such data are mostly needed to better define health needs and to organize medical services. The Prevalence of Actinic Keratoses in the Italian Population Study (PraKtis) was designed to estimate the point prevalence of actinic keratoses and related disorders, e.g. photoaging, in a representative sample of the Italian population. Within the study, information on the history of relevant dermatological diagnoses was also collected. Methods: The study was conducted in collaboration with DOXA, the Italian branch of the Gallup International Association. A representative sample of people aged 45 years or older was selected by picking them from the electoral rolls according to a stratified random sampling design involving a replacement procedure. A total of 180 interviewers specifically trained to collect data on skin diseases and to take photographs of representative lesions on the face and upper limbs, contacted and interviewed the sampled subjects and performed a face-to-face computer-assisted interview. A final sample of about 12000 subjects was foreseen. Results: The pilot phase of the study was conducted between January 1 and June 30, 2003. A total of 3660 subjects were recruited and interviewed. Overall, an estimated 37% of Italian people reported having ever undergone a dermatological consultation in the past. An estimated 29% reported having ever received a specific dermatological diagnosis by a physician. The frequency of specifically enquired diagnoses, weighted according to the distribution of the Italian population, was as follows: atopic dermatitis 4.7%, other eczematous dermatitides 4.3%, urticaria 4.3%, psoriasis 3.1%, skin tumors 1.6%, vitiligo 0.7% and actinic keratoses 0.3%. Conclusion: Skin diseases are frequently reported. The prevalence of actinic keratoses according to self-reported diagnoses was lower than expected based on prevalence data obtained by directly examining people. These discrepancies may be due to underreporting and/or unawareness of lesions by affected people. More precise estimates will be obtained by direct examination of sampled people. Answer: The Relapsed Actinic Keratosis Evaluation (RAKE) study, which was an observational Italian multicenter prospective study, did find that gender plays a role in the outcome of treatments for actinic keratosis (AK). The study observed 182 patients who had been previously treated for AK with various methods, including topical diclofenac 3% in hyaluronic acid, cryotherapy, photodynamic therapy, curettage, or imiquimod cream. The results indicated that male patients generally had better recovery from AK treatments regardless of the treatment used.
However, male patients also exhibited a higher recurrence rate of AK (32% at 6 months and 6.6% between 6 and 12 months) compared to female patients (16% at 6 months and 5.9% between 6 and 12 months). Additionally, after 12 months from remission, male patients had a higher average number of new lesions (1.6 versus 0.88 for females). The study suggested that the observed gender-related differences might be due to a lower adherence to photoprotection among male patients (PUBMED:24819640).
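Several abstracts in the preceding record report adjusted odds ratios from multivariate logistic models (for example, the AK risk factors in PUBMED:29311040 and the melanoma risk factors in PUBMED:22626452). As a brief, hedged illustration of how such an odds ratio can be read - with invented counts, not data from those studies - the Python sketch below computes a crude OR from a 2x2 table; in a fitted logistic model, the adjusted OR for a predictor is simply exp(beta) for that predictor's coefficient.

import math

def odds_ratio(a, b, c, d):
    """Crude odds ratio from a 2x2 table with a Woolf-type 95% CI.

    a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls.
    Illustrative only; the counts passed in below are hypothetical.
    """
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - 1.96 * se_log_or)
    hi = math.exp(math.log(or_) + 1.96 * se_log_or)
    return or_, (lo, hi)

# Invented counts: AK status by outdoor work >6 h/day (illustration only).
or_, (lo, hi) = odds_ratio(a=120, b=180, c=400, d=1100)
print(f"OR = {or_:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")

An OR above 1 with a confidence interval excluding 1 indicates that the exposure is associated with higher odds of the outcome, which is how the adjusted ORs quoted in these abstracts are interpreted.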
Instruction: Can children with SLI detect cognitive conflict? Abstracts: abstract_id: PUBMED:24686792 Can children with SLI detect cognitive conflict? Behavioral and electrophysiological evidence. Purpose: This study examined whether children with specific language impairment (SLI) are deficient in detecting cognitive conflict between competing response tendencies in a GO/No-GO task. Method: Twelve children with SLI (ages 10-12), 22 children with typical language development matched group-wise on age (TLD-A), and 16 younger children with TLD (ages 8-9) matched group-wise on language skills (TLD-L) were tested using a behavioral GO/No-GO paradigm with simultaneous collection of event-related potentials. The N2 component was used as a neural index of the ability to detect conflict between GO and No-GO response tendencies. Results: Hit rates did not differentiate the 3 groups. The TLD-L children demonstrated the highest false-alarm rates. The N2 component was attenuated and showed delayed divergence of GO and No-GO amplitudes in SLI relative to TLD-A children in response to stimuli presented at various probability levels. The N2 effect in children with SLI resembled that of children with TLD who were approximately 3 years younger. Conclusions: School-age children with SLI exhibit a maturational lag in detecting conflict between competing response alternatives. Deficient conflict detection may in turn hinder these children's ability to resolve conflict among semantic representations that are activated during language processing. abstract_id: PUBMED:27135369 Lexical conflict resolution in children with specific language impairment. The aim of our study is to examine the effect of conflict on naming latencies in children with specific language impairment (SLI) and typically developing (TD) children and to explore whether deficits in conflict resolution contribute to lexical problems in SLI. In light of previous results showing difficulties with inhibitory functions in SLI, we expected higher semantic conflict effect in the SLI than in the TD group. To investigate this question 13 children with SLI and 13 age- and gender-matched TD children performed a picture naming task in which the level of conflict was manipulated and naming latencies were measured. Children took longer to name pictures in high conflict conditions than in low conflict conditions. This effect was equally present in the SLI and TD groups. Our results suggest that word production is more effortful for children when conflict resolution is required but children with SLI manage competing lexical representations as efficiently as TD children. This result contradicts studies, which found difficulties with inhibitory functions and is in line with findings of intact inhibitory abilities in children with SLI. Further studies should rule out the possibility that in SLI lower level of conflict resulting from weaker lexical representations masks impairments in inhibition, and investigate the effect of linguistic conflict in other areas. abstract_id: PUBMED:29940484 The linguistic constraint on contraction in children with SLI. Purpose: The goal of the present study was to investigate whether children with specific language impairment (SLI) obey the constraint on contraction with the verb BE in three linguistic contexts: ellipsis, yes/no questions and embedded questions. 
Method: Using elicited production methodology, a total of 51 children were tested: 17 children with SLI (mean age = 5;6); 17 language-matched children matched on mean length of utterance (mean age = 3;6) and 17 age-matched children (mean age = 5;4). Results: The experimental results revealed that children with SLI did not differ from the children in the control groups. Children contracted BE where it is possible and failed to contract in the linguistic contexts where contraction is prohibited. Our experimental findings suggest that for this aspect of linguistic knowledge children with SLI have the same underlying grammar as children whose grammars are typically-developing. abstract_id: PUBMED:24240018 Working memory performance and executive function behaviors in young children with SLI. The present study compared the performances of young children with specific language impairment (SLI) to that of typically developing (TD) children on cognitive measures of working memory (WM) and behavioral ratings of executive functions (EF). The Automated Working Memory Assessment was administered to 58 children with SLI and 58 TD children aged 4 and 5 years. Additionally, parents completed the Behavior Rating Inventory of Executive Function - Preschool Version. The results showed the SLI group to perform significantly worse than the TD group on both cognitive and behavioral measures of WM. The deficits in WM performance were not restricted to the verbal domain, but also affected visuospatial WM. The deficits in EF behaviors included problems with inhibition, shifting, emotional control, and planning/organization. The patterns of associations between WM performance and EF behaviors differed for the SLI versus TD groups. WM performance significantly discriminated between young children with SLI and TD, with 89% of the children classified correctly. The data indicate domain general impairments in WM and problems in EF behaviors in young children with SLI. Attention should thus be paid to WM - both verbal and visuospatial - and EF in clinical practice. Implications for assessment and remediation were discussed. abstract_id: PUBMED:34813101 Midfrontal theta oscillations and conflict monitoring in children and adults. Conflict monitoring is central in cognitive control, as detection of conflict serves as a signal for the need to engage control. This study examined whether (1) midfrontal theta oscillations similarly support conflict monitoring in children and adults, and (2) performance monitoring difficulty influences conflict monitoring and resolution. Children (n = 25) and adults (n = 24) completed a flanker task with fair or rigged response feedback. Relative to adults, children showed a smaller congruency effect on midfrontal theta power, overall lower midfrontal theta power and coherence, and (unlike adults) no correlation between midfrontal theta power and N2 amplitude, suggesting that reduced neural communication efficiency contributes to less efficient conflict monitoring in children than adults. In both age groups, response feedback fairness affected response times and the P3, but neither midfrontal theta oscillations nor the N2, indicating that performance monitoring difficulty influenced conflict resolution but not conflict monitoring. abstract_id: PUBMED:15571714 Social cognition and language in children with specific language impairment (SLI).
Unlabelled: This investigation examined the relationship between social pragmatics, social self-esteem, and language in children with specific language impairment (SLI) and in their age-matched peers (7-10 years). The children with SLI indicated significantly poorer social cognitive knowledge than their typically developing peers. They showed low social, but not academic self-esteem. They often used inappropriate negotiation and conflict resolution strategies. Their errors reflect some qualitative differences from those of the typically developing children (e.g., children with SLI use more nonverbal strategies, demonstrate passive/withdrawn behavior, etc.). Our data show that these children's social pragmatic deficit is not causally related to their language impairment; the two problems are co-occurring. Further, the parents and teachers of the children with SLI indicated different views regarding these children's social relations. Although the parents expressed major concerns about their children's social competence, the teachers did not notice this problem. Learning Outcomes: The reader will be able to summarize, critically analyze, and interpret the findings from existing research on social cognition and its relationship with language abilities in children with specific language impairment. Further, the reader will gain an understanding of the importance of applying intervention procedures that facilitate the use of language in different social situations, and the necessity of increasing parent-teacher communication in schools. abstract_id: PUBMED:25319060 Associations between parental ideology and neural sensitivity to cognitive conflict in children. Processes through which parental ideology is transmitted to children-especially at a young age prior to the formation of political beliefs-remain poorly understood. Given recent evidence that political ideology is associated with neural responses to cognitive conflict in adults, we tested the exploratory hypothesis that children's neurocognitive responses to conflict may also differ depending on their parents' ideology. We assessed relations between parental political ideology and children's neurocognitive responses to conflict, as measured by the N2 component of the event-related potential. Children aged 5-7 completed an age-appropriate flanker task while electroencephalography was recorded, and the N2 was scored to incongruent versus congruent flankers to index conflict processing. Because previous research documents heightened liberal-conservative differences in threat-relevant contexts, each trial of the task was preceded by an angry face (threat-relevant) or comparison face (happy or neutral). An effect of parental ideology on the conflict-related N2 emerged in the threat condition, such that the N2 was larger among children of liberals compared with children of moderates and conservatives. These findings suggest that individual differences in neurocognitive responses to conflict, heightened in the context of threat, may reflect a more general pattern of individual differences that, in adults, relates to political ideology. abstract_id: PUBMED:11488380 Conflict monitoring and cognitive control. A neglected question regarding cognitive control is how control processes might detect situations calling for their involvement. The authors propose here that the demand for control may be evaluated in part by monitoring for conflicts in information processing. 
This hypothesis is supported by data concerning the anterior cingulate cortex, a brain area involved in cognitive control, which also appears to respond to the occurrence of conflict. The present article reports two computational modeling studies, serving to articulate the conflict monitoring hypothesis and examine its implications. The first study tests the sufficiency of the hypothesis to account for brain activation data, applying a measure of conflict to existing models of tasks shown to engage the anterior cingulate. The second study implements a feedback loop connecting conflict monitoring to cognitive control, using this to simulate a number of important behavioral phenomena. abstract_id: PUBMED:26287388 Social participation of children age 8-12 with SLI. Purpose: Two objectives are being pursued: (1) to describe the level of social participation of children aged 8-12 presenting a specific language impairment (SLI) and (2) to identify personal and family factors associated with their level of social participation. Method: This cross-sectional study was conducted among 29 children with SLI and one of their parents. Parental stress and family adversity were measured as risk factors. The measure of life habits (LIFE-H) adapted to children aged 5-3 was used to measure social participation. Results: The assumption that social participation of these children is impaired in relation to the communication dimension was generally confirmed. The statements referring to the "communication in the community" and "written communication" are those for which the results are weaker. "Communication at home" is made easier albeit with some difficulties, while "telecommunication" is totally preserved. A high level of parental stress is also confirmed, affecting the willingness of parents to support their child's autonomy. Conclusions: The achievement of a normal lifestyle of children with SLI is upset in many spheres of life. Methods of intervention must better reflect the needs and realities experienced by these children in their various living environments, in order to optimize social participation, and consequently, to improve their well-being and that of their families. The need to develop strategies to develop children's independence and to reduce parental stress must be recognized and all stakeholders need to be engaged in the resolution of this challenge. Implications For Rehabilitation: The realization of life habits of SLI children is compromised at various levels, especially in the domain related to "communication in the community" and "written communication". Speech-language pathologists must consider providing ongoing support throughout the primary years of these children and during adolescence, to promote and facilitate the continued realization of life habits of SLI persons. Providing ongoing support throughout the primary years and thereafter is needed to facilitate the continuing realization of life habits of SLI persons. Parents of children with SLI experience considerable stress in relation to the exercise of their parental roles. It is important to intervene and to support parents to promote autonomy for their SLI children. abstract_id: PUBMED:33680163 Children's Cognitive and Emotional Processes in Adult Versus Child-Related Inter-Parental Conflicts. In the literature, little attention has been paid to the specific impact of child-related versus adult-related inter-parental conflicts on children's intrapersonal processes and adjustment. 
Aimed to advance knowledge on this topic, the cross-sectional study explores: 1) the predictive effects of the two forms of inter-parental conflicts on: a) children's internalizing/externalizing behaviors and b) children's cognitive appraisals, emotional distress, and triangulation; 2) the mediating role of children's cognitive appraisals, emotional distress, and triangulation, in the association between adult-related vs child-related conflict and children's adjustment. Seventy-five school-aged children and their parents completed measures of inter-parental conflict, cognitive, emotional and behavioral processes and child adjustment. The results indicated that: 1) higher levels of adult-related inter-parental conflict promoted children's internalizing behaviors, through the mediation of perceived threat; 2) higher levels of child-related inter-parental discord promoted both children's internalizing/externalzing behaviors, through the mediation of perceived threat and self-blame. Answer: Children with specific language impairment (SLI) have been studied to determine their ability to detect cognitive conflict. The evidence from these studies is mixed, with some suggesting that children with SLI may have difficulties in this area, while others indicate that they can manage cognitive conflict similarly to typically developing (TD) children. One study examined children with SLI using a GO/No-GO task and found that while hit rates did not differentiate between children with SLI and those with typical language development, the N2 component, which is a neural index of the ability to detect conflict between GO and No-GO response tendencies, was attenuated and showed delayed divergence in SLI relative to TD children. This suggests that school-age children with SLI exhibit a maturational lag in detecting conflict between competing response alternatives (PUBMED:24686792). However, another study that investigated lexical conflict resolution in a picture naming task found that children with SLI and TD children took longer to name pictures in high conflict conditions, but the effect was equally present in both groups. This result suggests that children with SLI manage competing lexical representations as efficiently as TD children, contradicting some studies that found difficulties with inhibitory functions in SLI (PUBMED:27135369). Additionally, research on the linguistic constraint on contraction in children with SLI found that these children did not differ from control groups in their use of contraction with the verb BE in various linguistic contexts, indicating that their underlying grammar was similar to that of typically-developing children (PUBMED:29940484). In summary, while there is evidence of a maturational lag in conflict detection in children with SLI (PUBMED:24686792), other studies suggest that they can handle cognitive conflict as efficiently as their TD peers in certain contexts (PUBMED:27135369; PUBMED:29940484). Therefore, the ability of children with SLI to detect cognitive conflict may vary depending on the type of conflict and the context in which it is presented.
Instruction: Candidaemia associated with decreased in vitro fluconazole susceptibility: is Candida speciation predictive of the susceptibility pattern? Abstracts: abstract_id: PUBMED:20430790 Candidaemia associated with decreased in vitro fluconazole susceptibility: is Candida speciation predictive of the susceptibility pattern? Background: Candidaemia is often treated with fluconazole in the absence of susceptibility testing. We examined factors associated with candidaemia caused by Candida isolates with reduced susceptibility to fluconazole. Methods: We identified consecutive episodes of candidaemia at two hospitals from 2001 to 2007. Species identification followed CLSI methodology and fluconazole susceptibility was determined by Etest or broth microdilution. Susceptibility to fluconazole was defined as: full susceptibility (MIC ≤ 8 mg/L); and reduced susceptibility (MIC ≥ 32 mg/L). Complete resistance was defined as an MIC > 32 mg/L. Results: Of 243 episodes of candidaemia, 190 (78%) were fully susceptible to fluconazole and 45 (19%) had reduced susceptibility (of which 27 were fully resistant). Of Candida krusei and Candida glabrata isolates, 100% and 51%, respectively, had reduced susceptibility. Despite the small proportion of Candida albicans (8%), Candida tropicalis (4%) and Candida parapsilosis (4%) with reduced fluconazole susceptibility, these species composed 36% of the reduced-susceptibility group and 48% of the fully resistant group. In multivariate analysis, independent factors associated with reduced fluconazole susceptibility included male sex [odds ratio (OR) 3.2, P < 0.01], chronic lung disease (OR 2.7, P = 0.01), the presence of a central vascular catheter (OR 4.0, P < 0.01) and prior exposure to antifungal agents (OR 2.2, P = 0.04). Conclusions: A significant proportion of candidaemia with reduced fluconazole susceptibility may be caused by C. albicans, C. tropicalis and C. parapsilosis, species usually considered fully susceptible to fluconazole. Thus, identification of these species may not be predictive of fluconazole susceptibility. Other factors that are associated with reduced fluconazole susceptibility may help clinicians choose adequate empirical anti-Candida therapy. abstract_id: PUBMED:25681153 In vitro susceptibility and molecular characterization of Candida spp. from candidemic patients. Background: Candida species are the main cause of hospital acquired fungal bloodstream infections. The main risk factors for candidemia include parenteral nutrition, long-term intensive care, neutropenia, diabetes, abdominal surgery and the use of central venous catheters. The antifungal drugs used to treat candidemia are mainly the echinocandins, however some isolates may be resistant to these drugs. Aims: This work aims to evaluate the in vitro susceptibility patterns of various Candida species isolated from blood samples and provide their identification by molecular characterization. Methods: Antifungal susceptibility testing was performed using the broth microdilution method. The sequencing of the ITS and D1/D2 regions of rDNA was used for molecular characterization. Results: Seventy-four of the 80 isolates were susceptible to anidulafungin, 5 were intermediate, and 1 was resistant. For micafungin 67 were susceptible, 8 were intermediate and 5 were resistant. All isolates were susceptible to amphotericin B. Lastly, 65 isolates were susceptible to fluconazole, 8 were dose-dependent and 4 were resistant.
The molecular identification corroborated the phenotypic data in 91.3% of the isolates. Conclusions: Antifungal susceptibility data has an important role in the treatment of candidemia episodes. It was also concluded that the molecular analysis of isolates provides an accurate identification and identifies genetic variability within Candida species isolated from patients with candidemia. abstract_id: PUBMED:25485060 Yeast colonization and drug susceptibility pattern in the pediatric patients with neutropenia. Background: Pediatric patients with neutropenia are vulnerable to invasive Candida infections. Candida is the primary cause of fungal infections, particularly in immunosuppressed patients. Candida albicans has been the most common etiologic agent of these infections, affecting 48% of patients. Objectives: The aim of this study was to identify Candida spp. isolated from children with neutropenia and determine the antifungal susceptibility pattern of the isolated yeasts. Patients And Methods: In this study 188 children with neutropenia were recruited, fungal surveillance cultures were carried out on nose, oropharynx, stool, and urine samples. Identification of Candida strains was performed using germ tube and chlamydospore production tests on an API 20 C AUX system. Susceptibility testing on seven antifungal agents was performed using the agar-based E-test method. Results: A total of 229 yeasts were isolated. Among those, C. albicans was the most common species followed by C. krusei, C. parapsilosis, C. glabrata, C. tropicalis, C. famata, C. dubliniensis, C. kefyr, and other Candida species. C. glabrata was the most resistant isolated yeasts, which was 70% resistant to fluconazole and 50% to itraconazole, 7.5% to amphotericin B and 14% to ketoconazole. All the tested species were mostly sensitive to caspofungin. Conclusions: Knowledge about the susceptibility patterns of colonized Candida spp. can be helpful for clinicians to manage pediatric patients with neutropenia. In this study, caspofungin was the most effective antifungal agent against the colonized Candida spp. followed by conventional amphotericin B. abstract_id: PUBMED:30135234 Are In Vitro Susceptibilities to Azole Antifungals Predictive of Clinical Outcome in the Treatment of Candidemia? The purpose of this review is to critically analyze published data evaluating the impact of azole pharmacokinetic and pharmacodynamic parameters, MICs, and Candida species on clinical outcomes in patients with candidemia. Clinical breakpoints (CBPs) for fluconazole and voriconazole, which are used to determine susceptibility, have been defined by the Clinical and Laboratory Standards Institute (CLSI) for Candida species. Studies evaluating the relationship between treatment efficacy and in vitro susceptibility, as well as the pharmacodynamic targets, have been conducted in patients treated with fluconazole for candidemia; however, for species other than Candida albicans and Candida glabrata, and for other forms of invasive candidiasis, data remain limited and randomized trials are not available. Limited data evaluating these relationships with voriconazole are available. While pharmacodynamic targets for posaconazole and isavuconazole have been proposed based upon studies conducted in murine models, CBPs have not been established by CLSI. Fluconazole remains an important antifungal agent for the treatment of candidemia, and data supporting its use based on in vitro susceptibility are growing, particularly for C. albicans and C. 
glabrata. Further investigation is needed to establish the roles of voriconazole, posaconazole, and isavuconazole in the treatment of candidemia and for all agents in the treatment of other forms of invasive candidiasis. abstract_id: PUBMED:27746090 Epidemiology, risk factors and in vitro susceptibility in candidaemia due to non-Candida albicans species. Background: Invasive fungal infection (IFI) has increased in recent years due to there being a greater number of risk factors. IFI caused by Candida is the most frequent, and although Candida albicans is the most isolated species, there is currently a decrease of C. albicans and an increase of other species of the genus. Aims: To analyse the epidemiology, risk factors, and antifungal susceptibility of blood culture isolates of non-C. albicans Candida species in our hospital in the last 12 years. Methods: A retrospective study was conducted on 107 patients with candidaemia admitted to our hospital. The susceptibility of Candida isolates to fluconazole, itraconazole, voriconazole, amphotericin B, 5-fluorocytosine, caspofungin, micafungin, and anidulafungin was determined by means of a microdilution technique (Sensititre Yeast One; Izasa, Spain). Results: From a total of 109 strains, 59 belonged to non-C. albicans Candida species: 25 Candida parapsilosis complex, 14 Candida glabrata complex, 13 Candida tropicalis, 4 Candida krusei, 1 Candida lipolytica, 1 Candida membranaefaciens, and 1 Candida pulcherrima. The most common risk factor in adults and children was catheter use. It was observed that 8.5% of those non-C. albicans strains were resistant to fluconazole. Conclusions: The results of this work confirm that it is necessary to know the epidemiology of non-C. albicans Candida species, the in vitro susceptibility of the species involved, and the main risk factors, especially in patients with predisposing conditions. abstract_id: PUBMED:32176090 A multicenter retrospective analysis of the antifungal susceptibility patterns of Candida species and the predictive factors of mortality in South Korean patients with candidemia. As detection rates of non-albicans Candida species are increasing, determining their pathogen profiles and antifungal susceptibilities is important for antifungal treatment selection. We identified the antifungal susceptibility patterns and predictive factors for mortality in candidemia. A multicenter retrospective analysis of patients with at least 1 blood culture positive for Candida species was conducted. Candida species were classified into 3 groups (group A, Candida albicans; group B, Candida tropicalis and Candida parapsilosis; group C, Candida glabrata and Candida krusei) to analyze the susceptibility patterns, first-line antifungal administered, and mortality. Univariate and multivariate comparisons between outcomes were performed to identify mortality risk factors. In total, 317 patients were identified, and 136 (42.9%) had recorded mortality. Echinocandin susceptibility was higher for group A than group B (111/111 [100%] vs 77/94 [81.9%], P < .001). Moreover, group A demonstrated higher fluconazole susceptibility (144/149 [96.6%] vs 39/55 [70.9%], P < .001) and lower mortality (68 [45.3%] vs 34 [61.8%], P = .036) than group C.
In the multivariate analysis, the sequential organ failure assessment score (odds ratio 1.351, 95% confidence interval 1.067-1.711, P = .013) and positive blood culture on day 7 of hospitalization (odds ratio 5.506, 95% confidence interval 1.697-17.860, P = .004) were associated with a higher risk of mortality. Patients with higher sequential organ failure assessment scores and sustained positive blood cultures have an increased risk of mortality. abstract_id: PUBMED:16144871 The European Confederation of Medical Mycology (ECMM) survey of candidaemia in Italy: in vitro susceptibility of 375 Candida albicans isolates and biofilm production. Objectives: To investigate the in vitro antifungal susceptibility pattern of 375 Candida albicans bloodstream isolates recovered during the European Confederation of Medical Mycology survey of candidaemia performed in Lombardia, Italy and to test the ability to form biofilm. Methods: In vitro susceptibility to flucytosine, fluconazole, itraconazole, posaconazole, voriconazole and caspofungin was performed by broth microdilution following the NCCLS guidelines. Biofilm production was measured using the XTT reduction assay in 59 isolates selected as representative of different patterns of susceptibility to flucytosine and azoles. Results: MICs (mg/L) at which 90% of the strains were inhibited were ≤0.25 for flucytosine, 0.25 for caspofungin, 4 for fluconazole and 0.06 for itraconazole, voriconazole and posaconazole. Flucytosine resistance was detected in five isolates and was associated with serotype B in 2/29 and serotype A in 3/346. Resistance to fluconazole was detected in 10 isolates; nine of these exhibited reduced susceptibility to the other azoles. Among the 10 patients with fluconazole-resistant C. albicans bloodstream infection, only one, an AIDS patient, had been previously treated with fluconazole. Biofilm production was observed in 23 isolates (39%) and was significantly associated with serotype B. No relationship was detected with the pattern of antifungal susceptibility. Conclusions: Resistance is uncommon in C. albicans isolates recovered from blood cultures, while biofilm production is a relatively frequent event. Periodic surveillance is warranted to monitor the incidence of in vitro antifungal resistance as well as of biofilm production. abstract_id: PUBMED:36636301 Identification and Antifungal Drug Susceptibility Pattern of Candida auris in India. Introduction: Candida auris has turned up as a multidrug-resistant nosocomial agent with outbreaks reported worldwide. The present study was conducted to evaluate the antifungal drug susceptibility pattern of C. auris. Methods: Isolates of C. auris were obtained from clinically suspected cases of candidemia from January 2019 to June 2021. Identification was done with matrix-assisted laser desorption/ionization-time of flight (MALDI-TOF) and panfungal DNA polymerase chain reaction (PCR), followed by sequencing. Antifungal susceptibility testing was performed with the broth microdilution method. Results: Out of 50 C. auris isolates, 49 were identified by MALDI-TOF and one isolate was identified with panfungal DNA PCR followed by sequencing. For fluconazole, 84% (n = 42) isolates were found to be resistant and 16% (n = 8) isolates were susceptible (minimum inhibitory concentrations [MICs] range 0.5-16). Posaconazole exhibited potent activity, followed by itraconazole. For amphotericin B, only 6% (n = 3) isolates were resistant with MICs ≥2 μg/mL.
Only 4% (n = 2) isolates exhibited resistance to caspofungin. No resistance was noted for micafungin and anidulafungin. One (2%) isolate was found to be panazole resistant. One (2%) isolate was resistant to fluconazole, amphotericin B, and caspofungin. Conclusion: Correct identification of C. auris can be obtained with the use of MALDI-TOF and sequencing methods. A small percentage of fluconazole-sensitive isolates are present. Although elevated MICs for amphotericin B and echinocandins are not generally observed, the possibility of resistance with the irrational use of these antifungal drugs cannot be denied. Pan azole-resistant and pan drug-resistant strains of C. auris are on rise. abstract_id: PUBMED:27222702 Candidiasis in Pediatrics; Identification and In vitro Antifungal Susceptibility of the Clinical Isolates. Background: Candida species are normal microflora of oral cavity, vagina, and gastrointestinal tract. They are the third most prevalent cause of pediatric health care-associated bloodstream fungal infection. This study aimed to provide an epidemiological feature of candidiasis and also presents an antifungal susceptibility profile of clinical Candida isolates among children. Materials And Methods: During July 2013 to February 2015, 105 patients from different hospitals of Isfahan, Iran, were examined for candidiasis by phenotypic tests. Samples were obtained from nail clippings, blood, thrush, BAL, urine, oropharynx, skin, and eye discharge. The age range of patients was between 18 days to 16 years. Genomic DNA of isolates was extracted and ITS1-5.8SrDNA-ITS2 region was amplified by ITS1 and ITS2 primers. The PCR products were digested using the restriction enzyme MspI. Minimum inhibitory concentration (MICs) was determined using microdilution broth method according to the clinical and laboratory standards institute (CLSI) M27-A3 and M27-S4 documents. Results: Forty-three patients (40.9%) had Candida infection.The most clinical strains were isolated from nail infections (39.5%), and candidemia (13.9%). Candida albicans was the most prevalent species (46.5%). MICs ranges for amphotericin B, fluconazole, and itraconazole were (0.025-0.75 µg/ml), (0.125-16 µg/ml), and (0.094-2 µg/ml), respectively. Conclusion: Due to high incidence of Candida infections among children, increasing of fatal infection like candidemia, and emersion of antifungal resistance Candida isolates, early and precise identification of the Candida species and determination of antifungal susceptibility patterns of clinical isolates may lead to better management of the infection. abstract_id: PUBMED:28457646 Molecular identification and in vitro antifungal susceptibility of blood isolates of the Candida parapsilosis species complex in Venezuela Background: Candida parapsilosis is a species complex consisting of Candida parapsilosis sensu stricto, Candida orthopsilosis, and Candida metapsilosis. Studies worldwide have described its epidemiology and susceptibility to antifungal agents. Aims: The aims of this study were to carry out the molecular identification of blood isolates belonging to the Candida parapsilosis species complex, and to determine their in vitro susceptibility to antifungals of systemic use. Methods: A study of 86 strains of C. parapsilosis species complex collected in 2008-2011 and obtained from the Candidaemia Surveillance Network of Mycology Department of the Rafael Rangel National Institute of Hygiene, was made. 
Secondary alcohol-dehydrogenase gene amplification was performed using polymerase chain reaction, and the products were analysed by restriction fragments length polymorphisms using the enzyme BanI. Susceptibility tests were performed using Etest®, following the manufacturer's instructions with modifications. Results: Of the 86 isolates studied, 81 (94.2%) were C. parapsilosis sensu stricto, 4 (4.6%) C. orthopsilosis, and one (1.2%) C. metapsilosis. C. parapsilosis isolates were susceptible to amphotericin B and caspofungin, showing low rates of resistance to fluconazole and voriconazole. C. orthopsilosis and C. metapsilosis were susceptible to all the antifungals tested. Conclusions: The results obtained in Venezuela provide for the first time important information about the distribution of C. parapsilosis species complex in cases of candidaemia, and support the need for continuing surveillance programs, including molecular discrimination of species and antifungal susceptibility tests, which may guide specific therapy. Answer: Candidaemia, or bloodstream infection with Candida species, is a significant clinical concern, and fluconazole is often used as a treatment option. However, the predictive value of Candida speciation for fluconazole susceptibility is not straightforward. According to the study by PUBMED:20430790, while certain Candida species such as Candida krusei and Candida glabrata are known to have reduced susceptibility to fluconazole, other species traditionally considered fully susceptible, like Candida albicans, Candida tropicalis, and Candida parapsilosis, may also exhibit reduced susceptibility. In their study, 100% of C. krusei and 51% of C. glabrata isolates had reduced susceptibility to fluconazole. However, despite the small proportion of C. albicans (8%), C. tropicalis (4%), and C. parapsilosis (4%) with reduced fluconazole susceptibility, these species composed a significant portion of the reduced-susceptibility group (36%) and the fully resistant group (48%). This suggests that Candida speciation alone may not be a reliable predictor of fluconazole susceptibility. Other studies have also highlighted the variability in susceptibility patterns among different Candida species. For instance, PUBMED:25681153 reported that most isolates were susceptible to echinocandins and amphotericin B, but there was some resistance to fluconazole. PUBMED:25485060 found that C. glabrata was the most resistant yeast, with significant resistance to fluconazole. PUBMED:27746090 observed that 8.5% of non-C. albicans strains were resistant to fluconazole. PUBMED:32176090 noted that echinocandin susceptibility was higher for C. albicans than for other species groups, and PUBMED:16144871 reported that resistance to fluconazole was detected in 10 C. albicans isolates, suggesting that resistance can occur even in species typically considered susceptible. In conclusion, while certain species of Candida are more likely to exhibit reduced susceptibility to fluconazole, speciation alone is not entirely predictive of the susceptibility pattern. Other factors, such as prior exposure to antifungal agents and the presence of a central vascular catheter, may also be associated with reduced fluconazole susceptibility (PUBMED:20430790). Therefore, susceptibility testing and consideration of the patient's clinical history are important for guiding effective treatment of candidaemia.
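As an illustration of the breakpoint rule used in the candidaemia abstracts above (full susceptibility MIC ≤ 8 mg/L, reduced susceptibility MIC ≥ 32 mg/L, as quoted from PUBMED:20430790), the following minimal Python sketch shows how isolate MIC values could be mapped to those categories. The function name, the example MIC values, and the "intermediate / dose-dependent" label for the band between the two breakpoints are illustrative assumptions, not taken from any of the cited studies.

# Illustrative sketch only: thresholds follow the fluconazole breakpoints quoted
# in the PUBMED:20430790 abstract (full susceptibility MIC <= 8 mg/L,
# reduced susceptibility MIC >= 32 mg/L); names and example values are hypothetical.
def classify_fluconazole_mic(mic_mg_per_l: float) -> str:
    """Map a fluconazole MIC (mg/L) to the susceptibility category used above."""
    if mic_mg_per_l <= 8:
        return "fully susceptible"
    if mic_mg_per_l >= 32:
        return "reduced susceptibility"
    return "intermediate / dose-dependent"

example_mics = {"isolate_1": 0.5, "isolate_2": 16, "isolate_3": 64}
for isolate, mic in example_mics.items():
    print(isolate, mic, classify_fluconazole_mic(mic))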
Instruction: Is Arthroscopic Technique Superior to Open Reduction Internal Fixation in the Treatment of Isolated Displaced Greater Tuberosity Fractures? Abstracts: abstract_id: PUBMED:26728514 Is Arthroscopic Technique Superior to Open Reduction Internal Fixation in the Treatment of Isolated Displaced Greater Tuberosity Fractures? Background: Arthroscopic double-row suture-anchor fixation and open reduction and internal fixation (ORIF) are used to treat displaced greater tuberosity fractures, but there are few data that can help guide the surgeon in choosing between these approaches. Questions/purposes: We therefore asked: (1) Is there a difference in surgical time between arthroscopic double-row suture anchor fixation and ORIF for isolated displaced greater tuberosity fractures? (2) Are there differences in the postoperative ROM and functional scores between arthroscopic double-row suture anchor fixation and ORIF for isolated displaced greater tuberosity fractures? (3) Are there differences in complications resulting in additional operations between the two approaches? Methods: Between 2006 and 2012, we treated 79 patients surgically for displaced greater tuberosity fractures. Of those, 32 (41%) were considered eligible for our study based on inclusion criteria for isolated displaced greater tuberosity fractures with a displacement of at least 5 mm but less than 2 cm. During that time, we generally treated patients with displaced greater tuberosity fractures with a displacement greater than 1 cm or with a fragment size greater than 3×3 cm with open treatment, and patients with displaced greater tuberosity fractures with a displacement less than 1 cm or with a fragment size less than 3×3 cm with arthroscopic treatment. Fifty-three underwent open treatment based on those indications, and 26 underwent arthroscopic treatment, of whom 17 (32%) and 15 (58%) were available for followup at a mean of 34 months (range, 24-48 months). All patients with such fractures identified from our institutional database were treated by these two approaches and no other methods were used. Surgical time was defined as the time from initiation of the incision to the time when suture of the incision was finished, and was determined by an observer with a stopwatch. Patients were followed up in the outpatient department at 6, 12, and 24 weeks, and every 6 months thereafter. Radiographs showed optimal reduction immediately after surgery and at every followup. Radiographs were obtained to assess fracture healing. Patients were followed up for a mean of 34 months (range, 24-48 months). At the last followup, ROM, VAS score, and American Shoulder and Elbow Surgeons (ASES) score were used to evaluate clinical outcomes. All these data were retrieved from our institutional database through chart review. Complications were assessed through chart review by one observer other than the operating surgeon. Results: Patients who underwent arthroscopic double-row suture anchor fixation had longer surgical times than did patients who underwent ORIF (mean, 95.3 minutes, SD, 10.6 minutes vs mean, 61.5 minutes, SD, 7.2 minutes; mean difference, 33.9 minutes; 95% CI, 27.4-40.3 minutes; p < 0.001). All patients achieved bone union within 3 months.
Compared with patients who had ORIF, the patients who had arthroscopic double-row suture anchor fixation had greater ranges of forward flexion (mean, 152.7°, SD, 13.3° vs mean, 137.7°, SD, 19.2°; p = 0.017) and abduction (mean, 146.0°, SD, 16.4° vs mean, 132.4°, SD, 20.5°; p = 0.048), and higher ASES score (mean, 91.8 points, SD, 4.1 points vs mean, 87.4 points, SD, 5.8 points; p = 0.021); however, in general, these differences were small and of questionable clinical importance. With the numbers available, there were no differences in the proportion of patients experiencing complications resulting in reoperation; secondary subacromial impingement occurred in two patients in the ORIF group and postoperative stiffness in one from the ORIF group. The two patients experiencing secondary subacromial impingement underwent reoperation to remove the implant. The patient with postoperative stiffness underwent adhesion release while receiving anesthesia, to improve the function of the shoulder. These three patients had the only reoperations. Conclusions: We found that in the hands of surgeons comfortable with both approaches, there were few important differences between arthroscopic double-row suture anchor fixation and ORIF for isolated displaced greater tuberosity fractures. Future, larger studies with consistent indications should be performed to compare these treatments; our data can help inform sample-size calculations for such studies. Level Of Evidence: Level III, therapeutic study. abstract_id: PUBMED:26419378 Arthroscopic-assisted plate fixation for displaced large-sized comminuted greater tuberosity fractures of proximal humerus: a novel surgical technique. Purpose: The purpose of the present study was to describe the use of a novel hybrid surgical technique-arthroscopic-assisted plate fixation-and evaluate its clinical and anatomical outcomes in the management of large, displaced greater tuberosity (GT) fractures with comminution. Methods: From 2009 to 2011, this novel technique was performed in 11 patients [2 men and 9 women; median age, 64 years (range 41-83 years)] with large, comminuted GT fractures, with fragment displacements of >5 mm. The preoperative mean posterior and superior migration of the fractured fragment, as measured on computed tomography (CT), was 19.5 and 5.5 mm, respectively. Two patients had shoulder fracture-dislocation, and three had associated undisplaced surgical neck fracture. The mean duration between injury and surgery was 4 days. The mean follow-up duration was 26 months. Results: At the final follow-up, the mean postoperative ASES, UCLA and SST scores were 84, 29, and 8, respectively. The mean range of motion was as follows: forward flexion, 138°; abduction, 135°; external rotation at the side, 19°; and internal rotation, up to the L2 level. The mean posterior and superior displacements of fracture fragments on postoperative CT scan [0.7 ± 0.8 mm (range 0-2.1 mm) and 2.8 ± 0.5 mm (range 3.4-5.3 mm), respectively] were significantly improved (p < 0.05). On arthroscopy, a partial articular-side supraspinatus tendon avulsion lesion was identified in 10 of 11 patients (91 %), and 1 of these patients had a partial tear of the biceps and 1 had a partial subscapularis tear, respectively (9 %). Intraoperatively, 1 anchor pullout and 1 anchor protrusion through the humeral head were noted and corrected. Postoperatively, the loss of reduction in the fracture fragment was noted in 1 patient at 4 weeks, after corrective reduction and fixation surgery.
Conclusions: The novel arthroscopic-assisted anatomical plate fixation technique was found to be effective in reducing large-sized, displaced, comminuted GT fractures and in allowing concurrent management of intra-articular pathologies and early functional rehabilitation. Compared with the conventional plate fixation or arthroscopic suture anchor fixation technique, arthroscopic-assisted plate fixation enabled accurate restoration of the medial footprint of the GT fracture and provided an effective buttress to the large-sized GT fracture fragments. Level Of Evidence: Retrospective clinical study, Level IV. abstract_id: PUBMED:30399160 Open reduction and internal fixation of displaced proximal humeral fractures. Does the surgeon's experience have an impact on outcomes? Introduction: To evaluate outcomes following open reduction and internal fixation of displaced proximal humeral fractures with regards to the surgeon's experience. Material And Methods: Patients undergoing ORIF with locking plates for displaced two-part surgical neck type proximal humeral fractures were included. Reduction and functional outcomes were compared between procedures that were conducted by trauma surgeons [TS], senior (>2 years after board certification) trauma surgeons [STS] and trauma surgeons performing ≥50 shoulder surgeries per year [SS]. Quality of reduction was measured on postoperative x-rays. Functional outcomes were assessed by gender- and age-related Constant Score (nCS). Secondary outcome measures were complication and revision rates. Results: Between 2002-2014 (12.5 years), n = 278 two-part surgical neck type humeral fractures (AO 11-A2, 11-A3) were included. Open reduction and internal fixation was performed at the following levels of training: [TS] (n = 68, 25.7%), [STS] (n = 110, 41.5%) and [SS] (n = 77, 29.1%). Functional outcome (nCS) increased with each higher level of experience and was significantly superior in [SS] (93.3) vs. [TS] (79.6; p = 0.01) vs. [STS] (83.0; p = 0.05). [SS] had significantly fewer complications (7.8%) compared with [TS] (11.3%; p = 0.003) and [STS] (11.7%; p = 0.01), as well as significantly lower revision rates (3.9% vs. [TS] 8.2% and [STS] 7.4%; p < 0.001). Primary revision was necessary in 13 cases (4.7%) due to malreduction of the fracture. Conclusion: Quality of reduction and functional outcomes following open reduction and internal fixation of displaced two-part surgical neck fractures are related to the surgeon's experience. In addition, complications and revision rates are less frequent if surgery is conducted by a trauma surgeon performing ≥50 shoulder surgeries per year.
Results: Three cannulated screws with washers were used to fix the fractured fragment of the greater tuberosity under an arthroscope. All incisions healed at primary intention without infection. The mean duration of follow-up was 20 months (range 18 - 36 months). Fracture fixation was excellent, and fractures healed 2 - 6 months (mean 3.8 months) after surgery. At final follow-up, the CSS was 92 (range 86 - 100). Conclusions: The described arthroscopic procedure provides anatomical reduction and firm fixation for isolated greater tuberosity fractures. It is a successful and minimally invasive procedure with satisfying therapeutic effects as well as excellent functional recovery. abstract_id: PUBMED:29305100 Clinical outcomes of minimally invasive open reduction and internal fixation by screw and washer for displaced greater tuberosity fracture of the humerus. Background: The purpose of this study was to investigate clinical and radiologic outcomes of open reduction and internal fixation with a screw and washer for a displaced greater tuberosity fracture of the proximal humerus through a small incision. Methods: We retrospectively reviewed 29 patients who underwent open reduction and internal fixation with a screw and washer for a greater tuberosity fracture of the proximal humerus. After surgery, the patients were immobilized in a brace for 4 weeks. To determine clinical outcomes, we evaluated a visual analog scale pain score; the Subjective Shoulder Value; the University of California, Los Angeles shoulder score; the American Shoulder and Elbow Surgeons score; and active range of motion. Results: All patients achieved bone union within 3 months after surgery. At the 2-year follow-up, the mean visual analog scale pain score was 1.1 ± 1.1; Subjective Shoulder Value, 93.4 ± 5.3; University of California, Los Angeles shoulder score, 31.2 ± 2.7; and American Shoulder and Elbow Surgeons score, 92.6 ± 6.7. Mean active forward flexion, external rotation, and internal rotation were 144° ± 16°, 33° ± 11°, and 13.3 ± 1.7, respectively. Postoperatively, 9 patients (31%) had stiffness and pain refractory to conservative treatment and underwent arthroscopic release. Conclusion: Although minimal open reduction and screw and washer fixation resulted in bone union in all cases, the incidence of postoperative stiffness was relatively high in patients with displaced greater tuberosity fractures because of prolonged immobilization after surgery. abstract_id: PUBMED:33387054 Arthroscopic reduction and fixation of greater tuberosity fractures of the humerus. Background: The optimal technique for the displaced greater tuberosity (GT) fractures remains unclear; those in favor of arthroscopic techniques emphasize on the feasibility of arthroscopic reduction and fixation, while others report that anatomic reduction and osteosynthesis of the fracture are optimal through open surgery. Therefore, we performed this study to evaluate the clinical results of arthroscopic fixation for displaced and/or comminuted GT fractures using a bridging arthroscopic technique. Materials And Methods: We studied the files of 11 patients (4 men, 7 women; mean age, 55 years; range, 28-74 years), with an isolated, displaced GT fracture treated with arthroscopic reduction and double-row suture anchor fixation technique from December 2016 to October 2018. All patients were operated at a mean time from their injury of 23 days (range, 1-85 days) using an arthroscopic technique. 
Any concomitant pathology identified arthroscopically was repaired after arthroscopic fixation of the GT fracture. The mean follow-up was 12 months (range, 6-18 months). We evaluated pain using a 0-10 point visual analog scale (VAS), shoulder range of motion, fracture healing, Constant-Murley Shoulder Outcome Score, and patients' satisfaction from the operation. Results: Postoperative radiographs showed anatomic reduction without any displacement of the GT fracture in eight patients and residual displacement of < 3 mm in three patients. All patients significantly improved in VAS score from 8.4 points (range, 7-10 points) preoperatively to 0.9 points (range, 0-3 points) postoperatively. Range of motion was 153 degrees forward flexion (range, 130-170 degrees), 149 degrees abduction (range, 120-170 degrees), 42 degrees external rotation (range, 20-70 degrees), and internal rotation between T10 and L3 spinal level. The final mean Constant-Murley Shoulder Outcome Score was 85.8 points (range, 76-94 points); correlation analysis showed that the patients with the higher greater tuberosity fracture displacement had the worst postoperative score (Pearson correlation coefficient -0.85; p = 0.0009), and the patients with nonanatomic reduction had close to average score. All patients were very satisfied with the end result of the operation, even the 3 patients with residual fracture displacement. No patient experienced any postoperative complications. Conclusions: Arthroscopic reduction and fixation of displaced GT fractures is a feasible minimally invasive procedure for optimal fracture healing and patient satisfaction. abstract_id: PUBMED:37668752 Comparison between arthroscopic suture anchor fixation and open plate fixation in the greater tuberosity fracture of the proximal humerus. Introduction: The purpose of this study is to compare the clinical and radiological outcomes of patients undergoing open reduction and internal fixation (OR/IF) using a plate or patients undergoing an arthroscopic suture anchor fixation for the greater tuberosity (GT) fracture of the proximal humerus. Materials And Methods: Between January 2010 and December 2020, 122 patients with GT fracture underwent operative fixation. Either OR/IF using a proximal humeral locking plate (50 patients) or arthroscopic suture anchor (72 patients) fixation was performed. Fourteen patients were lost to follow-up and finally, 108 patients were enrolled in this study. We divided these patients into two groups: (1) the OR/IF group (Group I: 44 patients) and (2) the arthroscopic anchor fixation group (Group II: 64 patients). The primary outcome was subjective shoulder function (shoulder functional scale). Secondary outcomes were range of motion, and complications including GT fixation failure, fracture migration, or neurologic complication. Also, age, sex, BMI, operation time, shoulder dislocation, fracture comminution, AP (anteroposterior), SI (superoinferior) size and displacement were evaluated and compared between two groups. Results: Both groups showed satisfactory clinical and radiological outcomes at mid-term follow-up. Between 2 groups, there were no significant differences in age, sex, BMI, presence of shoulder dislocation or comminution. Group II showed higher clinical scores except VAS score (p < 0.05) and longer surgical times (95.3 vs. 61.5 min).
Largest fracture displacement (Group I vs. II: SI displacement: 40 vs. 13 mm, and AP displacement: 49 vs. 11 mm) and higher complication rate (p = 0.049) were found in Group I. Conclusions: Both arthroscopic anchor fixation and open plate fixation methods showed satisfactory outcomes at mid-term follow-up. Among them, OR/IF is preferred for larger fracture displacement (> 5 mm) and shorter operation time. However, the arthroscopic anchor fixation group showed better clinical outcomes and fewer complications than the OR/IF group. Level Of Evidence: Level 4, Case series with subgroup analysis. abstract_id: PUBMED:24684914 Surgical management of isolated greater tuberosity fractures of the proximal humerus. Because the greater tuberosity is the insertion site of the posterior superior rotator cuff, fractures can have a substantial impact on functional outcome. Isolated fractures should not inadvertently be trivialized. Thorough patient evaluation is required to make an appropriate treatment decision. In most cases surgical management is considered when there is displacement of 5 mm or greater. Although reduction of displaced greater tuberosity fractures has traditionally been performed with open techniques, arthroscopic techniques are now available. The most reliable techniques of fixation of the greater tuberosity incorporate the rotator cuff tendon bone junction rather than direct bone-to-bone fixation. abstract_id: PUBMED:34022867 Intraoperative Nice knots assistance for reduction in displaced comminuted clavicle fractures. Purpose: The Nice knots have been widely used in orthopedic surgeries to fix torn soft tissue and fracture in recent years. The study aims to investigate the clinical efficacy and prognosis of intraoperative and postoperative Nice Knots-assisted reduction in the treatment of displaced comminuted clavicle fracture. Methods: From Jan 2014 to Dec 2019, 75 patients diagnosed with unilateral closed displaced comminuted clavicle fracture were treated with open reduction and internal fixation (ORIF) in this study. Nice knot group (the NK group) included 38 patients and the other 37 patients were in the traditional group (the TK group). The time of operation and the amount of bleeding during operation were recorded. Post-operative clinical outcomes and radiographic results were recorded and compared between these two groups. The Visual Analogue Scale (VAS), Neer score, Rating Scale of the American Shoulder and Elbow Surgeons, Constant-Murley score and complications such as infection, nonunion, implant loosening, fragment displacement and hardware pain were observed in the two groups. Results: In the comparison between the two groups, there was no significant difference in age, sex, the cause of displaced clavicle fracture, and other basic information between the two groups. The operation time, intraoperative fluoroscopy time, and intraoperative blood loss were significantly reduced in the NK group (P < 0.01). There were 2 cases of plate fracture in the TK group. The follow-up results showed that there was no significant difference in VAS, Neer score, ASES, and Constant-Murley scores between the two groups. Conclusion: The use of the Nice knot in comminuted and displaced clavicle fractures can reduce intraoperative blood loss, shorten operation time, facilitate intraoperative reduction, and achieve satisfactory postoperative clinical results. This study demonstrates that the Nice knot is a simple, safe, practical and effective auxiliary reduction method.
abstract_id: PUBMED:33738220 Arthroscopic Reduction and Transosseous Suture Fixation of Avulsed Displaced Shoulder Greater Tuberosity Fracture. The results of conservative treatment of displaced proximal humerus fractures are not satisfactory. Open reconstruction and rigid internal fixation, as well as arthroscopic-assisted reduction and internal fixation, are possible in selected cases, mostly young patients. Older patients with osteoporotic, comminuted bone accounts for 70% of the cases. We present an arthroscopic reduction and transosseous suture fixation technique for osteoporotic patients with displaced 2-part greater tuberosity fractures of the proximal humerus. The technique reduces the upward and medially displaced greater tuberosity to its anatomic position and uses longitudinal, horizontal, or a mattress suture fixation as single or combined fixation (Natofix technique). Answer: The comparison between arthroscopic techniques and open reduction internal fixation (ORIF) for the treatment of isolated displaced greater tuberosity fractures has been the subject of several studies, but the superiority of one method over the other is not definitively established. One study found that arthroscopic double-row suture anchor fixation had longer surgical times compared to ORIF (mean difference of 33.9 minutes), but patients who underwent the arthroscopic technique had slightly greater ranges of forward flexion and abduction, and higher American Shoulder and Elbow Surgeons (ASES) scores. However, these differences were small and of questionable clinical importance. There were no significant differences in the proportion of patients experiencing complications resulting in reoperation between the two approaches (PUBMED:26728514). Another study described a novel arthroscopic-assisted plate fixation technique for large, comminuted greater tuberosity fractures, which was found to be effective in reducing the fractures and allowing early functional rehabilitation. This technique also enabled accurate restoration of the medial footprint of the greater tuberosity fracture and provided an effective buttress to the large-sized fracture fragments (PUBMED:26419378). A study evaluating the impact of the surgeon's experience on outcomes of ORIF for displaced proximal humeral fractures found that the quality of reduction and functional outcomes were related to the surgeon's experience, with fewer complications and revision rates when surgery was conducted by a trauma surgeon performing ≥50 shoulder surgeries per year (PUBMED:30399160). Arthroscopic reduction and fixation using three cannulated screws combined with washers for isolated greater tuberosity fractures resulted in excellent fracture fixation and high constant shoulder scores (CSS) at follow-up (PUBMED:22613600). Minimally invasive open reduction and internal fixation by screw and washer for displaced greater tuberosity fractures resulted in bone union in all cases, but there was a relatively high incidence of postoperative stiffness due to prolonged immobilization after surgery (PUBMED:29305100). Arthroscopic reduction and fixation of greater tuberosity fractures using a double-row suture anchor fixation technique showed good outcomes and patient satisfaction, with no postoperative complications reported (PUBMED:33387054). A comparison between arthroscopic suture anchor fixation and open plate fixation found that both methods showed satisfactory outcomes at mid-term follow-up.
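The surgical-time comparison in PUBMED:26728514 (arthroscopic mean 95.3 minutes, SD 10.6, versus ORIF mean 61.5 minutes, SD 7.2; reported mean difference 33.9 minutes, 95% CI 27.4-40.3) can be checked approximately from the summary statistics alone. The Python sketch below is only a rough reproduction using a Welch-type normal approximation; taking the follow-up group sizes (15 arthroscopic, 17 ORIF) as the analysed samples and using a z value of 1.96 are assumptions made for illustration, not details given by the study.

import math

# Rough check of the reported mean difference in surgical time and its 95% CI,
# using summary statistics from the PUBMED:26728514 abstract. The group sizes
# (15 arthroscopic, 17 ORIF) and the normal approximation are assumptions.
def mean_diff_ci(m1, sd1, n1, m2, sd2, n2, z=1.96):
    diff = m1 - m2
    se = math.sqrt(sd1 ** 2 / n1 + sd2 ** 2 / n2)
    return diff, diff - z * se, diff + z * se

diff, lo, hi = mean_diff_ci(95.3, 10.6, 15, 61.5, 7.2, 17)
print(f"difference = {diff:.1f} min, approx 95% CI = ({lo:.1f}, {hi:.1f})")
# The result is close to the reported 33.9 minutes (95% CI 27.4-40.3).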
Instruction: Can renaming schizophrenia reduce negative attitudes toward patients in Turkey? Abstracts: abstract_id: PUBMED:26719486 Can renaming schizophrenia reduce negative attitudes toward patients in Turkey? Aim: To determine the perception of the term schizophrenia among university students. Methods: This cross-sectional study was performed in April 2015 with students from Canik Başarı University (Samsun/Turkey). A patient history was first established. We then investigated to what extent students agreed with 10 statements based on that patient history. Three separate questionnaire forms (versions A, B and C), differing only in terms of the diagnosis in the patient in the history, were prepared. The three diagnoses were 'Schizophrenia' (version A), 'A psychiatric disease by the name of Bleuler's syndrome' (version B) and 'Brain tumor' (version C). The questionnaires were administered in a class environment. In all, 771 students participated. Results: Statistically significant differences between the forms were determined in only two statements ('A.'s disease will represent a problem in A.'s future career' and 'A. will in all probability have problems with the law in the future'). While no difference was determined between versions A and B at two-way comparisons, a statistically significant difference was observed between versions A and B and version C. Conclusion: No difference was determined between students' attitudes toward a diagnosis of 'schizophrenia' and one of 'a psychiatric disease known as Bleuler's syndrome'. The focus in preventing stigmatization of schizophrenia should not concentrate on a name change alone. Changing the name schizophrenia may be of no use unless public ignorance and fear of psychiatric diseases can also be overcome. abstract_id: PUBMED:35907348 What do psychiatrists think about renaming schizophrenia in Turkey? In this study, the aim was to evaluate the opinions of psychiatrists in Turkey on whether to change the name of schizophrenia in order to reduce stigma. This cross-sectional survey was conducted with psychiatrists (resident in psychiatry, specialist, and lecturer) in Turkey. An online survey was created via the Google Forms public web address. Online questionnaires were delivered through Google Forms by emailing and messaging on WhatsApp, Telegram, Google and Yahoo groups and asking them to pass the questionnaire to other possible participants in their network. The study was performed between June 20, 2021 and July 10, 2021. 460 psychiatrists participated in the study. 45.2% of psychiatrists think that the name of schizophrenia should be changed to reduce stigma. 42.8% of those who support the name change state that this change should be done as soon as possible. While 64.1% of psychiatrists stated that naming the disease with another (new) name instead of schizophrenia could increase the hopes of patients and their relatives for recovery, 12.6% stated that renaming would not cause any positive or negative changes. There is no statistical difference between psychiatrists who have a relative diagnosed with schizophrenia and psychiatrists who do not, in terms of thinking that the name of schizophrenia should be changed to reduce stigma. In order to remove the stigma on schizophrenia, many interventions are required in social, cultural, economic and political fields. Renaming schizophrenia may be a good start for interventions.
abstract_id: PUBMED:28177184 Associations between renaming schizophrenia and stigma-related outcomes: A systematic review. Renaming schizophrenia is a potential strategy to reduce the stigma attached to people with schizophrenia. However, the overall associations between renaming schizophrenia and stigma-related outcomes have not been fully elucidated. We conducted a systematic review of studies that empirically compared outcomes between new or alternative terms and old or existing terms for schizophrenia. We searched for relevant articles in eight bibliographic databases, conducted a Google search, examined reference lists, and contacted relevant experts. We found a total of 2601 reference records, and 23 articles were included in this review. Overall, in countries where schizophrenia has been renamed, the name changes may be associated with improvements in adults' attitudes toward people with schizophrenia, and with increased diagnosis announcement. However, studies conducted in countries where schizophrenia has not been renamed report inconsistent findings. In addition, renaming may not influence portrayals of schizophrenia in the media. Most studies included in our review had a risk of bias in their methodology, and we employed a vote-counting method to synthesize study results; therefore, the impacts of renaming are still inconclusive. Future studies are needed to address the following issues: use of univariate descriptive statistics, adjustment for confounding variables, use of reliable measures, and employing a question that addresses the image of split or multiple personalities. Evidence is limited regarding the associations between renaming and stigma experienced by both people with schizophrenia and their families (e.g., perceived stigma, self-stigma, discrimination experience, and burden). Further research in these populations is needed to confirm the effects of renaming schizophrenia. abstract_id: PUBMED:35620801 Attitudes Toward People With Schizophrenia Among Undergraduate Nursing Students. Background: Negative attitudes toward mental disorders are not only an interpersonal issue but also a concern of mental health care. Given that nursing students are future health care providers, it is pivotal to improve their attitudes toward individuals with mental disorders prior to their transition into clinical practice. However, research on nursing students' attitudes in relation to schizophrenia in Taiwan remains unexplored. Aim: The aim of this article is to examine the correlates of attitudes toward individuals with schizophrenia among Taiwanese undergraduate nursing students. Method: A descriptive, correlational, and cross-sectional study was adopted. Self-report questionnaires were administered to a convenience sample of 306 Taiwanese undergraduate nursing students. Descriptive statistics, independent t tests, one-way analysis of variance, Pearson's correlations, and a stepwise regression analysis were performed. Results: Nursing students expressed negative attitudes toward individuals with schizophrenia. Nursing students who were female, had contact with individuals with mental disorders, and expressed greater empathy and personality traits held more favorable attitudes toward individuals with schizophrenia. The study found that empathy, personality traits, and academic year were the most crucial attributes contributing to attitudes of nursing students toward individuals with schizophrenia.
Conclusion: Findings suggest that nursing education programs with empathy- and personality-tailored modules in mental health are pivotal for fostering humanistic approaches and supportive attitudes regarding schizophrenia. abstract_id: PUBMED:30032592 Effects of Renaming Schizophrenia in Korea: from "Split-Mind Disorder" to "Attunement Disorder". Objective: The Korean Neuropsychiatric Association changed the Korean name of schizophrenia from 'Split-mind Disorder' to 'Attunement Disorder' in 2012. This study assessed attitudes towards the renaming of schizophrenia among mental health practitioners (n=440), patients with schizophrenia and their guardians (n=396), and university students (n=140) using self-administered questionnaires. Methods: The questionnaire included items on participants' perception of the renaming of the disease and on how the disease name was communicated to them, in order to confirm the effect of the name change. Results: The notification rate of the disease name by mental health practitioners was confirmed to have increased significantly after the renaming. Among patients and their guardians, 24.9% and 15.0%, respectively, perceived their own or the family member's illness as 'attunement disorder'. Conclusion: Patients and their guardians continue to display low awareness of the disease name 'attunement disorder.' However, mental health practitioners were found to be able to use the name 'attunement disorder' easily, as reflected in the increased notification rate of the new disease name. abstract_id: PUBMED:35329254 Renaming Schizophrenia and Stigma Reduction: A Cross-Sectional Study of Nursing Students in Taiwan. Schizophrenia is one of the most stigmatized mental disorders. In 2014, schizophrenia was renamed in Mandarin in Taiwan, from the old name of "mind-splitting disease" to the new name "disorder with dysfunction of thought and perception", in an attempt to reduce the stigmatization of schizophrenia. This cross-sectional study aimed to investigate the effects of renaming schizophrenia on its stigma in nursing students. We examined the public stigma, self-stigma, and social distance associated with schizophrenia and compared them before and after the renaming. Basic demographic data and previous contact experience were collected, and participants completed a modified Attribution Questionnaire, the Perceived Psychiatric Stigma Scale, and a modified Social Distance Scale. The final sample comprised 99 participants. Assessment revealed that the renaming significantly reduced public stigma, self-stigma, and social distance. Regarding the old and new names for schizophrenia, the fourth-year nursing students scored significantly higher on public stigma and self-stigma than did the first-year students. Personal exposure to individuals diagnosed with mental disorders reduced public stigma toward schizophrenia. The study findings suggest that the renaming of schizophrenia reduced its associated stigma. Providing accurate information and instruction by qualified tutors, as well as exposure to patients in acute exacerbation in hospital settings and to recovered patients in the community, are important. Further studies with a longitudinal design, participants from diverse backgrounds, and larger sample sizes are warranted to investigate the effect of renaming on the stigma toward schizophrenia. abstract_id: PUBMED:23173747 Influence of contact with schizophrenia on implicit attitudes towards schizophrenia patients held by clinical residents.
Background: Patients with schizophrenia and their families have suffered greatly from stigmatizing effects. Although many efforts have been made to eradicate both prejudice and stigma, they still prevail even among medical professionals, and little is known about how contact with schizophrenia patients affects their attitudes towards schizophrenia. Methods: We assessed the impact of the renaming of the Japanese term for schizophrenia on clinical residents and also evaluated the influence of contact with schizophrenia patients on attitudes toward schizophrenia by comparing the attitudes toward schizophrenia before and after a one-month clinical training period in psychiatry. Fifty-one clinical residents participated. Their attitudes toward schizophrenia were assessed twice, before and one month after clinical training in psychiatry, using the Implicit Association Test (IAT) as well as Link's devaluation-discrimination scale. Results: The old term for schizophrenia, "Seishin-Bunretsu-Byo", was more congruent with criminal than the new term for schizophrenia, "Togo-Shitcho-Sho", before clinical training. However, quite opposite to our expectation, after clinical training the new term had become even more congruent with criminal than the old term. There was no significant correlation between Link's scale and the IAT effect. Conclusions: Renaming the Japanese term for schizophrenia still reduced the negative images of schizophrenia among clinical residents. However, contact with schizophrenia patients unexpectedly changed clinical residents' attitudes towards schizophrenia negatively. Our results might contribute to an understanding of the formation of negative attitudes about schizophrenia and assist in developing appropriate clinical training in psychiatry that could reduce prejudice and stigma concerning schizophrenia. abstract_id: PUBMED:29357754 Attitudes Toward Euthanasia for Patients Who Suffer From Physical or Mental Illness. This study examined whether attitudes toward euthanasia vary with type of illness and with the source of the desire to end the patient's life. The study used a 3 (illness type: cancer, schizophrenia, depression) × 2 (euthanasia type: patient-initiated, family-initiated) between-groups experimental design. An online questionnaire was administered to 324 employees and students from an Australian public university following random assignment of participants to one of the six vignette-based conditions. Attitudes toward euthanasia were more positive for patients with a physical illness than a mental illness. For a patient with cancer or depression, but not schizophrenia, approval was greater for patient-initiated than for family-initiated euthanasia. Relationships between illness type and attitudes were mediated by perceptions of patient autonomy and illness controllability. Findings have implications for debate, practices, and legislation regarding euthanasia. abstract_id: PUBMED:33441112 Daily functioning and symptom factors contributing to attitudes toward antipsychotic treatment and treatment adherence in outpatients with schizophrenia spectrum disorders. Background: Poor adherence and negative attitudes to treatment are common clinical problems when treating psychotic disorders. This study investigated how schizophrenia core symptoms and daily functioning affect treatment adherence and attitudes toward antipsychotic medication, and compared patients using clozapine with those using other antipsychotics.
Method: A cross-sectional study with data from 275 patients diagnosed with schizophrenia spectrum disorder. Patients' adherence, attitudes, insight and side-effects were evaluated using the Attitudes toward Neuroleptic Treatment scale. Overall symptomatology was measured using the Brief Psychiatric Rating Scale (BPRS) and the Health of the Nation Outcome Scale (HoNOS). Functioning was assessed using the activities of daily living, instrumental activities of daily living, and social functioning scales. Results: Self-reported treatment adherence was high. Of the patients, 83% reported using at least 75% of the prescribed medication. Having more symptoms was associated with a more negative attitude towards treatment. There was a modest association between functioning and both treatment adherence and attitude toward antipsychotic treatment. Attitudes affected adherence in the non-clozapine group but not in the clozapine group. Conclusion: Early detection of non-adherence is difficult. Systematic evaluation of attitudes toward treatment could be one way to assess this problem, along with optimized medication, prompt evaluation of side effects and flexible use of psychosocial treatments. abstract_id: PUBMED:30089973 Indian nursing students' attitudes toward mental illness and persons with mental illness. Background: The nursing fraternity plays a pivotal role in psychiatric services. Negative attitudes toward mental illness have been consistently reported in nursing students. Objective: The current study aimed at assessing the attitude toward mental illness and also at comparative evaluation of the attitude toward persons with different psychiatric, physical, and social conditions. Materials And Methods: The present study was carried out among conveniently selected nursing students (n = 100) using the Attitude Scale for Mental Illness (ASMI) and the Attitude to Mental Illness Questionnaire (AMIQ) on their 1st day of psychiatric posting.
However, a survey conducted among psychiatrists in Turkey indicated that 45.2% of psychiatrists believe that renaming schizophrenia could help reduce stigma, with 42.8% of those supporting a name change stating that it should be done as soon as possible (PUBMED:35907348). This highlights a belief within the psychiatric community that a name change could be a positive step towards stigma reduction. A systematic review on the associations between renaming schizophrenia and stigma-related outcomes revealed that in countries where the name has been changed, there may be improvements in adults' attitudes and increased diagnosis announcement, but the evidence is not conclusive and further research is needed (PUBMED:28177184). Additionally, a study in Korea showed that after renaming schizophrenia from 'Split-mind Disorder' to 'Attunement Disorder', there was an increase in the notification rate of the disease name by mental health practitioners, although patients and guardians displayed low awareness of the new name (PUBMED:30032592). In Taiwan, a cross-sectional study found that renaming schizophrenia significantly reduced public stigma, self-stigma, and social distance among nursing students (PUBMED:35329254). This suggests that renaming the disorder can have a positive impact on attitudes towards schizophrenia in some contexts. Overall, while there is some evidence that renaming schizophrenia could contribute to reducing negative attitudes towards patients, the effectiveness of such a change may vary by context and is likely to require additional interventions in social, cultural, economic, and political fields to effectively combat stigma (PUBMED:35907348).
Instruction: Creating and testing the concept of an academic NGO for enhancing health equity: a new mode of knowledge production? Abstracts: abstract_id: PUBMED:18058687 Creating and testing the concept of an academic NGO for enhancing health equity: a new mode of knowledge production? Context: Collaborative action is required to address persistent and systematic health inequities which exist for most diseases in most countries of the world. Objectives: The Academic NGO initiative (ACANGO) described in this paper was set up as a focused network giving priority to twinned partnerships between Academic research centres and community-based NGOs. ACANGO aims to capture the strengths of both in order to build consensus among stakeholders, engage the community, focus on leadership training, shared management and resource development and deployment. Methods: A conceptual model was developed through a series of community consultations. This model was tested with four academic-community challenge projects based in Kenya, Canada, Thailand and Rwanda and an online forum and coordinating hub based at the University of Ottawa. Findings: Between February 2005 and February 2007, each of the four challenge projects was able to show specific outputs, outcomes and impacts related to enhancing health equity through the relevant production and application of knowledge. Conclusions: The ACANGO initiative model and network has demonstrated success in enhancing the production and use of knowledge in program design and implementation for vulnerable populations. abstract_id: PUBMED:35874352 Using Digital Concept Maps in Conflict Resolution Studies: Implications for Students' Argumentative Skills, Domain-Specific Knowledge, and Academic Efficacy. While argumentation emerges as one of the major learning skills in the twenty-first century, a somewhat opaque landscape is revealed in terms of identifying its potential in enhancing higher-education students' domain-specific knowledge. In this study, argumentation-for-learning activity with digital concept mapping (CM) was designed and compared with a traditional teacher-centered activity to determine the former's effectiveness in promoting students' domain-specific factual, conceptual, and procedural knowledge. This study also examines how the proposed activity may contribute to students' academic efficacy and thus promote meaningful learning. A quasi-experimental design was employed by using convenience samples. Two identical courses were selected for this research: the first course with a total of 59 students (the research group), and the second course including a total of 63 students (the control group). Both groups' domain-specific knowledge was assessed before and after the activity. The designed activity was found to be less effective in fostering factual knowledge and more effective in developing the conceptual and procedural knowledge domains. Another finding demonstrated the benefits of argumentation for learning with CM in facilitating students' academic efficacy. It can be concluded that engaging students in a deep argumentation learning process may in turn deepen predominantly conceptual and procedural domain-specific knowledge. Limitations and implications are discussed. abstract_id: PUBMED:31564992 Study of the possibility of introduction of Kazakhstan NGO-based rapid HIV testing procedures. 
Introduction: New initiatives presented by the World Health Organization (WHO) and the Joint United Nations Program on HIV and AIDS, such as 90-90-90, test and treat, and preventive treatment, together with best international practices for introducing rapid human immunodeficiency virus (HIV) testing in clinics and in field conditions, including self-testing, prompted the introduction of rapid HIV testing based in nongovernmental organizations (NGOs) in the Republic of Kazakhstan. This work presents the results of a comprehensive study conducted on the possible introduction of NGO-based rapid HIV testing in the country. It should be noted that 32,573 HIV infections were diagnosed in Kazakhstan (prevalence of 117.7 per 100,000 people) from 1987 to 2018. Most of these new cases are diagnosed among "key" population groups, such as people who inject drugs, sex workers, and men who have sex with men; these groups rely mainly on NGOs and prefer to deal with such organizations, which makes it possible to introduce NGO-based rapid HIV testing in Kazakhstan. Methods: In this work, we used the following rapid HIV tests: Alere Determine™ HIV ½ Ag/Ab Combo, Hexagon HIV 1+2, Abon HIV ½, HIV 1,2 Han Medtest, and Geenius HIV1/2 Confirmatory. Results: The study of the rapid tests, including their diagnostic patterns, conducted in Kazakhstan shows that five rapid HIV tests fully meet the WHO's requirements (sensitivity >99%; specificity >98%). These are Alere Determine™ HIV ½ Ag/Ab Combo, Hexagon HIV 1+2, Abon HIV ½, HIV 1,2 Han Medtest, and Geenius HIV1/2 Confirmatory. The study of legal and social problems associated with rapid HIV testing in NGOs shows that HIV-related medical examination and counseling carried out in Kazakhstan, including testing by rapid methods, are governed by the corresponding laws and normative legal documents. Conclusion: It has been established that there are social barriers that interfere with rapid HIV testing. In view of this, services associated with NGO-based rapid HIV testing should be delivered with a social and legal protection mechanism in place for those under examination. abstract_id: PUBMED:34242542 Perceived discrimination and mental health in college students: A serial indirect effects model of mentoring support and academic self-concept. Objective: To examine the direct and indirect effects of perceived discrimination, mentoring support, and academic self-concept on college student mental health. Participants: Three hundred fifteen undergraduates of minoritized gender (72%), ethnic (57%), and sexual (37%) identities. Methods: An online survey assessed perceived discrimination, mentoring support, academic self-concept, and mental health. Results: Perceived discrimination was associated with mentoring support (B=-0.11, p=.019), academic self-concept (B=-0.13, p<.001), and mental health (B=-0.15, p<.001). Additionally, mentoring support (B = 0.29, p<.001) and academic self-concept (B = 0.53, p<.001) were associated with mental health, and with each other (B = 0.25, p<.001). Significant indirect effects were observed such that mentoring support and academic self-concept, individually and collectively, contributed to the association between perceived discrimination and mental health.
Conclusions: Addressing discrimination, creating supportive relationships, and facilitating academic growth may reduce mental health concerns in undergraduate populations, thereby having implications for college transition and retention strategies. abstract_id: PUBMED:29230698 Mode 2 Knowledge Production in the Context of Medical Research: A Call for Further Clarifications. The traditional researcher-driven environment of medical knowledge production is losing its dominance with the expansion of, for instance, community-based participatory or participant-led medical research. Over the past few decades, sociologists of science have debated a shift in the production of knowledge from traditional discipline-based (Mode 1) to more socially embedded and transdisciplinary frameworks (Mode 2). Recently, scholars have tried to show the relevance of Mode 2 knowledge production to medical research. However, the existing literature lacks detailed clarifications on how a model of Mode 2 knowledge production can be constructed in the context of medical research. This paper calls for such further clarifications. As a heuristic means, the advocacy for a controversial experimental stem cell therapy (Stamina) is examined. It is discussed that the example cannot be considered a step towards Mode 2 medical knowledge production. Nonetheless, the example brings to the fore some complexities of medical knowledge production that need to be further examined including: (1) the shifting landscape of defining and addressing vulnerability of research participants, (2) the emerging overlap between research and practice, and (3) public health implications of revising the standard notions of quality control and accountability. abstract_id: PUBMED:23113133 Health research evaluation and its role on knowledge production. Background: Knowledge production and evaluation are two important functions of health research system (HRS). In this article, we aimed to reveal the correlation between evaluation of health research organizations and health knowledge production promotion. Methods: A comprehensive evaluation system was developed to evaluate the academic performance of national medical science universities on an annual basis. It assess following domains; stewardship, capacity building and knowledge production. Measurable indicators for each domain were assigned, a 'research profile' for each department was provided. In this study, we compared the results of annually national Health Research System evaluation findings during 2005-2008. Results: The number of scientific articles has been increased from 4672 to 8816 during 2005 to 2008. It is mentionable that, the number of articles which has been published in indexed data bases has risen too. This fact could be related to directed policy for more international publication of scientific articles from Iran. The proportion of total articles to the number of academic members was 1.14 in 2008, comparing to 0.84 in 2005. It means that this proportion have increased about twice (0.7 Vs 0.45) during mentioned time. Moreover, other scientific products such as authored books based on domestic researches and cited articles in textbooks have increased according to special attention to knowledge production by policy makers. Conclusion: We conclude that Health System Research evaluation could be used as a mean for implementing policies and promoting knowledge production. 
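The WHO-style performance thresholds quoted in the Kazakhstan rapid-HIV-testing abstract above (sensitivity >99%, specificity >98%) reduce to a simple confusion-matrix calculation. The sketch below illustrates only that arithmetic; the counts, function names, and threshold defaults are hypothetical assumptions, not data from the study.

```python
# Illustrative only: hypothetical confusion-matrix counts, not data from the cited study.

def sensitivity_specificity(tp, fn, tn, fp):
    """Return (sensitivity, specificity) as fractions from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # true positives among all reference-positive samples
    specificity = tn / (tn + fp)   # true negatives among all reference-negative samples
    return sensitivity, specificity

def meets_who_criteria(sensitivity, specificity, min_sens=0.99, min_spec=0.98):
    """Check a rapid test against WHO-style thresholds like those quoted in the abstract."""
    return sensitivity > min_sens and specificity > min_spec

if __name__ == "__main__":
    # Hypothetical panel: 200 reference-positive and 500 reference-negative specimens.
    sens, spec = sensitivity_specificity(tp=199, fn=1, tn=496, fp=4)
    print(f"sensitivity = {sens:.3f}, specificity = {spec:.3f}, "
          f"meets criteria: {meets_who_criteria(sens, spec)}")
```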
abstract_id: PUBMED:37099650 The Academic Learning Health System: A Framework for Integrating the Multiple Missions of Academic Medical Centers. The learning health system (LHS) has emerged over the past 15 years as a concept for improving health care delivery. Core aspects of the LHS concept include: promoting improved patient care through organizational learning, innovation, and continuous quality improvement; identifying, critically assessing, and translating knowledge and evidence into improved practices; building new knowledge and evidence around how to improve health care and health outcomes; analyzing clinical data to support learning, knowledge generation, and improved patient care; and engaging clinicians, patients, and other stakeholders in processes of learning, knowledge generation, and translation. However, the literature has paid less attention to how these LHS aspects may integrate with the multiple missions of academic medical centers (AMCs). The authors define an academic learning health system (aLHS) as an LHS built around a robust academic community and central academic mission, and they propose 6 features that emphasize how an aLHS differs from an LHS. An aLHS capitalizes on embedded academic expertise in health system sciences; engages the full spectrum of translational investigation from mechanistic basic sciences to population health; builds pipelines of experts in LHS sciences and clinicians with fluency in practicing in an LHS; applies core LHS principles to the development of curricula and clinical rotations for medical students, housestaff, and other learners; disseminates knowledge more broadly to advance the evidence for clinical practice and health systems science methods; and addresses social determinants of health, creating community partnerships to mitigate disparities and improve health equity. As AMCs evolve, the authors expect that additional differentiating features and ways to operationalize the aLHS will be identified and hope this article stimulates further discussion around the intersection of the LHS concept and AMCs. abstract_id: PUBMED:27324644 Engaging indigenous and academic knowledge on bees in the Amazon: implications for environmental management and transdisciplinary research. Background: This paper contributes to the development of theoretical and methodological approaches that aim to engage indigenous, technical and academic knowledge for environmental management. We present an exploratory analysis of a transdisciplinary project carried out to identify and contrast indigenous and academic perspectives on the relationship between the Africanized honey bee and stingless bee species in the Brazilian Amazon. The project was developed by practitioners and researchers of the Instituto Socioambiental (ISA, a Brazilian NGO), responding to a concern raised by a funding agency, regarding the potential impact of apiculture development by indigenous peoples, on the diversity of stingless bee species in the Xingu Park, southern Brazilian Amazon. Research and educational activities were carried out among four indigenous peoples: Kawaiwete or Kaiabi, Yudja or Juruna, Kīsêdjê or Suyá and Ikpeng or Txicão. Methods: A constructivist qualitative approach was developed, which included academic literature review, conduction of semi-structured interviews with elders and leaders, community focus groups, field walks and workshops in schools in four villages. Semi-structured interviews and on-line surveys were carried out among academic experts and practitioners. 
Results: We found that in both indigenous and scientific perspectives, diversity is a key aspect in keeping exotic and native species in balance and thus avoiding heightened competition and extinction. The Africanized honey bee was compared to the non-indigenous westerners who colonized the Americas, with whom indigenous peoples had to learn to coexist. We identify challenges and opportunities for engagement of indigenous and scientific knowledge for research and management of bee species in the Amazon. A combination of small-scale apiculture and meliponiculture is viewed as an approach that might help to maintain biological and cultural diversity in Amazonian landscapes. Conclusion: The articulation of knowledge from non-indigenous practitioners and researchers with that of indigenous peoples might inform sustainable management practices that are, at the same time, respectful of indigenous perspectives and intellectual property rights. However, there are ontological, epistemological, political and financial barriers and constraints that need to be addressed in transdisciplinary research projects inter-relating academic, technical and indigenous knowledge systems for environmental management. abstract_id: PUBMED:25376134 Academic self-concept in children with epilepsy and its relation to their quality of life. Objectives: Academic achievement in children with epilepsy is a highly studied topic with many important implications. However, only little attention has been devoted to academic self-concept of such children and the relation of academic self-concept to their quality of life. We aimed to examine academic self-concept in children with epilepsy, to assess its relationship to academic achievement and to determine possible correlations between academic self-concept and quality of life. Methods: The study group consisted of 182 children and adolescents aged 9-14 years who completed the student's perception of ability scale (SPAS) questionnaire to determine their academic self-concept and the modified Czech version of the CHEQOL-25 questionnaire to determine their health-related quality of life. Results: We found that academic self-concept in children with epilepsy was on average significantly lower than in their peers without seizures, especially with regard to general school-related abilities, reading, and spelling. On the other hand, the variance in the data obtained from the group of children with epilepsy was significantly higher than in the whole population and the proportion of individuals with very high academic self-concept seems comparable among children with and without epilepsy. Moreover, it was found that correlations between academic self-concept and academic achievement are significantly lower in children with epilepsy than in the whole population. Discussion: The presented results suggest that considerable attention should be paid to the role of academic self-concept in education of children with epilepsy and to the factors influencing this self-concept in this group. abstract_id: PUBMED:35967595 How Classroom Environment Influences Academic Enjoyment in Mathematics Among Chinese Middle School Students: Moderated Mediation Effect of Academic Self-Concept and Academic Achievement. Purpose: Substantial literature has documented the influence of classroom environment on academic enjoyment. However, little is known about the mediating and moderating mechanisms underlying this relationship. 
Based on the control-value theory and the individual-context interaction model, a moderated mediation model was constructed in this study to further examine whether academic self-concept mediated the relation between classroom environment and enjoyment in mathematics and whether this mediating effect was moderated by academic achievement. Methods: We recruited 750 Chinese middle school students and they completed the classroom environment, academic self-concept, and academic enjoyment questionnaires. Results: After controlling for gender and grade, the results of structural equation modeling showed that academic self-concept partially mediated the association between classroom environment and enjoyment in mathematics. The mediating path from classroom environment to academic self-concept was moderated by academic achievement. Classroom environment positively predicted academic self-concept for the higher achieving students. However, the effect of classroom environment on academic self-concept was not significant for the lower achieving students. Conclusion: These findings highlight that classroom environment has a more salient impact on academic self-concept and enjoyment for higher achieving students than for lower achieving students. The study results provide guidelines for educators regarding effective interventions for fostering positive academic emotions. Answer: The concept of an academic NGO (ACANGO) represents a novel approach to knowledge production aimed at enhancing health equity. This initiative combines the strengths of academic research centers and community-based NGOs to address persistent health inequities through collaborative action. The ACANGO model was developed through community consultations and tested with four academic-community challenge projects in Kenya, Canada, Thailand, and Rwanda, as well as an online forum and coordinating hub based at the University of Ottawa. Between February 2005 and February 2007, these projects demonstrated specific outputs, outcomes, and impacts related to enhancing health equity through the production and application of knowledge. The ACANGO initiative has shown success in using knowledge to design and implement programs for vulnerable populations, suggesting that this model could be an effective new mode of knowledge production (PUBMED:18058687).
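Several of the abstracts above (the serial indirect effects model of perceived discrimination and the moderated mediation model of classroom environment) rest on the idea of an indirect effect: the product of an a-path (predictor to mediator) and a b-path (mediator to outcome, controlling for the predictor). The sketch below shows that computation on synthetic data with ordinary least squares; it is a simplified stand-in, not the SEM or bootstrapped models the studies actually used, and all variable names and coefficients are illustrative assumptions.

```python
# Minimal sketch of an indirect (mediation) effect, in the spirit of the models above.
# Synthetic data; variable names (x, m, y) and effect sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 750
x = rng.normal(size=n)                       # predictor (e.g., classroom environment)
m = 0.4 * x + rng.normal(size=n)             # mediator (e.g., academic self-concept)
y = 0.5 * m + 0.2 * x + rng.normal(size=n)   # outcome (e.g., academic enjoyment)

def ols_slopes(target, *predictors):
    """Least-squares slopes of target on predictors (intercept included, then dropped)."""
    X = np.column_stack([np.ones_like(target), *predictors])
    coef, *_ = np.linalg.lstsq(X, target, rcond=None)
    return coef[1:]

a = ols_slopes(m, x)[0]           # a-path: x -> m
b, c_prime = ols_slopes(y, m, x)  # b-path: m -> y controlling for x; c': direct effect
print(f"a = {a:.2f}, b = {b:.2f}, indirect effect a*b = {a*b:.2f}, direct c' = {c_prime:.2f}")
```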
Instruction: Can Swedish interactive thresholding algorithm fast perimetry be used as an alternative to goldmann perimetry in neuro-ophthalmic practice? Abstracts: abstract_id: PUBMED:12215089 Can Swedish interactive thresholding algorithm fast perimetry be used as an alternative to goldmann perimetry in neuro-ophthalmic practice? Objective: To assess the potential role of Swedish Interactive Thresholding Algorithm (SITA) Fast computerized static perimetry, compared with that of Goldmann manual kinetic perimetry (GVF), for reliably detecting visual field defects in neuro-ophthalmic practice. Background: Automated visual field testing is challenging in patients with poor visual acuity or severe neurological disease. In these patients, GVF is often the preferred visual field technique, but performance of this test requires a skilled technician, and this option may not be readily available. The recent development of the SITA family of perimetry has allowed for shorter automated perimetry testing time in normal subjects and in glaucoma patients. However, its usefulness for detecting visual field defects in patients with poor vision or neurological disease has not been evaluated. Design And Methods: We prospectively studied 64 consecutive, neuro-ophthalmologically impaired patients with neurologic disability of 3 or more on the Modified Rankin Scale, or with visual acuity of 20/200 or worse in at least one eye. Goldmann manual kinetic perimetry and SITA Fast results were compared for each eye, with special attention to reliability, test duration, and detection and quantification of neuro-ophthalmic visual field defects. We categorized the results into 1 of 9 groups based on similarities and reliabilities. Patient test preference was also assessed. Results: Patients were separated into 2 groups, those with severe neurologic deficits (n = 50 eyes) and those with severe vision loss but mild neurologic dysfunction or none at all (n = 50 eyes). Overall, GVF and SITA Fast were equally reliable in 77% of eyes. Goldmann manual kinetic perimetry and SITA Fast showed similar visual field results in 75% of all eyes (70% of eyes of patients with severe neurologic deficits and 80% of eyes with poor vision). The mean ± SD duration per eye was 7.97 ± 3.2 minutes for GVF and 5.43 ± 1.41 minutes for SITA Fast (P<.001). Ninety-one percent of patients preferred GVF to SITA Fast. Conclusions: We found the SITA Fast strategy of automated perimetry to be useful in the detection, and accurate in the quantification of central visual field defects associated with neuro-ophthalmic disorders. Our results suggest that for the general ophthalmologist or neurologist, visual field testing with SITA Fast perimetry might even be preferable to GVF, especially if performed by a marginally trained technician, even in patients with severely decreased vision or who are neurologically disabled. abstract_id: PUBMED:31211001 Reliability of Semiautomated Kinetic Perimetry (SKP) and Goldmann Kinetic Perimetry in Children and Adults With Retinal Dystrophies. Purpose: To investigate the precision of visual fields (VFs) from semiautomated kinetic perimetry (SKP) on Octopus 900 perimeters, for children and adults with inherited retinal degenerations (IRDs). Goldmann manual kinetic perimetry has long been used in the diagnosis and follow-up of these patients, but SKP is becoming increasingly common. Octopus VFs (OVFs) and Goldmann VFs (GVFs) were both mapped on two occasions.
Methods: Nineteen females and 10 males with IRDs were tested on OVFs and GVFs, with two targets per test (V4e and one smaller target). Tests were performed in the same (randomized) order at two visits about 1 week apart. The VFs were digitized to derive isopter solid angles. Comparisons, within and between visits, were performed with paired t-tests and Bland-Altman plots. Results: Median age was 20 years (range, 7-70; 10 participants aged ≤17 years old). There were no significant differences in solid angles between OVFs and GVFs (P ≥ 0.06) or between the two visits' solid angles on either perimeter (P ≥ 0.30). Between-visit test-retest variability for GVFs and OVFs was similar (P ≥ 0.73), with median values of approximately 9% to 13%. Overall variability was lower for children than adults (medians of 7.5% and 12.8%, respectively). Conclusions: Octopus SKP and Goldmann perimetry produced VFs of similar size and variability. Translational Relevance: Our study indicates that SKP provides a viable alternative to traditional Goldmann perimetry in clinical trials or care involving both children and adults with IRDs. abstract_id: PUBMED:29392149 Comparison of Matrix Frequency-Doubling Technology (FDT) Perimetry with the SWEDISH Interactive Thresholding Algorithm (SITA) Standard Automated Perimetry (SAP) in Mild Glaucoma. This study aimed to compare second-generation frequency-doubling technology (FDT) perimetry with standard automated perimetry (SAP) in mild glaucoma. Forty-seven eyes of 47 participants who had mild visual field defect by SAP were included in this study. All participants were examined using SITA 24-2 (SITA-SAP) and matrix 24-2 (Matrix-FDT). The correlations of global indices and the number of defects on pattern deviation (PD) plots were determined. Agreement between two sets regarding the stage of visual field damage was assessed. Pearson's correlation, intra-cluster comparison, paired t-test, and 95% limit of agreement were calculated. Although there was no significant difference between global indices, the agreement between the two devices regarding the global indices was weak (the limit of agreement for mean deviation was -6.08 to 6.08 and that for pattern standard deviation was -4.42 to 3.42). The agreement between SITA-SAP and Matrix-FDT regarding the Glaucoma Hemifield Test (GHT) and the number of defective points in each quadrant and staging of the visual field damage was also weak. Because the correlation between SITA-SAP and Matrix-FDT regarding global indices, GHT, number of defective points, and stage of the visual field damage in mild glaucoma is weak, Matrix-FDT cannot be used interchangeably with SITA-SAP in the early stages of glaucoma. abstract_id: PUBMED:30345638 Semi-automated kinetic perimetry: Comparison of the Octopus 900 and Humphrey visual field analyzer 3 versus Goldmann perimetry. Purpose: To evaluate the clinical usefulness and reproducibility of (semi-)automated kinetic perimetry of the Octopus 900 and Humphrey field analyzer 3 (HFA3) compared to Goldmann perimetry as reference technique. Methods: A prospective interventional study of two study groups, divided into three subgroups. The first study group consisted of 28 patients, performing one visual field examination on each of the three devices. 
A second group of 30 patients performed four examinations, one on Goldmann and three on Octopus 900 with the following testing strategies: (1) with reaction time (RT) vector, no headphone; (2) without RT vector, no headphone; and (3) without RT vector, with headphone. Comparisons for V4e and I4e stimuli were made of the mean isopter radius (MIR) and of the distances of the isopter to the central visual axis in four directions. Statistical analysis was performed with R software version 3.2.2. Results: For V4e stimuli, the mean isopter radius showed no statistically significant difference comparing Goldmann to HFA3 [p-value = 0.144; confidence interval (CI) -0.152 to 0.019] and comparing Goldmann to Octopus 900 without RT vector, either with (p-value = 0.347; CI -0.023 to 0.081) or without headphone (p-value = 0.130; CI -0.011 to 0.095). Octopus 900 with RT vector produced a significantly larger MIR for V4e stimuli in comparison to Goldmann (p-value < 0.001). I4e stimuli produced statistically significantly larger visual field areas when comparing HFA3 and Octopus 900 to Goldmann perimetry. Conclusion: Humphrey field analyzer 3 and Octopus 900 without RT vector are promising successors of Goldmann perimetry. abstract_id: PUBMED:38289405 Interpretation of the Visual Field in Neuro-ophthalmic Disorders. Purpose Of Review: In this review, we will describe current methods for visual field testing in neuro-ophthalmic clinical practice and research, develop terminology that accurately describes patterns of field deficits, and discuss recent advances such as augmented or virtual reality-based perimetry and the use of artificial intelligence in visual field interpretation. Recent Findings: New testing strategies that reduce testing times, improve patient comfort, and increase sensitivity for detecting small central or paracentral scotomas have been developed for static automated perimetry. Various forms of machine learning-based tools such as archetypal analysis are being tested to quantitatively depict and monitor visual field abnormalities in optic neuropathies. Studies show that the combined use of optical coherence tomography and standard automated perimetry to determine the structure-function relationship improves clinical care in neuro-ophthalmic disorders. Visual field assessment must be performed in all patients with neuro-ophthalmic disorders affecting the afferent visual pathway. Quantitative visual field analysis using standard automated perimetry is critical in initial diagnosis, monitoring disease progression, and guidance of therapeutic plans. Visual field defects can adversely impact activities of daily living such as reading, navigation, and driving and thus impact quality of life. Visual field testing can direct appropriate occupational low vision rehabilitation in affected individuals. abstract_id: PUBMED:29560366 Sensitivity and Specificity of Swedish Interactive Threshold Algorithm and Standard Full Threshold Perimetry in Primary Open-angle Glaucoma. Perimetry is one of the mainstays in glaucoma diagnosis and treatment. Various strategies offer different accuracies in glaucoma testing. Our aim was to determine and compare the diagnostic sensitivity and specificity of Swedish Interactive Threshold Algorithm (SITA) Fast and Standard Full Threshold (SFT) strategies of the Humphrey Field Analyzer (HFA) in identifying patients with visual field defect in glaucoma disease. This prospective observational case series study was conducted in a university-based eye hospital.
A total of 37 eyes of 20 patients with glaucoma were evaluated using the central 30-2 program and both the SITA Fast and SFT strategies. Both strategies were performed in each session, and testing was repeated four times over a 2-week period. Data were analyzed using the Student's t-test, analysis of variance, and chi-square test. The SITA Fast and SFT strategies had a similar sensitivity of 93.3%. The specificity of the SITA Fast and SFT strategies was 57.4% and 71.4%, respectively. The mean duration of SFT tests was 14.6 minutes, and that of SITA Fast tests was 5.45 minutes (a statistically significant 62.5% reduction). In gray scale plots, visual field defect was less deep in SITA Fast than in SFT; however, more points had significant defect (p < 0.5% and p < 1%) in pattern deviation plots in SITA Fast than in SFT; these differences were not clinically significant. In conclusion, the SITA Fast strategy showed higher sensitivity for detection of glaucoma compared to the SFT strategy, yet with reduced specificity; however, the shorter test duration makes it a more acceptable choice in many clinical situations, especially for children, the elderly, and those with musculoskeletal diseases. abstract_id: PUBMED:38130807 Agreement Between Virtual Reality Perimetry and Static Automated Perimetry in Various Neuro-Ophthalmological Conditions: A Pilot Study. Our objective was to compare the agreement between virtual reality perimetry (VRP) (order of magnitude, OM) and static automated perimetry (SAP) in various neuro-ophthalmological conditions. We carried out a retrospective analysis of visual field plots of patients with various neuro-ophthalmological conditions who underwent visual field testing using VRP and SAP between 1 January and 31 May 2022. Two fellowship-trained neuro-ophthalmologists compared the visual field defects observed on both devices. Per cent agreement was used to compare the interpretation of the two examiners on both techniques. The study criteria were met by 160 eyes from 148 patients (mean age 44 years, range 17-74 years). The most common aetiologies were optic atrophy due to various causes, optic neuritis, ischaemic optic neuropathy, and compressive optic neuropathy. Overall, we found good agreement between VRP and SAP for bitemporal (93.8%), hemianopic (90.8%), altitudinal (79.4%), and generalised visual field defects (86.4%). The agreement was acceptable for central/centrocaecal scotomas and not acceptable for enlarged blind spots. Between the two examiners there was good agreement for bitemporal (92.3%), hemianopic (82%), altitudinal (83%), and generalised field defects (76.4%). The results of our study suggest that VRP gives overall good agreement with SAP in various neuro-ophthalmological conditions, especially those likely to produce hemianopic, altitudinal, and generalised visual field defects. This could be useful in various settings; however, future larger studies are needed to explore real-world utilisation. abstract_id: PUBMED:11384569 Role of frequency doubling perimetry in detecting neuro-ophthalmic visual field defects. Purpose: To report the ability of frequency doubling perimetry to detect "neuro-ophthalmic" field defects, characterize them as hemianopic or quadrantanopic, and differentiate glaucomatous from "other" neuro-ophthalmic field defects.
Methods: Sixty eyes of 30 normal subjects, 50 eyes of 29 patients with glaucomatous defects, and 138 eyes of 103 patients with "typical" neuro-ophthalmic field defects underwent automated perimetry using the Swedish Interactive Threshold Algorithm and frequency doubling perimetry. The sensitivity and specificity for identification of a field defect (frequency doubling perimetry 20-5 and 20-1 screening tests), or to characterize hemianopia/quadrantanopia (full threshold test), were determined. Ability to discriminate glaucomatous defects was determined by comparing the frequency doubling perimetry full threshold test in glaucoma to pooled results of the normal and neuro-ophthalmic groups. Results: On frequency doubling perimetry, a single point depressed to less than 1% probability had a sensitivity of 97.1% (20-5 test) and 95.7% (20-1 test) for detecting a neuro-ophthalmic visual field defect. The corresponding specificities were 95% using pooled results in normal subjects and patients with glaucoma and "other" neuro-ophthalmic field defects. In 20-5 screening, a single abnormal point depressed to less than 2% probability level had a sensitivity of 98.6% (specificity 85%). Two abnormal points in the 20-1 screening depressed to less than 1% probability level had a specificity of 100% (sensitivity 84.8%). In frequency doubling perimetry full threshold, sensitivity and specificity for detection of hemianopia were 86.8% and 83.2%; for quadrantanopia they were 79.2% and 38.6%. The sensitivity and specificity for categorizing a defect as glaucomatous were 86% and 74.7%. Conclusions: Frequency doubling perimetry is a sensitive and specific test for detecting "neuro-ophthalmic" field defects. The presence of two abnormal points (20-1 screening program) "rules in" the presence of a field defect. A normal 20-5 program (absence of a single abnormal point) almost "rules out" a defect. Frequency doubling perimetry could not accurately categorize hemianopic, quadrantanopic, or glaucomatous defects. abstract_id: PUBMED:30868416 Expediency of the Automated Perimetry Using the Goldmann V Stimulus Size in Visually Impaired Patients with Glaucoma. Introduction: White-on-white standard automated perimetry (AP) uses a white round stimulus with 0.43° diameter and 4.0 mm² area (Goldmann size III). Patients with low vision have difficulty seeing such a small stimulus and are often tested with perimetry using the size V stimulus with 1.72° diameter and 64 mm² area. We undertook an observational case-control study to compare the performance of patients on AP using two differently sized stimuli. Methods: Patients with glaucoma and visual acuity worse than 20/100 underwent AP using the standard size III stimulus Swedish Interactive Threshold Algorithm (SITA) standard test and size V stimulus full threshold test. All patients were familiar with the procedure, having done the test at least twice previously. Another group of glaucoma patients with visual acuity better than 20/40 served as controls. The main outcome measures included test time, mean retinal sensitivity (MRS), foveal sensitivity (FS), fixation loss (FL), false positive (FP), false negative (FN), and the patient's subjective preference. Results: Fifty patients were included in the study. Most patients preferred the size V stimulus test. For glaucoma patients, test time was shorter with size III; MRS and FS were higher with size V; FL, FP, and FN did not differ between the tests.
Conclusion: AP with stimulus size V may be a good alternative to standard size III in selected visually debilitated patients who report difficulty undergoing a standard SITA 24-2 test. abstract_id: PUBMED:7258961 Automated perimetry in a neuro-ophthalmologic practice. Visual field examinations were performed with both the Fieldmaster Automated Perimeter and the Goldmann kinetic perimeter on 60 patients, aged 10 to 70 years (118 eyes) referred for neuro-ophthalmologic evaluation. Results suggest that the Fieldmaster perimeter is almost as sensitive as the Goldmann perimeter in identifying abnormal visual fields (99.2%). False-positive results were noted in only two cases (1.6%). Fieldmaster perimetry demonstrated certain advantages over Goldmann perimetry in easily fatigued, inattentive, or uncooperative patients. Recent technological advances in automated perimetry resulting in greater sensitivity and reproducibility of results now offer a viable alternative to Goldmann perimetry. Answer: Yes, Swedish Interactive Thresholding Algorithm (SITA) Fast perimetry can be used as an alternative to Goldmann perimetry in neuro-ophthalmic practice. A study comparing SITA Fast with Goldmann manual kinetic perimetry (GVF) in neuro-ophthalmologically impaired patients found that SITA Fast was useful in detecting and accurately quantifying central visual field defects associated with neuro-ophthalmic disorders. The study showed that SITA Fast and GVF were equally reliable in 77% of eyes and showed similar visual field results in 75% of all eyes. Additionally, SITA Fast had a shorter mean test duration per eye compared to GVF. Despite a preference for GVF among 91% of patients, the study concluded that SITA Fast might be preferable, especially if performed by a marginally trained technician, in patients with severely decreased vision or who are neurologically disabled (PUBMED:12215089). Moreover, other studies have also supported the use of alternative perimetry techniques in various conditions. For instance, semiautomated kinetic perimetry (SKP) on Octopus 900 perimeters was found to produce visual fields of similar size and variability to Goldmann perimetry in children and adults with inherited retinal degenerations (IRDs), indicating that SKP is a viable alternative to traditional Goldmann perimetry (PUBMED:31211001). Additionally, semi-automated kinetic perimetry using the Octopus 900 and Humphrey field analyzer 3 (HFA3) showed promising results as successors of Goldmann perimetry, with no statistically significant difference in mean isopter radius for V4e stimuli when compared to Goldmann (PUBMED:30345638). Furthermore, advancements in visual field testing, such as the development of new testing strategies that reduce testing times, improve patient comfort, and increase sensitivity for detecting small central or paracentral scotomas, have been noted in static automated perimetry. Machine learning-based tools are also being tested to quantitatively depict and monitor visual field abnormalities in optic neuropathies (PUBMED:38289405). In conclusion, SITA Fast perimetry and other alternative perimetry techniques have been shown to be reliable and can be considered as alternatives to Goldmann perimetry in neuro-ophthalmic practice, with the choice potentially depending on the specific patient population and clinical setting.
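Several of the perimetry comparisons above (e.g., Octopus 900 versus Goldmann solid angles, HFA3 versus Goldmann mean isopter radius) report Bland-Altman-style agreement statistics: a mean difference (bias) and 95% limits of agreement. The sketch below shows that calculation on synthetic paired measurements; the numbers and variable names are placeholders and are not data from the cited studies.

```python
# Minimal sketch of Bland-Altman limits of agreement, as used in the perimetry
# comparisons above. The paired measurements below are synthetic placeholders.
import numpy as np

def bland_altman(a, b):
    """Return (bias, lower and upper 95% limits of agreement) for paired measurements."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

if __name__ == "__main__":
    goldmann = [7.2, 6.8, 9.1, 5.4, 8.0, 7.5]   # e.g., isopter solid angles on device 1
    octopus  = [7.0, 7.1, 8.8, 5.6, 7.7, 7.9]   # the same eyes measured on device 2
    bias, lo, hi = bland_altman(goldmann, octopus)
    print(f"bias = {bias:.2f}, 95% limits of agreement = [{lo:.2f}, {hi:.2f}]")
```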
Instruction: Routine histologic examination of 728 mastectomy scars: did it benefit our patients? Abstracts: abstract_id: PUBMED:17051097 Routine histologic examination of 728 mastectomy scars: did it benefit our patients? Background: Routine histologic examination of secondarily excised mastectomy scars is considered good practice, even though the microscopic detection of a metastasis in clinically unsuspected mastectomy scars is rare. Because cost-effective use of histologic services is required, the occurrence rate of metastases in such scars needs to be established to assess the possible benefit of such routine examination. Methods: The histologic observations on 728 clinically unsuspected scars from prophylactic (n = 151) or curative (n = 395) mastectomy or breast-conservation treatment in 424 patients were traced and correlated to the indication of initial breast surgery, possible adjuvant therapy, and time lapse between initial surgery and scar examination. Results: In none of the 728 scars was a scar metastasis or de novo tumor found. Conclusions: Routine histologic examination of clinically unsuspected scars excised at the time of breast reconstruction or scar correction after prophylactic or curative breast surgery did not benefit the authors' patients. abstract_id: PUBMED:18216513 Should excised keloid scars be sent for routine histologic analysis? Introduction: Keloid scarring is a clinical diagnosis, usually preceded by a history of localized trauma. Significant variation exists as to whether excised specimens are sent for routine histologic analysis. We aimed to review the histology of all clinically diagnosed keloids at our unit. Methods: All keloids diagnosed clinically and excised were identified between April 1995 and April 2006. The subsequent histology results were analyzed. Results: Five hundred sixty-eight specimens were sent for pathologic investigation over an 11-year period. Four hundred fifty-eight (81%) were reported as "keloid," 60 (11%) as "acne keloidalis," 35 (6%) as "hypertrophic scar," and 14 (2%) as "normal scar." There were no reported malignancies or dysplasias. Discussion: These histology results suggest that, given a good clinical suspicion of keloid, it may be unnecessary to send specimens at excision for routine histology. abstract_id: PUBMED:19337082 Mastectomy scars following breast reconstruction: should routine histologic analysis be performed? Background: There is some debate in the recent literature regarding the routine submission of mastectomy scars for histologic analysis when performing delayed breast reconstructions. The aim of this study was to review the relevant publications and evaluate the practice of routine histologic examination of mastectomy scars. Methods: The authors conducted a retrospective review, across three regional plastic and reconstructive surgery units, of 433 patients who had 455 scars routinely sent for histologic examination following delayed breast reconstruction between January of 2000 and December of 2006. Patients with clinical evidence of recurrent carcinoma were excluded. Results: Data from 433 patients revealed an average age at reconstruction of 49.9 years (range, 25 to 77 years). The mean interval from primary breast surgery to reconstruction was 3.9 years (range, 2 months to 32 years), and the average length of patient follow-up, from primary surgery, was 6.4 years (range, 1 to 40 years). The majority of the initial operations were carried out for invasive carcinoma (89 percent). 
Four mastectomy scars in three patients were positive for carcinoma recurrence. Conclusions: The publications related to the practice of routine histologic analysis of mastectomy scars provide conflicting conclusions. As a proportion of patients may benefit from the early detection and treatment of locoregional recurrence, the authors suggest that the routine submission of mastectomy scars will allow for the earlier detection of soft-tissue recurrences that may affect long-term outcome. In keeping with cancer surgery principles, the authors recommend routine histologic examination of mastectomy scars following delayed breast reconstruction. abstract_id: PUBMED:17572595 Routine histologic examination of 728 mastectomy scars: did it benefit our patients? N/A abstract_id: PUBMED:29485577 Routine Pathologic Evaluation of Plastic Surgery Specimens: Are We Wasting Time and Money? Background: Recent health care changes have encouraged efforts to decrease costs. In plastic surgery, an area of potential cost savings includes appropriate use of pathologic examination. Specimens are frequently sent because of hospital policy, insurance request, or habit, even when clinically unnecessary. This is an area where evidence-based guidelines are lacking and significant cost-savings can be achieved. Methods: All specimen submitted for pathologic examination at two hospitals between January and December of 2015 were queried for tissue expanders, breast implants, fat, skin, abdominal pannus, implant capsule, hardware, rib, bone, cartilage, scar, and keloid. Specimens not related to plastic surgery procedures were excluded. Pathologic diagnosis and cost data were obtained. Results: A total of 759 specimens were identified. Of these, 161 were sent with a specific request for gross examination only. There were no clinically significant findings in any of the specimens. There was one incidental finding of a seborrheic keratosis on breast skin. The total amount billed in 2015 was $430,095. Conclusions: The infrequency of clinically significant pathologic examination results does not support routine pathologic examination of all plastic surgery specimens. Instead, the authors justify select submission only when there is clinical suspicion or medical history that warrants evaluation. By eliminating unnecessary histologic or macroscopic examination, significant cost savings may be achieved. abstract_id: PUBMED:23044349 Is routine histological examination of mastectomy scars justified? An analysis of 619 scars. Background: The increasing incidence of breast cancer is paralleled by an increasing demand for post-mastectomy breast reconstruction. At the time of breast reconstruction routine submission of mastectomy scars has been considered appropriate clinical practice to ensure that no residual cancer exists. However, this practice has been challenged by some and has become the topic of controversy. In a retrospective analysis we wished to assess whether routine submission of mastectomy scars altered treatment. Methods: Utilizing the Stanford Translational Research Integrated Database Environment (STRIDE) all patients who underwent implant-based breast reconstruction with routine histological analysis of mastectomy scars were identified. 
The following parameters were retrieved and analyzed: age, cancer histology, cancer stage (according to the American Joint Committee on Cancer staging system), receptor status (estrogen receptor [ER], progesterone receptor [PR], Her2neu), time interval between mastectomy and reconstruction, and scar histology. Results: A total of 442 patients with a mean age of 45.9 years (range, 22-73 years) were included in the study. Mastectomy with subsequent reconstruction was performed for in-situ disease and invasive cancer in 83 and 359 patients, respectively. A total of 619 clinically unremarkable mastectomy scars were sent for histological analysis, with the most common finding being unremarkable scar tissue (i.e. collagen fibers). Of note, no specimen revealed the presence of carcinoma. Conclusion: According to published reports routine histological examination of mastectomy scars may detect early local recurrence. However, we were not able to detect this benefit in our patient population. As such, particularly in the current health-care climate the cost-effectiveness of this practice deserves further attention. A more selective use of histological analysis of mastectomy scars in patients with tumors that display poor prognostic indicators may be a more reasonable utilization of resources. abstract_id: PUBMED:16918559 Histologic study of depressed acne scars treated with serial high-concentration (95%) trichloroacetic acid. Background: Acne scarring is a common manifestation that remains a therapeutic challenge to dermatologists, dermatologic surgeons, and plastic surgeons. Although multiple therapeutic modalities exist, treatment often remains inadequate. The use of high-concentration (95%) trichloroacetic acid (TCA) applied focally to atrophic acne scars has been described. Objective: The current study confirms the utility of focal application of 95% TCA to acne scars in addition to a histologic examination of this technique. Methods: Acne scars in three patients were treated with focal 95% TCA by serial application. Wooden applicators were used to apply TCA focally and repeated at 6-week intervals for a total of six treatments. Punch biopsies were performed at baseline and at 1 year postoperatively. Histologic examination was performed with routine hematoxylin/eosin, Masson trichrome, and Verhoeff-van Gieson staining. Results: Clinical examination revealed apparent cosmetic improvement in both depth and appearance of acne scars. Patient satisfaction was high. Histologic examination demonstrated a decrease in the depth of acne scars. In addition, increased collagen fibers and fragmentation of elastic fibers were noted. There were no complications from the procedure. Conclusion: Focal application of high-concentration TCA to atrophic and "ice-pick" acne scars appears to produce clinical improvement. Histologic changes of this technique are described. abstract_id: PUBMED:30121003 Three-dimensional histologic reconstruction of remnant functional accessory atrioventricular myocardial connections in a case of Wolff-Parkinson-White syndrome. Myocardial bundles working as accessory pathways in Wolff-Parkinson-White (WPW) syndrome are generally tiny tissues, so elucidating the culprit histology of atrioventricular (AV) myocardial connections requires careful serial sectioning of the AV junction. We performed a postmortem examination of accessory AV myocardial connections in an 84-year-old man who died from pneumonia 20 years after surgical cryoablation for WPW syndrome. 
Three-dimensional reconstruction images of serial histologic sections revealed accessory AV connections between the atrial and ventricular myocardium in the vicinity of the cryoablation scar. The remnant myocardial bridge was 4 mm wide and made up of multiple discontinuous fibers. This case was informative in that it provided for visualization of the histologic morphology of a remnant bundle of Kent. abstract_id: PUBMED:30489512 Utility and Cost Effectiveness of Routine, Histologic Evaluation of the Mastectomy Scar in Two-Stage, Implant-Based Reconstruction during Expander-to-Implant Exchange. Background: Routine histologic analysis of the mastectomy scar is well studied in the delayed breast construction population; no data regarding its utility in the immediate, staged reconstruction cohort have been published. Methods: A retrospective review of all of the senior author's (C.D.C.) patients who underwent immediate, staged reconstruction was performed. The mastectomy scar was analyzed routinely at the time of expander-to-implant exchange. Six hundred forty-seven breasts were identified. The mastectomy scar, time between expander and permanent implant, average patient age, and mastectomy indication were calculated. A cost analysis was completed. Results: All scar pathologic results were negative for in-scar recurrence. The majority, 353 breasts, underwent mastectomy for carcinoma, 94 for germline mutations, 15 for high-risk lesions, six for high family risk, and 179 for contralateral symmetry/risk reduction. The average age at mastectomy/expander placement was 47.7 ± 10.3 years, and the average time between expander placement and implant exchange was 254 ± 152 days. The total histologic charge per breast was $602. Conclusions: A clinically silent in-scar recurrence is, at most, a rare occurrence. Routine histologic analysis of the mastectomy scar can be safely avoided in the immediate, staged reconstruction cohort. Clinical Question/level Of Evidence: Therapeutic, IV. abstract_id: PUBMED:34104159 Frequency of incidental ocular findings during pre-employment screening at a tertiary care Eye hospital. Objective: To highlight the prevalence of incidentally discovered ocular findings harvested amongst candidates of different age groups presented for pre-employment screening in a tertiary care eye hospital. Methods: This Cross sectional prospective study was conducted in Armed Forces Institute of Ophthalmology, Rawalpindi, from Jun 2018 to Dec 2019. Data was collected using non-probability consecutive sampling technique. All candidates who appeared for medical fitness examination were included. Candidates belonged to various regions of Pakistan. Complete ophthalmic checkup including visual acuity, best corrected visual acuity, anterior and posterior segment examination was performed. The data analysis was done by IBM SPSS 2.0 software. Results: One thousand and five hundred (1500) candidates underwent ophthalmic medical fitness examination during Jun 2018 to Dec 2019, out of these 86% (1290) were males and 14% (210) were females. Mean age of the candidates was 23.14 ± 5.66 years. The most common incidental ocular findings were amblyopia 24.6% (369), strabismus 10% (150), cataract 7.3% (110), macular scar 6.5% (100) and colour vision deficiencies 5.5% (82). Conclusion: The study demonstrates that out of total patients, 77% (1095) were found to be asymptomatic and 23% (405) were symptomatic. The study provides frequency for prevailing diseases and can help in improvement of eye care screening. 
Answer: The routine histologic examination of 728 mastectomy scars did not benefit the patients in the study. According to the results, none of the 728 scars examined histologically revealed a scar metastasis or de novo tumor (PUBMED:17051097). Therefore, the routine histologic examination of clinically unsuspected scars excised at the time of breast reconstruction or scar correction after prophylactic or curative breast surgery did not provide any diagnostic benefit to the patients in this particular study.
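The cost argument running through the abstracts above can be made concrete with the figures reported in PUBMED:30489512 (647 breasts, a histologic charge of $602 per breast, and zero in-scar recurrences). The short Python sketch below is an illustrative back-of-the-envelope calculation only; it is not an analysis performed in any of the cited studies, and the variable names are assumptions introduced here.

# Illustrative arithmetic for routine mastectomy-scar histology, using figures
# quoted in PUBMED:30489512. Editorial sketch only, not part of the cited study.
breasts_examined = 647        # immediate, staged reconstruction cohort
charge_per_breast_usd = 602   # reported histologic charge per breast
in_scar_recurrences = 0       # all scar pathology results were negative

total_charge = breasts_examined * charge_per_breast_usd
print(f"Total histologic charge: ${total_charge:,}")  # -> $389,494
if in_scar_recurrences:
    print(f"Charge per detected recurrence: ${total_charge / in_scar_recurrences:,.0f}")
else:
    print("No in-scar recurrences detected; charge per detection is undefined.")

Run as written, the sketch prints a total charge of $389,494 against zero detections, which is the quantitative core of the cost-effectiveness concern raised in the answer above.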
Instruction: Is the Digital Divide for Orthopaedic Trauma Patients a Myth? Abstracts: abstract_id: PUBMED:27206259 Is the Digital Divide for Orthopaedic Trauma Patients a Myth? Prospective Cohort Study on Use of a Custom Internet Site. Objectives: Some have proposed that a so-called digital divide exists for orthopaedic trauma patients and that the clinical usefulness of the Internet for these patients is limited. No studies to date have confirmed this or whether patients would use a provided web resource. The hypotheses of this study were (1) a larger than expected percentage of trauma patients have access to the Internet and (2) if given access to a custom site, patients will use it. Design: Prospective cohort. Setting: Level 1 regional trauma center. Patients: Patients who were 18 years or older with acute operative fractures participated in this study. Enrollment was initiated either before discharge or at initial outpatient follow-up. Intervention: We conducted a survey of demographics, Internet usage, device type, eHealth Literacy, and intent to use the web site. Participants received a keychain containing the web address and a unique access code to our custom orthopaedic trauma web site. Main Outcome Measurements: Percentage of patients with Internet access and percentage of patients who visited the web site. Results: One hundred twelve patients were enrolled. Ninety-three percent (104/112) reported having Internet access (P < 0.0001). Only increasing age predicted lack of access (P < 0.015; odds ratio, 0.95). Most (95%, 106/112) planned to visit our site; however, only 11% (P < 0.001) accessed it. Conclusions: The digital divide is a myth in orthopaedic trauma. Despite widespread access and enthusiasm for our web site, few patients visited. This cautions against the allocation of resources for patient-specific web sites for orthopaedic trauma until a rationale for use can be better delineated. Level Of Evidence: Therapeutic Level IV. See Instructions for Authors for a complete description of levels of evidence. abstract_id: PUBMED:32818068 Finding NEEMO: towards organizing smart digital solutions in orthopaedic trauma surgery. There are many digital solutions which assist the orthopaedic trauma surgeon. This already broad field is rapidly expanding, making a complete overview of the existing solutions difficult. The AO Foundation has established a task force to address the need for an overview of digital solutions in the field of orthopaedic trauma surgery. Areas of new technology which will help the surgeon gain a greater understanding of these possible solutions are reviewed. We propose a categorization of the current needs in orthopaedic trauma surgery matched with available or potential digital solutions, and provide a narrative overview of this broad topic, including the needs, solutions and basic rules to ensure adequate use in orthopaedic trauma surgery. We seek to make this field more accessible, allowing for technological solutions to be clearly matched to trauma surgeons' needs. Cite this article: EFORT Open Rev 2020;5:408-420. DOI: 10.1302/2058-5241.5.200021. abstract_id: PUBMED:35697552 Comparing shared decision making using a paper and digital consent process. A multi-site, single centre study in a trauma and orthopaedic department. Introduction: The importance of shared decision making (SDM) for informed consent has been emphasised in the updated regulatory guidelines.
Errors of completion, legibility and omission have been associated with paper-based consent forms. We introduced a digital consent process and compared it against a paper-based process for quality and patient-reported involvement in shared decision making. Methods: 223 patients were included in this multi-site, single centre study. Patient consent documentation was by either a paper consent form or the Concentric digital consent platform. Consent forms were assessed for errors of legibility, completion and accuracy of content. Core risks for 20 orthopaedic operations were pre-defined by a Delphi round of experts and forms analysed for omission of these risks. SDM was determined via the 'collaboRATE Top Score', a validated measure for gold-standard SDM. Results: 72% (n = 78/109) of paper consent forms contained ≥1 error compared to 0% (n = 0/114) of digital forms (P < 0.0001). Core risks were unintentionally omitted in 63% (n = 68/109) of paper forms compared to less than 2% (n = 2/114) of digital consent forms (P < 0.0001). 72% (n = 82/114) of patients giving consent digitally reported gold-standard SDM compared to 28% (n = 31/109) with paper consent (P < 0.001). Conclusion: Implementation of a digital consent process has been shown to reduce both error rate and the omission of core risks on consent forms whilst increasing the quality of SDM. This novel finding suggests that using digital consent can improve both the quality of informed consent and the patient experience of SDM. abstract_id: PUBMED:35949264 Sensors and digital medicine in orthopaedic surgery. Digital health principles are starting to be evident in medicine. Orthopaedic trauma surgery is also being impacted - indirectly by all other improvements in the health ecosystem but also in particular efforts aimed at trauma surgery. Data acquisition is changing how evidence is gathered and utilized. Sensors are the pen and paper of the next wave of data acquisition. Sensors are gathering wide arrays of information to facilitate digital health relevance and adoption. Early adoption of sensor technology by the nonlegacy health environment is what has made sensor driven data acquisition so palatable to the normal health care system. As it applies to orthopaedic trauma, current sensor driven diagnostics and surveillance are nowhere near as developed as in the larger medical community. Digital health is being explored for health care records, data acquisition in diagnostics and rehabilitation, wellness to health care translation, intraoperative monitoring, surgical technique improvement, as well as some early-stage projects in long-term monitoring with implantable devices. The internet of things is the next digital wave that will undoubtedly affect medicine and orthopaedics. Internet of things (IoT) devices are now being used to enable remote health monitoring and emergency notification systems. This article reviews current and future concepts in digital health that will impact trauma care. abstract_id: PUBMED:29887237 Use of ASEPSIS scoring method for the assessment of surgical wound infections in a Greek orthopaedic department. Background: In Greece there is no systematic assessment of surgical wounds with the use of a validated instrument, while the ASEPSIS scoring method has been widely used internationally. Aim: To examine the frequency of wound infections and their correlations both with patient background factors, as well as surgery factors, with the use of ASEPSIS.
Methods: In this prospective, observational study, participants undergoing orthopaedic surgeries in a large hospital in Greece were assessed during hospitalisation and the first month after discharge using the ASEPSIS wound assessment tool. The principles of the Declaration of Helsinki were applied. Non-parametric statistical analyses were performed using SPSS 20.0. Results: In total, 111 patients participated; nearly half (49.5%) had a total ASEPSIS score of "0". Almost 3 out of 4 patients (76.6%) had an ASEPSIS score under or equal to "10" (satisfactory healing) and only 3.6% had a minor or severe surgical wound infection. The ASEPSIS score was only positively correlated to longer surgery duration and longer postoperative stay. Discussion: The frequency of surgical wound infections in orthopaedic patients in Greece is comparable to that described in the literature. ASEPSIS could be used for assessing patients and as a performance indicator in Greek orthopaedic departments. abstract_id: PUBMED:32007403 Still too noisy - An audit of sleep quality in trauma and orthopaedic patients. Introduction: An adequate amount of sleep is fundamental to health and well-being, especially for individuals recovering from an illness or injury. Trauma patients sustain musculoskeletal and tissue injuries and require a sufficient amount of sleep to promote recovery. However, it is known that patients can face difficulties sleeping in hospitals which impacts on their recovery. Aim: To determine the quality of sleep, influence of sleep quality and the impact of sleep quality on recovery in trauma and orthopaedic patients. Methodology: An exploratory descriptive design was applied using a clinical audit. As no standardised sleep assessment tool was identified, a sleep audit tool was developed. Findings: A total of 40 patients were recruited from two trauma and orthopaedic wards from a London Hospital in the United Kingdom. Of these 17 patients (43%) rated the quality of sleep as 'poor' and nearly half (n = 19, 46%) reported that the quality of their night-time sleep had affected their recovery. Two-thirds of patients reported noise was the main factor that disrupted their sleep, making it the highest contributing sleep disruptor (n = 26, 65%). Conclusion: A significant association between poor quality of sleep and patient recovery was identified in this small sample of trauma and orthopaedic patients. The findings suggest that nurses should try to create a suitable sleeping environment to enhance patient recovery. There is a need for a standardised sleep assessment tool and sleep audit tool so that the quality of patients' sleep can be accurately assessed and documented. abstract_id: PUBMED:38425488 Can researchers trust ICD-10 coding of medical comorbidities in orthopaedic trauma patients? Objectives: The 10th revision of the International Classification of Diseases (ICD-10) coding system may prove useful to orthopaedic trauma researchers to identify and document populations based on comorbidities. However, its use for research first necessitates determination of its reliability. The purpose of this study was to assess the reliability of electronic medical record (EMR) ICD-10 coding of nonorthopaedic diagnoses in orthopaedic trauma patients relative to the gold standard of prospective data collection. Design: Nonexperimental cross-sectional study. Setting: Level 1 Trauma Center. 
Patients/participants: Two hundred sixty-three orthopaedic trauma patients from 2 prior prospective studies from September 2018 to April 2022. Intervention: Prospectively collected data were compared with EMR ICD-10 code abstraction for components of the Charlson Comorbidity Index (CCI), obesity, alcohol abuse, and tobacco use (retrospective data). Main Outcome Measurements: Percent agreement and Cohen's kappa reliability. Results: Percent agreement ranged from 86.7% to 96.9% for all CCI diagnoses and was as low as 72.6% for the diagnosis "overweight." Only 2 diagnoses, diabetes without end-organ damage (kappa = 0.794) and AIDS (kappa = 0.798), demonstrated Cohen's kappa values to indicate substantial agreement. Conclusion: EMR diagnostic coding for medical comorbidities in orthopaedic trauma patients demonstrated variable reliability. Researchers may be able to rely on EMR coding to identify patients with diabetes without complications or AIDS. Chart review may still be necessary to confirm diagnoses. Low prevalence of most comorbidities led to high percentage agreement with low reliability. Level Of Evidence: Level 1 diagnostic. abstract_id: PUBMED:34488428 The role of Vitamin D in orthopaedic infection: a systematic literature review. Aims: Orthopaedic infection is a potentially serious complication of elective and emergency trauma and orthopaedic procedures, with a high associated burden of morbidity and cost. Optimization of vitamin D levels has been postulated to be beneficial in the prevention of orthopaedic infection. This study explores the role of vitamin D in orthopaedic infection through a systematic review of available evidence. Methods: A comprehensive search was conducted on databases including Medline and Embase, as well as grey literature such as Google Scholar and The World Health Organization Database. Pooled analysis with weighted means was undertaken. Results: Pooled analysis of four studies including 651 patients found the mean 25(OH)D level to be 50.7 nmol/l with a mean incidence of infection of 70%. There was a paucity of literature exploring prophylactic 25(OH)D supplementation on reducing orthopaedic infection; however, there was evidence of association between low 25(OH)D levels and increased incidence of orthopaedic infection. Conclusion: The results indicate a significant proportion of orthopaedic patients have low 25(OH)D levels, as well as an association between low 25(OH)D levels and orthopaedic infection, but more randomized controlled trials need to be conducted to establish the benefit of prophylactic supplementation and the optimum regimen by dose and time. Cite this article: Bone Jt Open 2021;2(9):721-727. abstract_id: PUBMED:28360496 Drivers of hospital length of stay in 56,000 orthopaedic trauma patients: The impact of postoperative cardiac events. Purpose: To determine whether postoperative cardiac complications following orthopaedic trauma treatment are associated with longer lengths of stay. Methods: This was a retrospective cohort study. We analyzed orthopaedic trauma patients in the United States for whom data was collected in the ACS-NSQIP database between the years of 2006 and 2013. The patient population included 56,217 orthopaedic trauma patients meeting any 1 of the 89 CPT codes selected in the ACS-NSQIP database. The main outcome measure was hospital length of stay following orthopaedic trauma treatment. Result: Of the 56,217 orthopaedic trauma patients, 749 (1.3%) developed postoperative adverse cardiac events.
There was a significant difference in total length of stay (p < 0.001): patients with cardiac complications on average stayed 10.6 days compared to 5.2 days for patients who did not experience such cardiac complications. This amounted to a difference of $24,316 in total hospital costs. Through multiple linear regression modeling controlling for multiple patient and surgical factors, the presence of cardiac complications significantly added 1.5 days in total hospital stay (p < 0.05). Conclusion: Orthopaedic trauma patients sustaining postoperative cardiac events have significantly longer hospital lengths of stay when compared to those who do not develop cardiac complications. This difference amounts to significantly higher health care costs. abstract_id: PUBMED:33937668 Sleep disturbances in orthopaedic trauma patients. Objectives: To determine the prevalence of sleep disturbances in orthopaedic trauma patients 3 months following surgery and to identify any subset(s) of patients at high risk for prolonged sleep disturbance. Design: Prospective cohort. Setting: Level 1 Trauma Center. Patient/participants: All patients at an orthopaedic trauma clinic from May 3, 2016 to Feb 23, 2017. Main Outcome Measurements: Baseline patient Pittsburgh Sleep Questionnaire (PSQI) and Insomnia Severity Index (ISI) scores compared to PSQI and ISI scores 3 months postoperatively. Both gender- and age-stratified data analyses were performed. Results: Sixty-six patients met our inclusion criteria and completed both baseline and 3-month surveys. There were 44 males and 22 females. There was a significant increase in PSQI and ISI scores from baseline to 3 months across all patients. Further analysis revealed significant increases from baseline to 3 months in both PSQI and ISI scores for female patients but not male patients. There was a significant difference from baseline to 3 months in patients 50 years old and under but not for patients above 50 years old. No patients required revision surgery in the first 3 months. Conclusions: More than half of all patients reported continued sleep disturbance 3 months postoperatively. Females are at particularly increased risk for sleep disturbance. These findings demonstrate that sleep disturbances merit attention in the early stages of the postoperative recovery process. Level Of Evidence: Therapeutic Level II. Answer: The concept of a digital divide for orthopaedic trauma patients has been challenged by a study that found a higher than expected percentage of these patients have access to the Internet. In a prospective cohort study conducted at a Level 1 regional trauma center, 93% of the 112 enrolled patients reported having Internet access. Despite this widespread access and initial enthusiasm for a custom orthopaedic trauma website provided to them, only 11% accessed it. This suggests that while access to the Internet is not a significant barrier, the actual utilization of online resources provided to orthopaedic trauma patients is low. Therefore, the study concludes that the digital divide is a myth in orthopaedic trauma, but it also cautions against allocating resources for patient-specific websites until a clear rationale for their use is established (PUBMED:27206259).
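A recurring statistical point in the abstracts above (PUBMED:38425488) is that low-prevalence comorbidities can produce high percent agreement alongside unimpressive Cohen's kappa. The Python sketch below reproduces that effect with hypothetical 2x2 counts; only the total of 263 patients matches the study, and the split across cells is invented purely for illustration.

# Percent agreement and Cohen's kappa for one comorbidity, coded yes/no by a
# reference standard and by EMR ICD-10 abstraction (cf. PUBMED:38425488).
# The 2x2 counts below are hypothetical; only n = 263 mirrors the study.
def agreement_and_kappa(both_yes, ref_yes_only, emr_yes_only, both_no):
    n = both_yes + ref_yes_only + emr_yes_only + both_no
    observed = (both_yes + both_no) / n
    p_ref_yes = (both_yes + ref_yes_only) / n   # marginal prevalence, reference
    p_emr_yes = (both_yes + emr_yes_only) / n   # marginal prevalence, EMR coding
    expected = p_ref_yes * p_emr_yes + (1 - p_ref_yes) * (1 - p_emr_yes)
    return observed, (observed - expected) / (1 - expected)

scenarios = [
    ("common comorbidity", dict(both_yes=30, ref_yes_only=5, emr_yes_only=10, both_no=218)),
    ("rare comorbidity", dict(both_yes=2, ref_yes_only=5, emr_yes_only=10, both_no=246)),
]
for label, cells in scenarios:
    obs, kappa = agreement_and_kappa(**cells)
    print(f"{label}: agreement {obs:.1%}, kappa {kappa:.2f}")
# Both scenarios print roughly 94% agreement, but kappa falls from about 0.77
# to about 0.18 as prevalence drops, matching the pattern the abstract describes.

In other words, high percentage agreement can coexist with low chance-corrected reliability, which is exactly the caution the abstract raises about relying on EMR coding for rare comorbidities.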
Instruction: Extracranial Venous abnormalities: A true pathological finding in patients with multiple sclerosis or an anatomical variant? Abstracts: abstract_id: PUBMED:27011374 Extracranial Venous abnormalities: A true pathological finding in patients with multiple sclerosis or an anatomical variant? Objective: To evaluate the extracranial venous anatomy with contrast-enhanced MR venogram (CE-MRV) in patients without multiple sclerosis (MS), and assess the prevalence of various venous anomalies such as asymmetry and stenosis in this population. Materials And Methods: We prospectively recruited 100 patients without MS, aged 18-60 years, referred for contrast-enhanced MRI. They underwent additional CE-MRV from skull base to mediastinum on a 3T scanner. Exclusion criteria included prior neck radiation, neck surgery, neck/mediastinal masses or significant cardiac or pulmonary disease. Two neuroradiologists independently evaluated the studies to document asymmetry and stenosis in the jugular veins and prominence of collateral veins. Results: Asymmetry of internal jugular veins (IJVs) was found in 75 % of subjects. Both observers found stenosis in the IJVs with fair agreement. Most stenoses were located in the upper IJV segments. Asymmetrical vertebral veins and prominence of extracranial collateral veins, in particular the external jugular veins, was not uncommon. Conclusion: It is common to have stenoses and asymmetry of the IJVs as well as prominence of the collateral veins of the neck in patients without MS. These findings are in contrast to prior reports suggesting collateral venous drainage is rare except in MS patients. Key Points: • The venous anatomy of the neck in patients without MS demonstrates multiple variants • Asymmetry and stenoses of the internal jugular veins are common • Collateral neck veins are not uncommon in patients without MS • These findings do not support the theory of chronic cerebrospinal venous insufficiency • MR venography is a useful imaging modality for assessing venous anatomy. abstract_id: PUBMED:31185944 No association between variations in extracranial venous anatomy and clinical outcomes in multiple sclerosis patients over 5 years. Background: No longitudinal, long-term, follow-up studies have explored the association between presence and severity of variations in extracranial venous anatomy, and clinical outcomes in patients with multiple sclerosis (MS). Objective: This prospective 5-year follow-up study assessed the relationship of variations in extracranial venous anatomy, indicative of chronic cerebrospinal venous insufficiency (CCSVI) on Doppler sonography, according to the International Society for Neurovascular Disease (ISNVD) proposed consensus criteria, with clinical outcomes and disease progression in MS patients. Methods: 90 MS patients (52 relapsing-remitting, RRMS and 38 secondary-progressive, SPMS) and 38 age- and sex-matched HIs were prospectively followed for 5.5 years. Extracranial and transcranial Doppler-based venous hemodynamic assessment was conducted at baseline and follow-up to determine the extent of variations in extracranial venous anatomy. Change in Expanded Disability Status Scale (∆EDSS), development of disability progression (DP) and annualized relapse rate (ARR) were assessed. Results: No significant differences were observed in MS patients, based on their presence of variations in extracranial venous anatomy at baseline or at the follow-up, in ∆EDSS, development of DP or ARR. 
While more MS patients had ISNVD CCSVI criteria fulfilled at baseline compared to HIs (58% vs. 37%, p = 0.03), no differences were found at the 5-year follow-up (61% vs. 56%, p = 0.486). Discussion: This is the longest follow-up study assessing the longitudinal relationship between the presence of variations in extracranial venous anatomy and clinical outcomes in MS patients. Conclusion: The presence of variations in extracranial venous anatomy does not influence clinical outcomes over the 5-year follow-up in MS patients. abstract_id: PUBMED:26099795 An anatomy-based lumped parameter model of cerebrospinal venous circulation: can an extracranial anatomical change impact intracranial hemodynamics? Background: The relationship between extracranial venous system abnormalities and central nervous system disorders has been recently theorized. In this paper we delve into this hypothesis by modeling the venous drainage in brain and spinal column areas and simulating the intracranial flow changes due to extracranial morphological stenoses. Methods: A lumped parameter model of the cerebro-spinal venous drainage was created based on anatomical knowledge and vessel diameters and lengths taken from the literature. Each vein was modeled as a hydraulic resistance, calculated through Poiseuille's law. The inputs of the model were arterial flow rates of the intracranial, vertebral and lumbar districts. The effects of the obstruction of the main venous outflows were simulated. A database comprising 112 Multiple Sclerosis patients (Male/Female = 42/70; median age ± standard deviation = 43.7 ± 10.5 years) was retrospectively analyzed. Results: The flow rate of the main veins estimated with the model was similar to the measures of 21 healthy controls (Male/Female = 10/11; mean age ± standard deviation = 31 ± 11 years), obtained with a 1.5 T Magnetic Resonance scanner. The intracranial reflux topography predicted with the model in cases of internal jugular vein diameter reduction was similar to those observed in the patients with internal jugular vein obstacles. Conclusions: The proposed model can predict physiological and pathological behaviors with good fidelity. Despite the simplifications introduced in cerebrospinal venous circulation modeling, the key anatomical feature of the lumped parameter model allowed for a detailed analysis of the consequences of extracranial venous impairments on intracranial pressure and hemodynamics. abstract_id: PUBMED:23761866 Incidence and distribution of extravascular compression of extracranial venous pathway in patients with chronic cerebrospinal venous insufficiency and multiple sclerosis. Objective: To examine the incidence and distribution of extravascular compression of the extracranial venous pathway (the jugular and/or azygous veins) in multiple sclerosis patients with chronic cerebrospinal venous insufficiency evaluated by multislice computer tomographic angiography. Methods And Results: Study group consisted of 51 consecutive patients with multiple sclerosis in whom chronic cerebrospinal venous insufficiency was diagnosed by color Doppler sonography (CDS). Multislice computer tomographic angiography was performed in all patients, and it revealed significant extravascular compression (>70%) of extracranial venous pathway in 26 patients (51%), while in 25 patients (49%) no significant extravascular compression was seen.
Extracranial compression due to transverse processus of cervical vertebrae was seen in 23 patients, carotid bulb compression was seen in two patients, and in one case, compression presented as a thoracic outlet syndrome. Conclusion: Our data indicate that extravascular compression of the extracranial venous pathway is frequent in multiple sclerosis patients with chronic cerebrospinal venous insufficiency, and that it is mainly due to compression caused by transverse processus of cervical vertebrae. Further studies are needed to evaluate potential clinical implications of this phenomenon. abstract_id: PUBMED:23953830 Comparison of intravascular ultrasound with conventional venography for detection of extracranial venous abnormalities indicative of chronic cerebrospinal venous insufficiency. Purpose: To investigate prevalence of extracranial abnormalities in azygos and internal jugular (IJ) veins using conventional venography and intravascular ultrasound (IVUS) in patients with multiple sclerosis (MS) being evaluated for chronic cerebrospinal venous insufficiency, a condition of vascular hemodynamic dysfunction. Materials And Methods: PREMiSe (Prospective Randomized Endovascular therapy in Multiple Sclerosis) is a venous angioplasty study that enrolled 30 patients with relapsing MS. The patients fulfilled two or more venous hemodynamic extracranial Doppler sonography screening criteria. Phase I of the study included 10 patients and was planned to assess safety and standardize venography, IVUS, and angioplasty and blinding procedures; phase II enrolled 20 patients and further validated diagnostic assessments using the two invasive techniques. Venography was considered abnormal when ≥ 50% lumen-diameter restriction was detected. IVUS was considered abnormal when ≥ 50% lumen-diameter restriction, intraluminal defects, or reduced pulsatility was detected. Results: No venography-related or IVUS-related complications, including vessel rupture, thrombosis, or side effects of contrast media were recorded among the 30 study patients. IVUS-detected venous abnormalities, including chronic, organized, thrombus-like inclusions were observed in 85% of azygos, 50% of right IJ, and 83.3% of left IJ veins, whereas venography demonstrated stenosis of ≥ 50% in 50% of azygos, 55% of right IJ, and 72% of left IJ veins. Sensitivity of venography for detecting IVUS abnormalities was 52.9%, 73.3%, and 80% for the azygos, left IJ, and right IJ veins, respectively. Conclusions: IVUS assessment of azygos and IJ veins showed a higher rate of venous abnormalities than venography. IVUS provides a diagnostic advantage over conventional venography in detecting extracranial venous abnormalities indicative of chronic cerebrospinal venous insufficiency. abstract_id: PUBMED:20351666 Extracranial Doppler sonographic criteria of chronic cerebrospinal venous insufficiency in the patients with multiple sclerosis. Aim: The aim of this open-label study was to assess extracranial Doppler criteria of chronic cerebrospinal venous insufficiency in multiple sclerosis patients. Methods: Seventy patients were assessed: 49 with relapsing-remitting, 5 with primary progressive and 16 with secondary progressive multiple sclerosis. The patients were aged 15-58 years and they suffered from multiple sclerosis for 0.5-40 years. Sonographic signs of abnormal venous outflow were detected in 64 patients (91.4%). 
Results: We found at least two of four extracranial criteria in 63 patients (90.0%), confirming that multiple sclerosis is strongly associated with chronic cerebrospinal venous insufficiency. Additional transcranial investigations may increase the rate of patients found positive in our survey. Reflux in internal jugular and/or vertebral veins was present in 31 cases (42.8%), stenosis of internal jugular veins in 61 cases (87.1%), not detectable flow in internal jugular and/or vertebral veins in 37 cases (52.9%) and negative difference in cross-sectional area of the internal jugular vein assessed in the supine vs. sitting position in 28 cases (40.0%). Flow abnormalities in the vertebral veins were found in 8 patients (11.4%). Pathologic structures (membranaceous or netlike septa, or inverted valves) in the junction of internal jugular vein with brachiocephalic vein were found in 41 patients (58.6%), in 15 patients (21.4%) on one side only and in 26 patients (37.1%) bilaterally. Conclusion: Multiple sclerosis is highly correlated with chronic cerebrospinal venous insufficiency. These abnormalities in the extracranial veins draining the central nervous system can exist in various combinations. The most common pathology in our patients was the presence of an inverted valve or another pathologic structure (like membranaceous or netlike septum) in the area of junction of the IJV with the brachiocephalic vein. abstract_id: PUBMED:27384420 Extracranial Venous Drainage Pattern in Multiple Sclerosis and Healthy Controls: Application of the 2011 Diagnostic Criteria for Chronic Cerebrospinal Venous Insufficiency. The etiology of multiple sclerosis (MS) is still largely unknown and it has been proposed that an impaired venous drainage from the central nervous system, defined as chronic cerebrospinal venous insufficiency (CCSVI), may play a role in this. We investigated the prevalence of extracranial venous drainage pattern alterations in a cohort of MS patients based on the 2011 revised diagnostic criteria for CCSVI. Thirty-nine MS patients and 18 healthy subjects underwent blinded extra-cranial venous echo-color Doppler sonography to reveal the presence of CCSVI. There was no statistically significant difference between MS patients and healthy controls regarding CCSVI prevalence (p value = 0.53). The results challenge the hypothesis that CCSVI plays a primary role in the pathogenesis of MS. abstract_id: PUBMED:23060482 Cerebral venous outflow resistance and interpretation of cervical plethysmography data with respect to the diagnosis of chronic cerebrospinal venous insufficiency. Objective: While chronic cerebrospinal venous insufficiency (CCSVI) can be characterized using cervical plethysmography, much remains unknown about the haemodynamics associated with this procedure. The aim of the study was therefore to gain a deeper understanding of the observed haemodynamics. Method: Forty healthy controls and 44 CCSVI patients underwent cervical plethysmography, which involved placing a strain-gauge collar around their necks and tipping them from the upright (90°) to supine position (0°) in a chair. Once stabilized, they were returned to the upright position, allowing blood to drain from the neck. A mathematical model was used to calculate the hydraulic resistance of the extracranial venous system for each subject in the study.
Results: The mean hydraulic resistance of the extracranial venous system was 10.28 (standard deviation [SD] 5.14) mmHg.s/mL in the healthy controls and 16.81 (SD 9.22) in the CCSVI patients (P < 0.001). Conclusions: The haemodynamics of the extracranial venous system are greatly altered in CCSVI patients. abstract_id: PUBMED:24344725 Is there a link between the extracranial venous system and central nervous system pathology? The extracranial venous system is complex and variable between individuals. Until recently, these variations were acknowledged as developmental variants and were not considered pathological findings. However, in the last decade, the presence and severity of uni- or bi-lateral jugular venous reflux (JVR) was linked to several central nervous system (CNS) disorders such as transient global amnesia, transient monocular blindness, cough headache, primary exertional headache and, most recently, to Alzheimer's disease. The most recent introduction of a composite criteria-based vascular condition named chronic cerebrospinal venous insufficiency (CCSVI), which was originally linked to multiple sclerosis, increased the interest in better understanding the role of the extracranial venous system in the pathophysiology of CNS disorders. The ultimate cause-consequence relationship between these conditions and CNS disorders has not been firmly established and further research is needed. The purpose of this article collection in BMC Medicine and BMC Neurology is to synthesize current concepts and most recent findings concerning the evaluation, etiology, pathophysiology and clinical relevance of the potential involvement of the extracranial venous system in the pathology of multiple CNS disorders and in aging. abstract_id: PUBMED:24139135 Multimodal noninvasive and invasive imaging of extracranial venous abnormalities indicative of CCSVI: results of the PREMiSe pilot study. Background: There is no established noninvasive or invasive diagnostic imaging modality at present that can serve as a 'gold standard' or "benchmark" for the detection of the venous anomalies, indicative of chronic cerebrospinal venous insufficiency (CCSVI). We investigated the sensitivity and specificity of 2 invasive vs. 2 noninvasive imaging techniques for the detection of extracranial venous anomalies in the internal jugular veins (IJVs) and azygos vein/vertebral veins (VVs) in patients with multiple sclerosis (MS). Methods: The data for this multimodal imaging comparison pilot study was collected in phase 2 of the "Prospective Randomized Endovascular therapy in Multiple Sclerosis" (PREMiSe) study using standardized imaging techniques. Thirty MS subjects were screened initially with Doppler sonography (DS), out of which 10 did not fulfill noninvasive screening procedure requirements on DS that consisted of ≥2 venous hemodynamic extracranial criteria. Accordingly, 20 MS patients with relapsing MS were enrolled into the multimodal diagnostic imaging study. For magnetic resonance venography (MRV), IJVs abnormal findings were considered absent or pinpoint flow, whereas abnormal VVs flow was classified as absent. Abnormalities of the VVs were determined only using non-invasive testing. Catheter venography (CV) was considered abnormal when ≥50% lumen restriction was detected, while intravascular ultrasound (IVUS) was considered abnormal when ≥50% restriction of the lumen or intra-luminal defects or reduced pulsatility was found.
Non-invasive and invasive imaging modality comparisons between left, right and total IJVs and between the VVs and azygos vein were performed. Because there is no reliable way of non-invasively assessing the azygos vein, the VVs abnormalities detected by the non-invasive testing were compared to the azygos abnormalities detected by the invasive testing. All image modalities were analyzed in a blinded manner by more than one viewer, upon which consensus was reached. The sensitivity and specificity were calculated using contingency tables denoting the presence or absence of vein-specific abnormality findings between all imaging modalities used individually as the benchmark. Results: The sensitivity of CV + IVUS was 68.4% for the right and 90% for the left IJV and 85.7% for the azygos vein/VVs, compared to venous anomalies detected on DS. Compared to the venous anomalies detected on MRV, the sensitivity of CV + IVUS was 71.4% in right and 100% in left IJVs and 100% in the azygos vein/VVs; however, the specificity was 38.5%, 38.9% and 11.8%, respectively. The sensitivity between the two invasive imaging techniques, used as benchmarks, ranged from 72.7% for the right IJV to 90% for the azygos vein but the IVUS showed a higher rate of venous anomalies than the CV. There was excellent correspondence between identifying collateral veins on MRV and CV. Conclusions: Noninvasive DS screening for the detection of venous anomalies indicative of CCSVI may be a reliable approach for identifying patients eligible for further multimodal invasive imaging testing of the IJVs. However, the noninvasive screening methods were inadequate to depict the total amount of azygos vein/VVs anomalies identified with invasive testing. This pilot study, with limited sample size, shows that both a non-invasive and invasive multimodal imaging diagnostic approach should be recommended to depict a range of extracranial venous anomalies indicative of CCSVI. However, the lack of invasive testing on the study subjects whose results were negative on the DS screening and of healthy controls limits further generalizability of our findings. In addition, the findings from the 2 invasive techniques confirmed the existence of severe extracranial venous anomalies that significantly impaired normal blood outflow from the brain in this group of MS patients. Answer: The question of whether extracranial venous abnormalities are a true pathological finding in patients with multiple sclerosis (MS) or merely an anatomical variant has been a subject of investigation. Studies have shown that extracranial venous anomalies, such as asymmetry and stenosis of the internal jugular veins (IJVs) and prominence of collateral veins, are common in the general population without MS, suggesting that these findings may not be exclusive to MS patients (PUBMED:27011374). Furthermore, a longitudinal study over 5 years found no association between variations in extracranial venous anatomy and clinical outcomes in MS patients, indicating that these variations do not influence disease progression or relapse rates (PUBMED:31185944). A lumped parameter model of cerebrospinal venous circulation has been developed to simulate the impact of extracranial morphological stenoses on intracranial hemodynamics. This model can predict physiological and pathological behaviors, suggesting that extracranial venous impairments could potentially affect intracranial pressure and hemodynamics (PUBMED:26099795). However, the clinical significance of these findings remains uncertain.
Studies using multimodal imaging techniques, such as multislice computer tomographic angiography and intravascular ultrasound (IVUS), have detected a higher rate of venous abnormalities in MS patients compared to conventional venography, indicating that more sensitive imaging modalities may reveal a greater prevalence of extracranial venous anomalies in MS patients (PUBMED:23761866, PUBMED:23953830). Despite this, the overall evidence does not firmly establish a cause-consequence relationship between extracranial venous abnormalities and MS. Research has also explored the prevalence of chronic cerebrospinal venous insufficiency (CCSVI) in MS patients using Doppler sonography, with some studies finding a high correlation between MS and CCSVI (PUBMED:20351666). However, other studies have not found a statistically significant difference in CCSVI prevalence between MS patients and healthy controls, challenging the hypothesis that CCSVI plays a primary role in the pathogenesis of MS (PUBMED:27384420). In conclusion, while extracranial venous abnormalities are observed in MS patients, the current body of evidence does not conclusively support these abnormalities as being pathologically significant for MS. They may represent anatomical variants rather than disease-specific pathology.
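The lumped parameter model referenced in the answer above (PUBMED:26099795) treats each vein as a Poiseuille-law hydraulic resistance fed by arterial inflow rates, which is why a focal reduction in lumen radius can redistribute flow toward collateral pathways. The Python sketch below shows only that single resistance formula; the vessel dimensions, blood viscosity value, and helper name are illustrative assumptions, not parameters published by the study.

import math

# Poiseuille-type hydraulic resistance of a cylindrical vessel segment:
# R = 8 * mu * L / (pi * r^4), so halving the radius raises R by a factor of 16.
def poiseuille_resistance(radius_m, length_m, viscosity_pa_s=3.5e-3):
    return 8 * viscosity_pa_s * length_m / (math.pi * radius_m ** 4)

# Hypothetical internal jugular segment, about 20 cm long, with a normal and a
# 50%-narrowed radius (rough textbook-style figures chosen for illustration).
length_m = 0.20
for label, radius_m in [("normal radius 9.0 mm", 9.0e-3), ("stenosed radius 4.5 mm", 4.5e-3)]:
    print(f"{label}: R = {poiseuille_resistance(radius_m, length_m):.3e} Pa*s/m^3")

The fourth-power dependence on radius in this formula is why both the modelling study and the plethysmography abstract above (PUBMED:23060482) can attribute large changes in outflow resistance to relatively modest anatomical narrowing.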
Instruction: Is decreased thyroid echogenity a good indicator of thyroid autoimmune disorder? Abstracts: abstract_id: PUBMED:17063800 Is decreased thyroid echogenity a good indicator of thyroid autoimmune disorder? Introduction: Thyroid gland with mildly decreased or significantly decreased echogenity indicates a possible autoimmune disorder even before the first symptoms, i.e. change in laboratory tests measuring the level of thyroid hormones and antibodies to thyroid antigens occur. Target: to consider changes in thyroid gland echogenity suspecting thyroid autoimmune disorder and to determine antibodies to thyroid antigens in the respective type of thyroid echogenity (increased, normal, mildly decreased or significantly decreased) to consider the activity of autoimmune thyropathies related to echogenity and to compare these factors. Methods: Echogenity of the thyroid gland was examined in a randomly selected population (n = 1 055, 360 male, 695 female) in 11 regions of the Czech Republic, all presented with urinary iodine concentration > 100 microg/L of urine. The echogenity was determined on a 4-level scale as increased (1), normal (0), mildly decreased (-1) and significantly decreased (-2). Texture of thyroid was evaluated on a 2-level scale as homogenous or non-homogenous. For the evaluation of the relation between echogenity type (1 to -2) and TgAb, and between the type of echogenity and TPOAb, frequency analysis (log-linear models) was used, i.e. the complete model was compared with the measured values. Results: The selected adults (695 female, 360 male) with urinary iodine concentration > 100 microg/L of urine presented with increased echogenity in 2 females (0.28%) and 1 male (0.28%), normal echogenity in 281 females (40.42%) and 206 males (57.22%), mildly decreased echogenity in 288 females (41.43%) and 128 males (35.56%) and significantly decreased echogenity in 124 females (17.84%) and 25 males (6.95%). The biggest group, both in males and in females, presented with normal and mildly decreased echogenity. Homogenous thyroid gland structure was found in 223 females (32.08%) and 220 males (61.11%). Non-homogenous texture was found in 472 females (67.92%) and 140 males (38.89%). Frequency analysis both in males and in females was focused on: 1. relation between the echogenity (ECHO) and TgAb: in females with positive TgAb (14.23%), significant relation to ECHO can be seen (p < 0.0001), in contradiction to males; 2. relation between the echogenity (ECHO) and TPOAb: this relation is very significant both in males and in females (p < 0.0001); 3. mutual relation between TgAb and TPOAb: both in males and in females very significant (p < 0.0001); positive relation between antibodies can be seen. Positive presence of antibodies can be found less frequently, negative presence of both antibodies is more frequent; 4. relation between the echogenity, TgAb and TPOAb: no statistical significance was found. Conclusion: Homogenous thyroid gland structure was mainly found in males and, on the contrary, non-homogenous structure in females. In 52.7% of adults with significantly decreased echogenity, autoimmune disorder was confirmed in laboratory tests at the same time. With echogenity increasing, TgAb and TPOAb decreased, vice versa. Sonography, evaluating decreased echogenity, can be an early indicator of serious thyropathies before function parameters and clinical symptoms appear.
Detected risky adults with sonographic signs of autoimmune disorder have to be monitored and respective treatment considered and started at the very first occurrence of positive antibodies even if the function is still normal. abstract_id: PUBMED:9403122 Thyroid diseases in the elderly. The ageing thyroid is associated with a number of morphological and functional changes, such as decreased serum T3 and mean thyroid-stimulating hormone concentrations, that are to some extent independent of intercurrent non-thyroidal illnesses. All thyroid diseases, including clinical and subclinical hypo- and hyperthyroidism, non-toxic nodular goitre and thyroid cancer, are encountered in the elderly, but their prevalence and clinical expression differ from those observed in younger patients. In the elderly, autoimmune hypothyroidism is particularly prevalent, hyperthyroidism is mainly characterized by cardiovascular symptoms and is frequently due to toxic nodular goitres, and differentiated thyroid carcinoma is more aggressive. The interpretation of thyroid function tests is difficult in old individuals, because of age-associated changes in thyroid function and frequent alterations secondary to non-thyroidal illnesses and/or drugs. Treatment of thyroid disease deserves special attention in old patients because of the increased risk of complications. abstract_id: PUBMED:26098657 Significance of selenium in thyroid physiology and pathology. Selenium is a pivotal element in maintaining homeostasis of the human body. It is capable of exerting an influence on immunological responses, cell growth and viral defence. Nevertheless, it is mostly required for proper thyroid function. Twenty-five selenoproteins have been described, which play various roles in the human body. Selenium is an essential particle in the active site of enzymes such as GPXs (glutathione peroxidases), Ds (deiodinases) and TRs (thioredoxin reductases). Owing to this, it has a fundamental importance in the synthesis and function of thyroid hormones, and protects cells against free radicals and oxidative damage. Intake of selenium necessary to maintain suitable selenoenzyme activity ranges from 60 μg to 75 μg per day. Selenium deficiency contributes to decreased activity of GPXs, which can lead to oxidative damage, or Ds, which is connected with impaired thyroid activity. Moreover, a low selenium concentration causes autoimmune processes in the thyroid gland, thus selenium deficiency is essential in the pathogenesis of autoimmune thyroiditis or Graves' disease. Because of regulation of the cell cycle, a decreased concentration of selenium impacts on the development of thyroid cancer. abstract_id: PUBMED:28536577 Thyroid Autoimmunity: Role of Anti-thyroid Antibodies in Thyroid and Extra-Thyroidal Diseases. Autoimmune diseases have a high prevalence in the population, and autoimmune thyroid disease (AITD) is one of the most common representatives. Thyroid autoantibodies are not only frequently detected in patients with AITD but also in subjects without manifest thyroid dysfunction. The high prevalence raises questions regarding a potential role in extra-thyroidal diseases. This review summarizes the etiology and mechanism of AITD and addresses prevalence of antibodies against thyroid peroxidase, thyroid-stimulating hormone receptor (TSHR), and anti-thyroglobulin and their action outside the thyroid.
The main issues limiting the reliability of the conclusions drawn here include problems with different specificities and sensitivities of the antibody detection assays employed, as well as potential confounding effects of altered thyroid hormone levels, and lack of prospective studies. In addition to the well-known effects of TSHR antibodies on fibroblasts in Graves' disease (GD), studies speculate on a role of anti-thyroid antibodies in cancer. All antibodies may have a tumor-promoting role in breast cancer carcinogenesis despite anti-thyroid peroxidase antibodies having a positive prognostic effect in patients with overt disease. Cross-reactivity with lactoperoxidase leading to induction of chronic inflammation might promote breast cancer, while anti-thyroid antibodies in manifest breast cancer might be an indication for a more active immune system. A better general health condition in older women with anti-thyroid peroxidase antibodies might support this hypothesis. The different actions of the anti-thyroid antibodies correspond to differences in cellular location of the antigens, titers of the circulating antibodies, duration of antibody exposure, and immunological mechanisms in GD and Hashimoto's thyroiditis. abstract_id: PUBMED:26825072 Thyroid nodules and thyroid autoimmunity in the context of environmental pollution. Evidence suggests that in most industrialized countries autoimmune disorders, including chronic lymphocytic thyroiditis, are increasing. This increase parallels the one regarding differentiated thyroid cancer, the increment of which is mainly due to the papillary histotype. A number of studies have pointed to an association between chronic lymphocytic thyroiditis and differentiated thyroid cancer. The upward trend of these two thyroid diseases is sustained by certain environmental factors, such as polluting substances acting as endocrine disrupting chemicals. Herein we will review the experimental and clinical literature that highlights the effects of environmental and occupational exposure to polluting chemicals in the development of autoimmune thyroid disease or differentiated thyroid cancer. Stakeholders, starting from policymakers, should become more sensitive to the consequences for the thyroid resulting from exposure to EDC. Indeed, the economic burden resulting from such consequences has not been quantified thus far. abstract_id: PUBMED:20381375 Cytokines, thyroid diseases and thyroid cancer. Cytokines are molecules that influence activation, growth, and differentiation of several target cells. They are proinflammatory mediators, regulate the systemic inflammatory response, playing a crucial role in autoimmune thyroid diseases, and modulate development and growth of both normal and neoplastic thyroid cells. In addition, cytokines, as well as chemokines, have been shown to generate antitumor response. In patients with thyroid cancer, cytokines are useful as serum biomarkers, and should be a part of multi-analyte assay in the clinical evaluation of patients with indeterminate fine-needle aspiration cytology. Finally, several cytokines, such as interleukin-6 (IL-6), leukemia inhibiting factor (LIF), and thyroid transcription factor-1 (TTF-1) are expressed in thyroid cancer cell lines, and they can be used for evaluating the inhibitory effects of several drugs in redifferentiation therapies. This review reports the latest advances in defining the actions of cytokines, and summarizes the relationship between cytokines, thyroid diseases and thyroid cancer.
abstract_id: PUBMED:33914231 Iodoprophylaxis and thyroid autoimmunity: an update. Adequate iodine intake is necessary for normal thyroid function. Iodine deficiency is associated with serious complications, but also iodine excess can lead to thyroid dysfunction, and iodine supplementation aimed to prevent iodine deficiency disorders has been associated with development of thyroid autoimmunity. The epidemiology of thyroid diseases has undergone profound changes since the implementation of iodoprophylaxis, notably by means of iodine-enriched salt, specifically resulting in decreased prevalence of goiter and neonatal hypothyroidism, improved cognitive function development in infancy, and reduced incidence of more aggressive forms of thyroid cancer. The main question we address with this review is the clinical relevance of the possible effect on autoimmunity exerted by the use of iodine-enriched salt to correct iodine deficiency. In animal models, exogenous iodine is able to trigger or exacerbate thyroid autoimmunity, but it is still not clear whether the observed immunological changes are due to a direct effect of iodine on immune response, or whether they represent a secondary response to a toxic effect of iodine on thyroid tissue. Previous iodine status of a population seems to influence the functional thyroid response to increased iodine intake and possibly the development of thyroid autoimmunity. Moreover, the prevalence of thyroid antibodies, regarded as hallmark of autoimmune thyroid disease, varies between populations under the influence of genetic and environmental factors, and the presence of thyroid antibodies does not always coincide with the presence of thyroid disease or its future development. In addition, the incidence of autoimmune diseases shows a general increasing trend in the last decades. For all these reasons, available data are quite heterogeneous and difficult to analyze and compare. In conclusion, available data from long-term population surveys show that a higher than adequate population iodine intake due to a poorly controlled program of iodine prophylaxis could induce thyroid dysfunction, including thyroid autoimmunity mostly represented by euthyroid or subclinical hypothyroid autoimmune thyroiditis. Close monitoring iodine prophylaxis is therefore advised to ensure that effects of both iodine deficiency and iodine excess are avoided. abstract_id: PUBMED:19533052 Thyroid diseases and pregnancy Thyroid diseases in pregnancy must be recognized as a specific challenge for the clinician. Any pregnancy is causing alterations in thyroid hormone metabolism which have to be differentiated from pathologic states of thyroid function. Any thyroid disease of the mother with disturbances in the functional state of the gland could induce an adverse influence on the course of pregnancy. Furthermore, it can be associated with adverse consequences on fetal development. Especially hypothyroidism has to be avoided during pregnancy due to a danger of affected neurocognitive development of the offspring. Yet also maternal hyperthyroidism can lead to impairments in the course of pregnancy and to fetal thyroid dysfunction. Further clinical attention should be given to thyroid autoimmunity. There is a clear relationship between autoimmune thyroid disease and decreased fertility and an increase in the rate of spontaneous miscarriages. Furthermore, it displays an increased risk for the manifestation of postpartum thyroiditis. 
The management of nodular thyroid disease and malignancy does not differ from that of nonpregnant women/patients. Thyroid scintiscan and radioiodine therapy must be avoided during pregnancy and lactation. This review deals with the broad variety of thyroid disorders and function disturbances during and after pregnancy. All described diagnostic and therapeutic procedures are based upon the recent Clinical Practice Guideline of the Endocrine Society published in August 2007. abstract_id: PUBMED:29947174 Genome-Wide Association Studies of Autoimmune Thyroid Diseases, Thyroid Function, and Thyroid Cancer. Thyroid diseases, including autoimmune thyroid diseases and thyroid cancer, are known to have high heritability. Family and twin studies have indicated that genetics plays a major role in the development of thyroid diseases. Thyroid function, represented by thyroid stimulating hormone (TSH) and free thyroxine (T4), is also known to be partly genetically determined. Before the era of genome-wide association studies (GWAS), the ability to identify genes responsible for susceptibility to thyroid disease was limited. Over the past decade, GWAS have been used to identify genes involved in many complex diseases, including various phenotypes of the thyroid gland. In GWAS of autoimmune thyroid diseases, many susceptibility loci associated with autoimmunity (human leukocyte antigen [HLA], protein tyrosine phosphatase, non-receptor type 22 [PTPN22], cytotoxic T-lymphocyte associated protein 4 [CTLA4], and interleukin 2 receptor subunit alpha [IL2RA]) or thyroid-specific genes (thyroid stimulating hormone receptor [TSHR] and forkhead box E1 [FOXE1]) have been identified. Regarding thyroid function, many susceptibility loci for levels of TSH and free T4 have been identified through genome-wide analyses. In GWAS of differentiated thyroid cancer, associations at FOXE1, MAP3K12 binding inhibitory protein 1 (MBIP)-NK2 homeobox 1 (NKX2-1), disrupted in renal carcinoma 3 (DIRC3), neuregulin 1 (NRG1), and pecanex-like 2 (PCNXL2) have been commonly identified in people of European and Korean ancestry, and many other susceptibility loci have been found in specific populations. Through GWAS of various thyroid-related phenotypes, many susceptibility loci have been found, providing insights into the pathogenesis of thyroid diseases and disease co-clustering within families and individuals. abstract_id: PUBMED:10483262 Thyroid biopsy in the auto-immune thyroid diseases. There are three types of thyroid biopsy: fine needle biopsy, large needle biopsy and open biopsy. The aim of thyroid biopsy is not the diagnosis of auto-immune thyroid diseases itself, but the differential diagnosis of coexisting nodular goiter. Though fine needle aspiration biopsy cytology (FNABC) is a safe and valuable method, it has limitations. Large needle biopsy plays only a small part in the thyroid field. In cases of malignant lymphoma or anaplastic carcinoma, open biopsy is recommended. With the use of ultrasound guidance, the accuracy of aspiration biopsy cytology has become high. Immunohistochemical methods and molecular biological techniques are now also available. Answer: Yes, decreased thyroid echogenicity is considered a good indicator of thyroid autoimmune disorder. A study examining the echogenicity of the thyroid gland in a randomly selected population found that adults with significantly decreased echogenicity had a confirmed autoimmune disorder in 52.7% of cases as per laboratory tests performed at the same time.
The study also noted that with increasing echogenicity, the levels of thyroid antibodies (TgAb and TPOAb) decreased, and vice versa. This suggests that sonography, which evaluates decreased echogenicity, can be an early indicator of serious thyroid pathologies, such as autoimmune disorders, before functional parameters and clinical symptoms appear (PUBMED:17063800).
Instruction: Do Socioeconomic Factors and Race Determine the Likelihood of Breast-Conserving Surgery? Abstracts: abstract_id: PUBMED:27297238 Do Socioeconomic Factors and Race Determine the Likelihood of Breast-Conserving Surgery? Background: Racial disparities in the use of breast-conserving surgery (BCS) have been reported and may be due to advanced stage at diagnosis. Our hypothesis was that low-income and ethnic minority patients have an increased tumor size at diagnosis and decreased likelihood of BCS. Patients And Methods: A retrospective review was conducted of early stage breast cancer patients from 10 hospitals in Harris County, Texas, between 2004 and 2011. Clinical stage was calculated on the basis of data from the institutional tumor registries and electronic medical records. Zip code-based socioeconomic factors were downloaded from the US Census Bureau (http://www.census.gov/). Linear regression was used to identify predictors of tumor size, and logistic regression was used to identify predictors of BCS. Results: The cohort included 3937 patients, comprising 2546 (65%) whites, 535 (14%) African Americans, 482 (11%) Hispanics, and 374 (10%) Asian/others. Multivariate linear regression demonstrated socioeconomic status (SES), younger age, African American, Hispanic race, and hormone receptor-negative tumors to be associated with increased tumor size at diagnosis (P &lt; .05). Hispanic and Asian/other race, larger tumor size, combined estrogen receptor-negative/progesterone receptor-negative tumors were associated with not receiving BCS. Conclusion: Race and SES were both associated with larger tumor size at diagnosis. Larger tumor size, negative hormone receptor status, and Hispanic and Asian race were associated with lack of receipt of BCS. Breast cancer screening programs should target both minority and low SES groups. Rates of BCS should be interpreted cautiously when used as a quality metric because of the multiple factors, including tumor size and biology, contributing to its use. abstract_id: PUBMED:35284315 Socioeconomic and clinical factors affecting the proportion of breast conserving surgery in Chinese women with breast cancer. Background: This study investigated the socioeconomic and clinical factors affecting the proportion of breast conserving surgery (BCS) in China, to improve the proportion and success rate of BCS in Chinese breast cancer patients. Methods: Six hundred and forty breast cancer patients treated with BCS were compared with 700 selected breast cancer patients (controls) treated with modified radical mastectomy (MRM) in Tianjin Medical University Cancer Institute and Hospital from January 2005 to January 2018. Patients' socioeconomic and clinical factors were collected through telephone interviews or face-to-face interviews. A total of 5,660 BCS patients were enrolled to analyze independent factors affecting initial positive margins. Chi-squared test and multiple logistic regressions were used to examine factors associated with BCS. The locoregional recurrence-free survival (LRRFS), distant metastasis-free survival (DMFS), and overall survival (OS) were calculated using the Kaplan-Meier method and the survival distribution between BCS and MRM groups was compared by log-rank test. Results: Breast cancer patients who were younger, lived in urban areas, had medical insurance, and higher levels of education and Personal income were more likely to choose BCS. We also observed that patients of Han nationality were more likely to choose BCS. 
Univariate analysis showed that the frozen section analysis (FSA) positive margin was significantly correlated with tumor distance from the nipple, preoperative magnetic resonance imaging (MRI) examination, T stage, pathological subtype, and lymphovascular invasion (LVI). Multivariate analysis showed the distance from the nipple, T stage, pathological subtype, and LVI, and no preoperative MRI examination were independent predictors of positive resection margins. Multivariate analysis of the correlation between MRI findings and positive resection margins revealed that tumor size, non-mass enhancement (NME), and malignant enhancement surrounding the tumor were independent predictors of positive resection margins. Conclusions: In China, socioeconomic factors largely influence the choice of surgical procedures for breast cancer patients. A gradual reduction in the influence of socioeconomic factors on the proportion of BCS is recommended. Furthermore, preoperative MRI should be encouraged in patients preparing for BCS. Clinicopathological characteristics and MRI findings are significantly associated with a positive resection margin in breast cancer patients. abstract_id: PUBMED:11929949 Race, socioeconomic status, and breast cancer treatment and survival. Background: Previous studies have found that African-American women are more likely than white women to have late-stage breast cancer at diagnosis and shortened survival. However, there is considerable controversy as to whether these differences in diagnosis and survival are attributable to race or socioeconomic status. Our goal was to disentangle the influence of race and socioeconomic status on breast cancer stage, treatment, and survival. Methods: We linked data from the Metropolitan Detroit Surveillance, Epidemiology, and End Results (SEER)(1) registry to Michigan Medicaid enrollment files and identified 5719 women diagnosed with breast cancer, of whom 593 were insured by Medicaid. We first calculated the unadjusted odds ratios (ORs) associated with race, Medicaid insurance, and poverty for breast cancer stage at diagnosis, breast cancer treatment, and death. We then estimated the ORs of having late-stage breast cancer at diagnosis, breast-conserving surgery, no surgery, and death using logistic regression after controlling for clinical and nonclinical factors. All statistical tests were two-sided. Results: Before controlling for Medicaid enrollment and poverty, African-American women had a higher likelihood than white women of each unfavorable breast cancer outcome. However, after controlling for covariates, African-American women were not statistically significantly different from white women on most outcomes except for surgical choice. African-American women were more likely than white women to have no surgery (adjusted OR = 1.62; 95% confidence interval [CI] = 1.11 to 2.37). Among women who had surgery, African-American women were more likely to have breast-conserving surgery than were white women (adjusted OR = 1.63; 95% CI = 1.33 to 1.98). Conclusions: The linkage of Medicaid and SEER data provides more in-depth information on low-income women than has been available in past studies. In our Metropolitan Detroit study population, race was not statistically significantly associated with unfavorable breast cancer outcomes. However, low socioeconomic status was associated with late-stage breast cancer at diagnosis, type of treatment received, and death. 
abstract_id: PUBMED:9010104 The influence of black race and socioeconomic status on the use of breast-conserving surgery for Medicare beneficiaries. Background: This study explores the influence of socioeconomic status (SES) and black race on the use of breast-conserving surgery (BCS) as opposed to mastectomy for early stage breast carcinoma. Methods: A cohort of 41,937 female Medicare inpatients age 65-79 years who had undergone BCS or mastectomy treatment in 1990 for local or regional breast carcinoma was studied. SES was estimated based on the patients' zip code of residence. Results: Greater use of BCS was associated with higher income and increased education as determined by the patients' zip code area (P &lt; 0.001 for each), and with lower vacant housing rates and fewer persons living below the poverty line in the patients' zip code area (P &lt; 0.001 for each). Black women were less likely than women of other races to undergo BCS (odds ratio, 0.80; 95% confidence interval, 0.71-0.91). However, in a multivariate regression model adjusting for stage and urban versus rural residence, income, educational status, and poverty rate remained significant predictors of patient receipt of BCS, whereas black race did not remain an independent predictor of this treatment. Conclusions: Women residing in higher SES areas are more likely to undergo BCS. The reduced use of BCS in black women appears attributable to SES. abstract_id: PUBMED:29159936 Influence of socioeconomic factors and distance to radiotherapy on breast-conserving surgery rates for early breast cancer in regional Australia; implications of change. Aims: Breast conserving surgery rates are affected by many factors including distance to radiotherapy and tumor-related features. Numerous studies have found women who must travel further for radiotherapy are more likely to choose mastectomy and avoid radiotherapy. We examined relationships between socioeconomic group, distance to radiotherapy services and mastectomy rates across a range of rural and metropolitan settings. Methods: We used a dataset extracted from the Evaluation of Cancer Outcomes Barwon South Western Registry, which captured data on new breast cancer diagnoses in the southwest region of Victoria, Australia. Using logistic regression, we modeled treatment choice of women with early breast cancer (mastectomy vs breast conserving surgery) using explanatory variables that included distance to radiotherapy, and area-level socioeconomic data from the Australian Bureau of Statistics, while controlling for clinical factors. Results: Mastectomy was associated with tumor size, nodal burden and younger age at surgery. Distance to a radiotherapy center was also strongly associated with increased rates of mastectomy for women who traveled 100-200 km for radiotherapy (odds ratio = 1.663; P = 0.03) compared to the reference group who were within 100 km of radiotherapy. No socioeconomic differences were seen between the two groups. Conclusion: A strong association between distance to radiotherapy and the type of surgery for early breast cancer was found. Improving access to radiotherapy therefore has the potential to improve breast cancer outcomes for women in regional Australia. abstract_id: PUBMED:31735998 Socioeconomic status differs between breast cancer patients treated with mastectomy and breast conservation, and affects patient-reported preoperative information. Purpose: Breast cancer treatment is reported to be influenced by socioeconomic status (SES). 
Few reports, however, stem from national, equality-based health care systems. The aim of this study was to analyse associations between SES, rates of breast-conserving surgery (BCS), patient-reported preoperative information and perceived involvement in Sweden. Methods: All women operated for primary breast cancer in Sweden in 2013 were included. Tumour and treatment data as well as socioeconomic data were retrieved from national registers. Postal questionnaires regarding preoperative information about breast-conserving options and perceived involvement in the decision-making process had previously been sent to all women receiving mastectomy. Results: Of 7735 women, 4604 (59.5%) received BCS. In addition to regional differences, independent predictors of BCS were being in the middle or higher age groups, having small tumours without clinically involved nodes, being born in Europe outside Sweden, having a higher education than primary school and an intermediate or high income per household. Women with smaller, clinically node-negative tumours felt more often involved in the surgical decision and informed about breast-conserving options (both p &lt; 0.001). In addition, women who perceived that BCS was discussed as an alternative to mastectomy were more often in a partnership (p &lt; 0.001), not born in Sweden (p = 0.035) and had an employment (p = 0.031). Conclusion: Socioeconomic factors are associated with surgical treatment even in a national health care system that is expected to offer all women the same standard of care. This should be taken into account and adapted to in preoperative counselling on surgical options in breast cancer. abstract_id: PUBMED:37543579 Radiotherapy refusal in breast cancer with breast-conserving surgery. Background: Although radiotherapy after breast-conserving surgery has been the standard treatment for breast cancer, some people still refuse to undergo radiotherapy. The aim of this study is to identify risk factors for refusal of radiotherapy after breast-conserving surgery. Methods: To investigate the trend of refusing radiotherapy after breast-conserving surgery in patients with breast cancer using the Surveillance, Epidemiology, and End Results database. The patients were divided into radiotherapy group and radiotherapy refusal group. Survival results were compared using a multivariate Cox risk model adjusted for clinicopathological variables. Multivariate logistic regression was used to analyze the influencing factors of patients refusing radiotherapy after breast-conserving surgery and a nomogram model was established. Results: The study included 87,100 women who underwent breast-conserving surgery for breast cancer between 2010 and 2015. There were 84,948 patients (97.5%) in the radiotherapy group and 2152 patients (2.5%) in the radiotherapy refusal group. The proportion of patients who refused radiotherapy after breast-conserving surgery increased from 2.1% in 2010 to 3.1% in 2015. The Kaplan-Meier survival curve showed that radiotherapy can improve overall survival (p &lt; 0.001) and breast cancer specific survival (p &lt; 0.001) in the patients with breast-conserving surgery. The results of multivariate logistic regression showed that age, income, marital status, race, grade, stage, subtype and chemotherapy were independent factors associated with the refusal of radiotherapy. Conclusions: Postoperative radiotherapy can improve the benefits of breast-conserving surgery. 
Patients who were older, had low income, were divorced, were of white race, had advanced-stage disease, and did not receive chemotherapy were more likely to refuse radiotherapy. abstract_id: PUBMED:29396079 Review of Factors Influencing Women's Choice of Mastectomy Versus Breast Conserving Therapy in Early Stage Breast Cancer: A Systematic Review. We have performed a narrative synthesis. A literature search was conducted between January 2000 and June 2014 in 7 databases. The initial search identified 2717 articles; 319 underwent abstract screening, 67 underwent full-text screening, and 25 final articles were included. This review looked at early stage breast cancer in women only, excluding ductal carcinoma in situ and advanced breast cancer. A conceptual framework was created to organize the central constructs underlying women's choices: clinicopathologic factors, physician factors, and individual factors with subgroups of sociodemographic, geographic, and personal beliefs and preferences. This framework guided our review's synthesis and analysis. We found that larger tumor size and increasing stage were associated with increased rates of mastectomy. The results for age varied, but suggested that old and young extremes of diagnostic age were associated with an increased likelihood of mastectomy. Higher socioeconomic status was associated with higher breast conservation therapy (BCT) rates. Rural residence and increasing distance from radiation treatment facilities were associated with lower rates of BCT. Individual belief factors influencing women's choice of mastectomy (mastectomy being reassuring, avoiding radiation, an expedient treatment) differed from factors influencing choice of BCT (body image and femininity, physician recommendation, survival equivalence, less surgery). Surgeon factors, including female gender, higher case numbers, and individual surgeon practice, were associated with increased BCT rates. The decision-making process for women with early stage breast cancer is complicated and affected by multiple factors. Organizing these factors into central constructs of clinicopathologic, individual, and physician factors may aid health-care professionals to better understand this process. abstract_id: PUBMED:31301148 Race, ethnicity, and socioeconomic factors in cholangiocarcinoma: What is driving disparities in receipt of treatment? Background And Objectives: Race/ethnicity and socioeconomic factors are associated with worse cancer outcomes. Our aim was to determine the association of these factors with receipt of surgery and multimodality therapy for cholangiocarcinoma. Methods: Patients with cholangiocarcinoma in the National Cancer Database were identified. Racial/ethnic groups were defined as non-Hispanic White, non-Hispanic Black, Asian, and Hispanic. Socioeconomic factors were insurance status, income, and education. Results: Of 12 095 patients with non-metastatic cholangiocarcinoma, 42% received surgery. Black race was associated with decreased odds of receiving surgery (odds ratio [OR]: 0.66; P < .001) compared to White patients. Socioeconomic factors accounted for 21% of this disparity. Accounting for socioeconomic and clinicopathologic variables, Black race (OR: 0.73; P < .001), uninsured status (OR: 0.43; P < .001), and Medicaid insurance (OR: 0.63; P < .001) were all associated with decreased receipt of surgery. Of 4808 patients who received surgery, 47% received multimodality therapy.
There were no racial/ethnic or socioeconomic differences in receipt of multimodality therapy once patients accessed surgical care. Similar results were seen in patients with advanced disease who received chemotherapy as primary treatment. Conclusion: Racial/ethnic and socioeconomic disparities exist in treatment for cholangiocarcinoma, although only for primary treatment. In patients who received surgery or chemotherapy, there were no disparities in receipt of multimodality therapy. This emphasizes the need to improve initial access to health care for minority and socioeconomically disadvantaged patients. abstract_id: PUBMED:37265920 Risk Factors for Positive Margins in Breast-Conserving Surgery. Introduction: Breast-conserving surgery (BCS) followed by adjuvant radiotherapy has similar overall survival compared to mastectomy but is associated with higher rates of local recurrence. Positive surgical margins in BCS are the most important predictor of local recurrence. The aim of our study was to assess the risk factors associated with positive margins in women undergoing BCS for breast cancer in order to inform our clinical practice and minimize re-operation rates. Methods: Patients with a diagnosis of breast cancer who underwent BCS from January 2013 to January 2021 were identified from our pathology database and included in the study. All patients underwent a lumpectomy with the removal of additional shaved cavity margins. Statistical analysis was used to assess the effect of patient clinical and pathological risk factors on the rate of positive margins. Results: One hundred and twenty patients underwent BCS for breast cancer. Twenty-four percent of patients had positive margins. Of the 29 patients that underwent subsequent re-excisions, only 13 (45%) had residual disease in the re-excision specimen. Younger age, tumors localized in the lower quadrants, and the presence of an extensive intraductal component within invasive breast cancer increased the risk of positive margins. In addition, positive margins were encountered significantly more often in patients with ductal carcinoma in situ (DCIS) than in those with invasive tumors. Multivariate analysis showed that DCIS and young age were the only factors independently associated with positive margins. Conclusion: Patients with DCIS and younger patients have a higher rate of positive margins during BCS than those with invasive breast cancer. For such patients at higher risk of positive margins, excision of cavity shave margins and intraoperative inking may be done to lower positive margin rates. Preoperative review of breast imaging, core biopsies, and counseling of patients about the likelihood of positive margins is important. Answer: Yes, socioeconomic factors and race do influence the likelihood of breast-conserving surgery (BCS). Studies have shown that socioeconomic status (SES), race, and ethnicity are associated with both the tumor size at diagnosis and the type of surgery chosen or available to breast cancer patients. For instance, a study in Harris County, Texas, found that lower SES and African American and Hispanic race were associated with larger tumor size at diagnosis, while Hispanic and Asian/other race, larger tumor size, and hormone receptor-negative tumors were associated with not receiving BCS. The study suggested that breast cancer screening programs should target minority and low SES groups, and that rates of BCS should be interpreted with caution as a quality metric due to multiple factors influencing its use, including tumor size and biology (PUBMED:27297238).
Similarly, in China, socioeconomic factors such as living in urban areas, having medical insurance, higher levels of education, and personal income were found to influence the choice of BCS over mastectomy. The study recommended reducing the influence of socioeconomic factors on the proportion of BCS and encouraged the use of preoperative MRI in patients preparing for BCS (PUBMED:35284315). In Metropolitan Detroit, after controlling for Medicaid enrollment and poverty, African-American women were not statistically significantly different from white women on most outcomes except for surgical choice. Low socioeconomic status was associated with late-stage breast cancer at diagnosis, type of treatment received, and death (PUBMED:11929949). A study of Medicare beneficiaries also indicated that women residing in higher SES areas were more likely to undergo BCS, and the reduced use of BCS in black women appeared attributable to SES rather than race alone (PUBMED:9010104). In regional Australia, distance to radiotherapy services was a significant factor influencing the choice of surgery, with women traveling longer distances more likely to choose mastectomy over BCS (PUBMED:29159936). In Sweden, socioeconomic factors such as education level and income per household were associated with the rates of BCS, and these factors also influenced patient-reported preoperative information and perceived involvement in the decision-making process (PUBMED:31735998). Lastly, refusal of radiotherapy after BCS was associated with factors such as age, income, marital status, race, grade, stage, subtype, and chemotherapy, indicating that socioeconomic and demographic factors can influence treatment decisions even after surgery (PUBMED:37543579).
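Several of the studies summarized above report adjusted odds ratios with 95% confidence intervals from logistic regression (for example, adjusted OR = 1.62, 95% CI 1.11 to 2.37, for receiving no surgery). As a rough illustration of where such figures come from, the following minimal Python sketch computes an unadjusted odds ratio and a Wald-type 95% confidence interval from a 2x2 table; the counts are hypothetical and invented for the example, and the adjusted odds ratios in the studies additionally control for clinical and socioeconomic covariates within a regression model.

import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    # a = group 1 with outcome, b = group 1 without outcome,
    # c = group 2 with outcome, d = group 2 without outcome.
    # Wald interval: exp(ln(OR) +/- z * SE), with SE = sqrt(1/a + 1/b + 1/c + 1/d).
    odds_ratio = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(odds_ratio) - z * se_log_or)
    upper = math.exp(math.log(odds_ratio) + z * se_log_or)
    return odds_ratio, lower, upper

# Hypothetical counts (illustrative only): 120 of 300 lower-SES patients
# received BCS versus 210 of 340 higher-SES patients.
or_value, lower, upper = odds_ratio_ci(120, 180, 210, 130)
print(f"OR = {or_value:.2f}, 95% CI {lower:.2f} to {upper:.2f}")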
Instruction: Is brain natriuretic peptide a reliable biomarker of hydration status in all peritoneal dialysis patients? Abstracts: abstract_id: PUBMED:24943906 Is brain natriuretic peptide a reliable biomarker of hydration status in all peritoneal dialysis patients? Background: Achievement of euvolemia is a fundamental challenge in the peritoneal dialysis (PD) population. Bioimpedance spectroscopy (BIS) is one of the best techniques for routine assessment of hydration status (HS) in PD, but in recent years, the role of brain natriuretic peptides (BNP) in the assessment of volume status has gained interest. The aim of this study was to investigate the relation between BNP and volume status as measured by BIS in PD patients and to assess how these variables correlate according to the time that a patient has been on PD. Methods: We prospectively studied 68 PD patients from whom measurements of BNP and assessments of HS by BIS were performed every 3 months. Three groups were defined based on HS: group A, measurements of HS < -1.1 liters (underhydrated); group B, measurements of HS between -1.1 and +1.1 liters (normohydrated), and group C, measurements of HS > +1.1 liters (overhydrated). Measurements were also separated according to the time on PD (<6 vs. ≥6 months). Correlation between HS and BNP was performed using Spearman's correlation. Results: We performed a total of 478 measurements of HS and BNP. There was a statistically significant difference in BNP (p < 0.001) among three HS groups, with higher levels of BNP detected in overhydrated patients. We found a positive correlation between HS and BNP (rs = 0.28; p < 0.001) that seemed stronger in the first 6 months on PD (rs = 0.42; p = 0.006). Conclusions: BNP correlated positively with fluid overload measured by HS, and this correlation was stronger in the first 6 months on PD. abstract_id: PUBMED:25231593 Comparing lung ultrasound with bioimpedance spectroscopy for evaluating hydration in peritoneal dialysis patients. Background: Bioimpedance spectroscopy (BIS), ultrasound lung comets (ULC) and serum biomarkers (N-terminal pro-brain natriuretic peptide, NT-proBNP) have all been used to assist clinicians to determine hydration status in dialysis patients. Methods: We performed simultaneous BIS, ULC and NT-proBNP measurements in 27 peritoneal dialysis patients to determine the concordance of the three methods. Results: Patients with evidence of increasing lung congestion (as determined by ultrasound) were more likely to be diabetic, have systolic hypertension and have higher NT-proBNP (r = 0.65, P < 0.0005). Although there was a trend for patients with high ULC to be overhydrated as determined by BIS, this did not reach statistical significance. Moreover, the correlation between BIS and NT-proBNP (though statistically significant at r = 0.47, P < 0.02) appeared to be weaker. Conclusion: BIS and ULC may be complementary, providing different information, whereas BIS may be more specific to hydration. ULC and NT-proBNP may indicate left ventricular failure coexisting with overhydration. abstract_id: PUBMED:22652744 Bioelectrical impedance analysis in the assessment of hydration status in peritoneal dialysis patients. Objective: Assessment of fluid status in chronic peritoneal dialysis (PD) patients is complex. Clinical evaluation based solely on body weight, blood pressure, volume of ultrafiltration (UF) and peripheral edema is insufficient.
A non-invasive test, bioelectrical impedance analysis (BIA), might be of potential benefit. Aim: To test whether BIA correlates with other ancillary markers of extracellular fluid volume, namely B-type natriuretic peptide (BNP), residual renal function (RRF) and UF, and whether BIA provides complementary information in categorizing PD patients vis-à-vis hydration status. Methods: A cross-sectional study of 61 out-patients on chronic PD. Single-frequency BIA measurements of resistance/height were divided into tertiles (lowest: <253 Ω/m; middle: >253 Ω/m and <316 Ω/m; highest: >316 Ω/m). Results: Compared to patients in the highest tertile of BIA (least fluid), patients in the lowest tertile (most fluid) had the highest BNP, RRF and UF (93.5 vs. 55.0 pg/ml, p = 0.029; 850 vs. 300 ml/day, p = 0.05; and 1.75 vs. 1.21 l/day, p = 0.023, respectively). Conclusions: BIA tertiles categorized PD patients who differed in BNP, RRF and UF in a stepwise pattern, suggesting BIA may better inform hydration status and serve as an additional clinical tool in the management of chronic PD patients. abstract_id: PUBMED:32721969 Assessment of Hydration Status in Peritoneal Dialysis Patients: Validity, Prognostic Value, Strengths, and Limitations of Available Techniques. Background: The majority of patients undergoing peritoneal dialysis (PD) suffer from volume overload, and this overhydration is associated with increased mortality. Thus, optimal assessment of volume status in PD is an issue of paramount importance. Patient symptoms and physical signs are often unreliable indexes of true hydration status. Summary: Over the past decades, a quest for a valid, reproducible, and easily applicable technique to assess hydration status has been taking place. Among existing techniques, inferior vena cava diameter measurements with echocardiography and natriuretic peptides such as brain natriuretic peptide and N-terminal pro-B-type natriuretic peptide have not been extensively examined in PD populations; while they have certain advantages, their interpretation is complicated by the underlying cardiac status, and they are not widely available. Bioelectrical impedance analysis (BIA) techniques are the most studied tools for assessing volume overload in PD. Volume overload assessed with BIA has been associated with technique failure and increased mortality in observational studies, but the results of randomized trials on the value of BIA-based strategies to improve volume-related outcomes are contradictory. Lung ultrasound (US) is a recent technique with the ability to identify volume excess in the critical lung area. Preliminary evidence in PD showed that B-lines from lung US correlate with echocardiographic parameters but not with BIA measurements. This review presents the methods currently used to assess fluid status in PD patients and discusses existing data on their validity, applicability, limitations, and associations with intermediate and hard outcomes in this population. Key Message: No method has proved its value as an intervening tool affecting cardiovascular events, technique survival, and overall survival in PD patients. As BIA and lung US estimate fluid overload in different compartments of the body, they can be complementary tools for volume status assessment.
Background: The expanding use of brain natriuretic peptides (BNP and NT-proBNP) testing in patients with end-stage kidney disease has increased our knowledge of relating heart-kidney interactions, but also brought concern of the reliability and usefulness of BNPs in dialysis patients. Methods: The review highlights the most important recent results of BNP research and discusses applicability of BNPs in the evaluation of peritoneal dialysis (PD) patients. Results: Relevant physiological background of BNP with relating aspects of PD treatment are reviewed, along with analysis of BNP measurement limitations and suggestions for rational use in the PD population. Conclusion: To date, interpretation of BNP levels in PD patients is limited in areas of hydration status assessment and optimal dry weight determination. However, elevated levels of BNPs exhibit validated correlation and prognostic value with pathological cardiac structure, cardiovascular adverse events and survival, thus assisting in individual patient risk profiling and deeper understanding of cardiorenal factors involved. abstract_id: PUBMED:22652734 Bioimpedance and brain natriuretic peptide in peritoneal dialysis patients. Assessment of ideal body weight in peritoneal dialysis (PD) patients is important for clinical practice. Fluid overload may produce hypertension, reduced arterial distensibility, left ventricular hypertrophy. All these are risk factors for mortality in PD patients: cardio- and cerebrovascular events are the main causes of morbidity and mortality in PD population. Nowadays, a clear and widely accepted definition of ideal body weight in PD patients does not exist. Probably the ideal body weight is the weight at which the extra cellular volume is normal. Many different tools have been used to assess the hydration status in dialysis patients. Ultrasonic evaluation of inferior vena cava diameter only assesses intravascular volume, and is also influenced by diastolic dysfunction and is thus a reflection of preload and not of tissue hydration. Direct measurement of extra cellular and total body water by dilution methods is considered as the golden standard, but these techniques are laborious and expensive. Parameters, such as brain natriuretic peptide (BNP) or NT-proBNP can reflect changes in hydration status and may help the nephrologist to estimate it. Natriuretic peptides are influenced both by preload and ventricular abnormalities and in patients with renal failure accumulation can occur. Bioimpedance is an accurate, reproducible, not expensive and not invasive technique that permits a good evaluation of hydration status in PD and can drive the nephrologist in his clinical choices. Clinical evaluation, strict control of body weight, diuresis, sodium and fluids intakes, bioimpedance monitoring and serum levels of natriuretic peptides may all together help us to maintain the PD patient euvolemic. abstract_id: PUBMED:32756167 Clinical efficacy of biomarkers for evaluation of volume status in dialysis patients. Volume status is a key parameter for cardiovascular-related mortality in dialysis patients. Although N-terminal pro-B-type natriuretic peptide (NT-proBNP), myeloperoxidase, copeptin, and pro-adrenomedullin have been reported as volume markers, the relationship between body fluid status and volume markers in dialysis patients is uncertain. 
Therefore, we investigated the utility of volume status biomarkers based on body composition monitor (BCM) analyses. We enrolled pre-dialysis, hemodialysis (HD), and peritoneal dialysis (PD) patients and age- and gender-matched healthy Korean individuals (N = 80). BCM and transthoracic echocardiography were performed, and NT-proBNP, myeloperoxidase, copeptin, and pro-adrenomedullin concentrations were measured. Relative hydration status (ΔHS, %) was defined in terms of the hydration status-to-extracellular water ratio with a cutoff of 15%, and hyperhydrated status was defined as ΔHS > 15%. Although there were no significant differences in total body water, extracellular water, or intracellular water among groups, the mean amount of volume overload and hyperhydrated status were significantly higher in HD and PD patients compared with control and pre-dialysis patients. The mean amount of volume overload and hyperhydrated status were also significantly associated with higher NT-proBNP and pro-adrenomedullin levels in HD and PD patients, although not with myeloperoxidase or copeptin levels. Furthermore, they were significantly associated with cardiac markers (left ventricular mass index, ejection fraction, and left atrial diameter) in HD and PD patients compared with those in the control and pre-dialysis groups. On the basis of increased plasma NT-proBNP and pro-adrenomedullin concentrations, we might be able to make predictions regarding the volume overload status of dialysis patients, and thereby reduce cardiovascular-related mortality through appropriate early volume control. abstract_id: PUBMED:27734218 The importance of residual renal function in peritoneal dialysis. Background: Peritoneal dialysis (PD) patients with preserved residual diuresis have a lower risk of death and complications. Here we analyzed associations between residual diuresis and the presence of fluid overload and biomarkers of cardiac strain and nutrition in PD patients. Methods: Among 44 PD patients placed into three subgroups depending on the volume of residual diuresis (group A ≤ 500; group B 600-1900; and group C ≥ 2000 mL/day), we examined: overhydration (OH) assessed by bioimpedance analysis (BIA; yielding OH index OHBIA) and by clinical criteria (edema and hypertension); nutritional status (by subjective global assessment, SGA); metabolic status (electrolytes, serum lipid profile, CRP, and albumin); biomarkers of fluid overload and cardiac strain (N-terminal probrain natriuretic peptide, NT-proBNP, and troponin T, TnT); and echocardiography and chest X-ray. Results: With increasing residual diuresis in groups A, B and C, fewer patients had signs of overhydration defined as OHBIA > 1.1 L (75.0, 42.9 and 33.3%) or peripheral edema (25.0, 21.4 and 0%), and NT-proBNP (15199 ± 16150 vs. 5930 ± 9256 vs. 2600 ± 3907 pg/mL; p < 0.05) and TnT (0.15 ± 0.17 vs. 0.07 ± 0.09 vs. 0.04 ± 0.03 ng/mL; p < 0.05) were significantly lower. Significant differences were also found in ejection fraction, SGA, and total cholesterol, albumin and hemoglobin levels, whereas blood pressures and serum CRP did not differ significantly. Conclusion: Signs of OH and cardiac strain are common in PD patients, even in those with diuresis of 1000-2000 mL/day and with no clinical signs or symptoms, suggesting that even a moderate decrease in residual renal function in PD patients is associated with OH and other complications.
Objectives: In this study our aim was to evaluate the relationship between degree of fluid status and arterial stiffness measured by pulse wave velocity (PWV) in peritoneal dialysis (PD) patients. Fluid status was determined by different methods including fluid overload measured by bioimpedance (Body Composition Monitor, BCM), calf normalized resistivity (CNR), plasma N-terminal fragment of B-type natriuretic peptide (NT-proBNP) and the extracellular to intracellular water ratio (ECW/ICW). Methods: Sixty PD patients were evaluated. They were stratified into normo- and hypervolemic groups according to their fluid overload (FO). CNR was calculated from resistance at 5 kHz using calf bioimpedance spectroscopy. Arterial stiffness was assessed by PWV. Additionally, all patients underwent transthoracic echocardiography and had levels of NT-proBNP measured. Results: PWV was higher in the hypervolemic compared to normovolemic patients (9.99 ± 2.4 m/sec vs 7.48 ± 2.3 m/sec, p < 0.001). Hypervolemic patients had higher NT-proBNP levels (3065 ± 981 pg/mL vs 1095 ± 502 pg/mL, p < 0.001), a higher ECW/ICW ratio (0.93 ± 0.11 vs 0.81 ± 0.08, p < 0.001) and lower CNR (13.7 ± 2.4 vs 16.0 ± 3.3 Ω m³/kg × 10⁻², p = 0.005). NT-proBNP level, ECW/ICW ratio, relative FO, and left ventricular (LV) mass index were positively, and CNR negatively, correlated with PWV. Relative FO and CNR independently predicted PWV in multivariate analysis adjusted for age, duration of PD, body mass index and mean arterial pressure. Conclusions: Arterial stiffness is increased in fluid-overloaded PD patients. Our results indicated that fluid status is an independent predictor of PWV. abstract_id: PUBMED:26714380 Relationship Between Peritoneal Transport Characteristics and Natriuretic Peptides in Peritoneal Dialysis Patients. Few studies have attempted to evaluate the relationship between peritoneal permeability and fluid status in peritoneal dialysis (PD). The aim of the present study was to clarify the relationship between change in the dialysate-to-plasma ratio of creatinine (D/P Cr) and change in fluid status as evaluated by natriuretic peptides. We studied 49 PD patients (29 men, 62 ± 11 years, 36.7% with diabetes) who underwent a peritoneal equilibration test at least twice after PD initiation. We evaluated correlations between the rate of change in the D/P Cr (RC-D/P Cr), the rate of change in human atrial natriuretic polypeptide (RC-αhANP), and the rate of change in brain natriuretic peptide (RC-BNP). The RC-αhANP was strongly correlated with RC-BNP (r = 0.637, p < 0.001). In contrast, the RC-D/P Cr was not correlated with RC-αhANP (r = 0.041, p = 0.781) or with RC-BNP (r = 0.114, p = 0.435). However, positive correlations between RC-D/P Cr and RC-αhANP (r = 0.530, p = 0.006) and between RC-D/P Cr and RC-BNP (r = 0.625, p = 0.001) were observed in patients with increased D/P Cr. The present study showed a positive correlation between change in peritoneal transport characteristics and change in fluid status in patients whose D/P Cr increased. Answer: Brain natriuretic peptide (BNP) has been studied as a biomarker for assessing hydration status in peritoneal dialysis (PD) patients, but its reliability varies. A study found a statistically significant difference in BNP levels among different hydration status groups, with higher levels detected in overhydrated patients, and a positive correlation between hydration status and BNP, particularly stronger in the first 6 months on PD (PUBMED:24943906).
However, the correlation between BNP and bioimpedance spectroscopy (BIS) measurements of hydration status, although statistically significant, appeared to be weaker (PUBMED:25231593). Bioelectrical impedance analysis (BIA) has been suggested as a potentially better indicator of hydration status, as it categorizes PD patients who differ in BNP, residual renal function, and ultrafiltration in a stepwise pattern (PUBMED:22652744). The review of available techniques for assessing hydration status in PD patients indicates that while BNP and N-terminal pro-B-type natriuretic peptide (NT-proBNP) have certain advantages, their interpretation is complicated by underlying cardiac status and are not widely available (PUBMED:32721969). BNP levels in PD patients have limitations in areas of hydration status assessment and optimal dry weight determination, but they do exhibit validated correlation and prognostic value with pathological cardiac structure, cardiovascular adverse events, and survival (PUBMED:19494603). Bioimpedance is considered an accurate, reproducible, and non-invasive technique that provides a good evaluation of hydration status in PD and can guide clinical choices, while serum levels of natriuretic peptides may help estimate hydration status (PUBMED:22652734). In summary, while BNP can correlate with fluid overload and has been used to assess hydration status in PD patients, its reliability as a biomarker for hydration status is not consistent across all patients, particularly due to its interaction with cardiac function and other factors. BIA and other methods may provide complementary or more specific information regarding a patient's volume status (PUBMED:32721969, PUBMED:22652744). Therefore, BNP should not be solely relied upon as a biomarker of hydration status in all PD patients, and a combination of methods may be necessary for accurate assessment.
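To make the analyses discussed in this answer concrete, the following minimal Python sketch classifies bioimpedance overhydration values using the ±1.1 liter cutoffs reported in PUBMED:24943906 and computes a Spearman rank correlation between hydration status and BNP. The paired values are hypothetical and the rank routine ignores ties, so this is only an illustration of the type of analysis, not a reproduction of any study's data.

def hydration_group(overhydration_litres):
    # Cutoffs as reported in PUBMED:24943906: group A < -1.1 L (underhydrated),
    # group B between -1.1 and +1.1 L (normohydrated), group C > +1.1 L (overhydrated).
    if overhydration_litres < -1.1:
        return "underhydrated"
    if overhydration_litres > 1.1:
        return "overhydrated"
    return "normohydrated"

def spearman_rho(x, y):
    # Spearman's rank correlation for untied data (Pearson correlation of the ranks).
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, idx in enumerate(order, start=1):
            r[idx] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mean_rx, mean_ry = sum(rx) / n, sum(ry) / n
    cov = sum((a - mean_rx) * (b - mean_ry) for a, b in zip(rx, ry))
    sx = sum((a - mean_rx) ** 2 for a in rx) ** 0.5
    sy = sum((b - mean_ry) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical paired measurements: bioimpedance overhydration (litres) and BNP (pg/ml).
hs = [-1.5, -0.4, 0.2, 0.9, 1.4, 2.3]
bnp = [40, 85, 120, 160, 310, 520]
print([hydration_group(v) for v in hs])
print(f"Spearman rho = {spearman_rho(hs, bnp):.2f}")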
Instruction: Is BNP assay useful for the diagnosis of acute dyspnea in emergency departments? Abstracts: abstract_id: PUBMED:24227511 Is BNP assay useful for the diagnosis of acute dyspnea in emergency departments? Background: It would be helpful for the emergency physician to have at their disposal a diagnostic tool such as brain natriuretic peptide (BNP). Such an assay is simple, available and reliable. Aims: To report our experience with the role of BNP in the etiological diagnosis of acute dyspnea (AD) in the emergency room (ER) and to assess the cost-effectiveness of such a diagnostic strategy. Methods: A prospective study conducted in the ER of Rabta university teaching hospital of Tunis, from March 1st to June 20th 2010, involving 30 consecutive patients presenting to the emergency department for AD. All patients underwent echocardiography in the acute phase and had BNP measured within the first 4 hours. The echocardiography parameters were collected by a single operator who was unaware of the results of the BNP measurement. Results: The mean age of patients was 72.8 years with a sex ratio of 1.5. AD was of the orthopnea type in 9 cases and stage III NYHA dyspnea in the other patients. Clinical and radiological signs of left heart failure were noted in 30% of cases. Echocardiographic data demonstrated systolic dysfunction in 4 cases, diastolic dysfunction in 3 cases and combined systolic and diastolic dysfunction in 10 cases. BNP levels were below 100 pg/ml in 10 cases, in which the AD was of pulmonary origin. A BNP level between 100 and 400 pg/ml was noted in 3 cases. In our study, the clinical probability of AHF was estimated at 53% before the test and at 100% after the BNP assay. The BNP assay reduced the length of stay in the emergency department by 4 to 5 days and saved nearly 50% of the cost of care per patient. Conclusion: The BNP assay allowed us to confirm AHF in all cases. Given the prognostic value and economic benefit of this test, we recommend its use in the ERs of our country. abstract_id: PUBMED:14574052 The diagnosis of acute congestive heart failure: role of BNP measurements. For the acutely ill patient presenting to the emergency department with dyspnea, an incorrect diagnosis could place the patient at risk for both morbidity and mortality. The stimulus for BNP release is a change in left-ventricular wall stretch and volume overload. A rapid whole blood BNP assay has recently been approved by the FDA (Triage BNP Test, Biosite Inc, San Diego CA) that allows one to quickly evaluate the dyspneic patient, and it set the stage for the recently completed multinational Breathing Not Properly (BNP) study. The Breathing Not Properly Multinational Study was a seven-center, prospective study of 1586 patients who presented to the emergency department with acute dyspnea and had BNP measured with a point-of-care assay upon arrival. BNP was accurate in making the diagnosis of CHF and correlated with severity of disease. It could have reduced clinical indecision by 74%. Algorithms are being developed for use in the emergency room which take into account other illnesses that might raise BNP levels. BNP levels should be extremely important in ruling out and diagnosing decompensated CHF, as long as baseline "euvolemic" BNP values are known. Finally, it is possible that use of BNP levels might not only be helpful in assessing whether or not a dyspneic patient has heart failure, but it may turn out to be useful in making both triage and management decisions.
abstract_id: PUBMED:12446063 Comparative value of Doppler echocardiography and B-type natriuretic peptide assay in the etiologic diagnosis of acute dyspnea. Objectives: We compared the accuracy of B-type natriuretic peptide (BNP) assay with Doppler echocardiography for the diagnosis of decompensated congestive left-heart failure (CHF) in patients with acute dyspnea. Background: Both BNP and Doppler echocardiography have been described as relevant diagnostic tests for heart failure. Methods: One hundred sixty-three consecutive patients with severe dyspnea underwent BNP assay and Doppler echocardiogram on admission. The accuracy of the two methods for etiologic diagnosis was compared on the basis of the final diagnoses established by physicians who were blinded to the BNP and Doppler findings. Results: The final etiologic diagnosis was CHF in 115 patients. Twenty-four patients (15%) were misdiagnosed at admission. The BNP concentration was 1,022 +/- 742 pg/ml in the CHF subgroup and 187 +/- 158 pg/ml in the other patients (p &lt; 0.01). A BNP cutoff of 300 pg/ml correctly classified 88% of the patients (odds ratio [OR] 85 [19 to 376], p &lt; 0.0001), but a high negative predictive value (90%) was only obtained when the cutoff was lowered to 80 pg/ml. The etiologic value of BNP was low in patients with values between 80 and 300 pg/ml (OR 1.85 [0.4 to 7.8], p = 0.4) and also in patients who were studied very soon after onset of acute dyspnea. Among the 138 patients with assessable Doppler findings, a "restrictive" mitral inflow pattern had a diagnostic accuracy for CHF of 91% (OR 482 [77 to 3,011], p &lt; 0.0001), regardless of the BNP level. Conclusions: Bedside BNP measurement and Doppler echocardiography are both useful for establishing the cause of acute dyspnea. However, Doppler analysis of the mitral inflow pattern was more accurate, particularly in patients with intermediate BNP levels or "flash" pulmonary edema. abstract_id: PUBMED:19297125 Diagnosing the cause of acute dyspnea in elderly patients: role of biomarkers in emergencies Acute dyspnea is one of the leading causes of emergency hospitalization of elderly patients. Clinical diagnostic procedures are difficult in this geriatric population. Acute heart failure is the most frequent cause of acute dyspnea in geriatric patients. The use of plasma B natriuretic peptide (BNP) assays in the general population has profoundly improved its medical management. There has also been progress recently for other frequent causes of dyspnea in the elderly, including infection and venous thromboembolic disease. Procalcitonin assays may be useful as a prognostic factor for infectious disease. Nevertheless, the real value of BNP assays in geriatric populations must be clarified by interventional studies. abstract_id: PUBMED:16504634 Use of N-terminal prohormone brain natriuretic peptide assay for etiologic diagnosis of acute dyspnea in elderly patients. Background: B-type peptide assay (brain natriuretic peptide [BNP] and N-terminal prohormone brain natriuretic peptide [NT-proBNP]) is useful for the diagnosis of heart failure (HF), but few data are available on the use of these markers in elderly subjects. The aim of this study was to evaluate NT-proBNP assay for the diagnosis of acute left HF in patients older than 70 years hospitalized for acute dyspnea. Methods: We prospectively enrolled 256 elderly patients with acute dyspnea. 
They were categorized by 2 cardiologists unaware of NT-proBNP values into a cardiac dyspnea subgroup (left HF) and a noncardiac dyspnea subgroup (all other causes). Results: Mean age was 81 +/- 7 years, and 52% of the patients were women. The diagnoses made in the emergency setting were incorrect or uncertain in 45% of cases. The median NT-proBNP value was higher (P &lt; .0001) in patients with cardiac dyspnea (n = 142; 7906 pg/mL) than in patients with noncardiac dyspnea (n = 112; 1066 pg/mL). The area under the receiver operating characteristic curve was 0.86 (95% CI 0.81-0.91). At a cutoff of 2000 pg/mL, NT-proBNP had a sensitivity of 86%, a specificity of 71%, and an overall accuracy of 80% for cardiac dyspnea. The use of 2 cutoffs (&lt; 1200 and &gt; 4500 pg/mL) resulted in an 8% error rate and a gray area englobing 32% of values. Conclusion: NT-proBNP appears to be a sensitive and specific means of distinguishing pulmonary from cardiac causes of dyspnea in elderly patients. An optimal diagnostic strategy requires the use of 2 cutoffs and further investigations of patients with values in the gray area. abstract_id: PUBMED:18388161 Acute respiratory failure in the elderly: diagnosis and prognosis. Acute respiratory failure (ARF) in patients over 65 years is common in emergency departments (EDs) and is one of the key symptoms of congestive heart failure (CHF) and respiratory disorders. Searches were conducted in MEDLINE for published studies in the English language between January 1980 and August 2007, using 'acute dyspnea', 'acute respiratory failure (ARF)', 'heart failure', 'pneumonia', 'pulmonary embolism (PE)' keywords and selecting articles concerning patients aged 65 or over. The age-related structural changes of the respiratory system, their consequences in clinical assessment and the pathophysiology of ARF are reviewed. CHF is the most common cause of ARF in the elderly. Inappropriate diagnosis that is frequent and inappropriate treatments in ED are associated with adverse outcomes. B-type natriuretic peptides (BNPs) help to determine an accurate diagnosis of CHF. We should consider non-invasive ventilation (NIV) in elderly patients hospitalised with CHF or acidotic chronic obstructive pulmonary disease (COPD) who do not improve with medical treatment. Further studies on ARF in elderly patients are warranted. abstract_id: PUBMED:29761678 Ultrasound of Jugular Veins for Assessment of Acute Dyspnea in Emergency Departments and for the Assessment of Acute Heart Failure. Background: When a patient arrives at the emergency department (ED) presenting with symptoms of acute decompensated heart failure (ADHF), it is possible to reach a definitive diagnosis through many different venues, including medical history, physical examination, echocardiography, chest X-ray, and B-type natriuretic peptide (BNP) levels. Point-of-care ultrasound (POCUS) has become a mainstream tool for diagnosis and treatment in the field of emergency medicine, as well as in various other departments in the hospital setting. Currently, the main methods of diagnosis of ADHF using POCUS are pleural B-lines and inferior vena cava (IVC) width and respiratory variation. Objectives: To examine the potential use and benefits of bedside ultrasound of the jugular veins in the evaluation of dyspneic patients for identification of ADHF. Methods: A blood BNP level was drawn from each participant at time of recruitment. The area and size of the internal jugular vein (IJV) during inspiration and expiration were examined. 
Results: Our results showed that the respiratory area change of the IJVs had a specificity and sensitivity of nearly 70% in identifying ADHF in our ED. Conclusions: Ultrasound of the IJV may be a useful tool for the diagnosis of ADHF because it is easy to measure and requires little skill. It is also not affected by patient body habitus.

abstract_id: PUBMED:17413277 Clinical approaches to the diagnosis of acute heart failure. Purpose Of Review: Predicting which patients with congestive heart failure will decompensate is often difficult, and it is often difficult to distinguish congestive heart failure from other causes of acute dyspnea. This review will focus on some of the newer tools used to diagnose acute congestive heart failure in addition to reviewing the utility of more traditional tools. Recent Findings: The integration of pertinent positives and negatives on a routine history, key physical findings on examination and routine noninvasive imaging offers high positive and negative predictive power for the diagnosis of acute heart failure. Measurement of B-type natriuretic peptide and N-terminal proB-type natriuretic peptide offers additional and incremental diagnostic information. Measurement of intrathoracic impedance is a novel and potentially useful tool to track absolute changes in cardiac function and total lung fluid content, and may be useful for the outpatient titration of medical therapy to minimize acute congestive heart failure decompensation. Summary: Consistent accurate diagnosis of decompensated congestive heart failure is possible using no more than a complete history and physical examination along with routine imaging techniques. The ability to diagnose acute congestive heart failure, however, is improved by using serum B-type natriuretic peptide and intrathoracic impedance, both of which offer additive and complementary diagnostic information.

abstract_id: PUBMED:19633758 Use of natriuretic peptide assay in dyspnea. Introduction: Acute dyspnea is a common symptom in patients admitted to hospital via the emergency department. Heart failure is a common cause with high morbidity and mortality, but it is diagnostically challenging. Improvement in diagnostic techniques is needed. Methods: Selective search of Medline. Results: B-type natriuretic peptide (BNP) and its N-terminal fragment (NT-proBNP) are extremely helpful in the diagnosis of heart failure in patients with acute dyspnea. The use of natriuretic peptide assay has also been shown to be cost-effective. Since plasma levels of natriuretic peptides reflect the extent of systolic and diastolic dysfunction, measurement of natriuretic peptides is helpful in estimating overall risk in patients with heart failure or acute myocardial infarction. They have also been used in the management of patients with valvular disease and in tailoring therapy in patients with heart failure. Discussion: BNP and NT-proBNP are quantitative markers of heart failure that are helpful for diagnosis, prognosis and treatment monitoring.

abstract_id: PUBMED:16250184 Variation in the plasma concentration of B-type natriuretic peptide in emergent paroxysmal atrial fibrillation, in acute pulmonary embolism, in acute coronary syndrome and in dilated cardiomyopathy. Our research is based on the critical evaluation of the variation in plasma concentration of B-type natriuretic peptide (BNP), measured in the emergency setting, in paroxysmal atrial fibrillation, acute pulmonary edema, acute coronary syndrome and dilated cardiomyopathy.
The aim of our research was to assess whether variation in BNP concentration may be useful in diagnosis and therapy. Peptide synthesis takes place mainly in the ventricular myocardium. We selected 102 patients: 27 control subjects, and 75 admitted to the emergency and reception department for dyspnea and/or precordialgia and/or palpitations. At the beginning they were considered as one group only, and then they were divided into groups according to the diagnosis: 20 with paroxysmal atrial fibrillation with reversion to sinus rhythm in the first week; 20 with acute pulmonary edema; 22 with acute coronary syndrome without electrocardiographic ST-segment changes; 13 with compensated dilated cardiomyopathy. Our research showed that BNP activation and secretion are especially evident in patients with heart failure, in whom levels remain high until an effective therapy is administered and then settle at values above the reference range, whereas in paroxysmal atrial fibrillation and acute coronary syndrome levels rise and return to, or even fall below, the reference range once the condition resolves. For this reason, repeated BNP measurements allow treatment efficacy to be assessed, even at home, and therapy to be optimized. The main limitation of the diagnostic role of BNP is the need to know each patient's baseline values in advance. Evaluation of the BNP concentration in the acute phase is necessary to differentiate patients with dyspnea due to heart failure from those with pulmonary pathologies, while BNP assessment in acute coronary syndrome predicted death (exitus) or the development of heart failure.

Answer: Yes, BNP assay is useful for the diagnosis of acute dyspnea in emergency departments. Studies have shown that BNP levels are elevated in patients with acute heart failure (AHF), and the assay can help distinguish between cardiac and non-cardiac causes of dyspnea. In a study conducted in the emergency room of Rabta university teaching hospital of Tunis, the BNP assay confirmed AHF in all cases, reduced the length of stay in the emergency department, and saved nearly 50% of the cost of care per patient (PUBMED:24227511). The Breathing Not Properly Multinational Study found that BNP was accurate in making the diagnosis of congestive heart failure (CHF) and could have reduced clinical indecision by 74% (PUBMED:14574052). Another study compared the accuracy of BNP assay with Doppler echocardiography and found that while both are useful, Doppler analysis of the mitral inflow pattern was more accurate, particularly in patients with intermediate BNP levels or "flash" pulmonary edema (PUBMED:12446063). Furthermore, BNP assays have been shown to be particularly helpful in the geriatric population, where clinical diagnostic procedures are difficult, and acute heart failure is a frequent cause of acute dyspnea (PUBMED:19297125). The NT-proBNP assay has been evaluated for the diagnosis of acute left heart failure in elderly patients and was found to be a sensitive and specific means of distinguishing pulmonary from cardiac causes of dyspnea (PUBMED:16504634). B-type natriuretic peptides (BNPs) also help to determine an accurate diagnosis of CHF in elderly patients presenting with acute respiratory failure (ARF) (PUBMED:18388161).
In summary, BNP and NT-proBNP assays are valuable tools for the diagnosis of acute dyspnea in emergency departments, aiding in the differentiation between cardiac and non-cardiac causes, and improving the management and outcomes of patients with acute dyspnea.
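The cutoff-based performance figures quoted above (sensitivity, specificity, predictive values, overall accuracy) all follow from a simple 2x2 classification table. The Python sketch below shows the arithmetic; the cell counts are hypothetical, merely chosen to roughly reproduce the reported 86%/71%/80% figures for the 2000 pg/mL NT-proBNP cutoff, since the exact counts are not given in the abstracts.

```python
# Illustrative only: how sensitivity, specificity, predictive values and accuracy
# are derived from a 2x2 table for a dichotomized biomarker cutoff.
# The counts below are hypothetical, not taken from the cited studies.

def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic accuracy measures for a positive/negative test result."""
    return {
        "sensitivity": tp / (tp + fn),            # true cases above the cutoff
        "specificity": tn / (tn + fp),            # non-cases below the cutoff
        "ppv": tp / (tp + fp),                    # positive predictive value
        "npv": tn / (tn + fn),                    # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical split of 142 cardiac and 112 non-cardiac patients at a cutoff:
print(diagnostic_metrics(tp=122, fn=20, fp=32, tn=80))
# -> roughly 86% sensitivity, 71% specificity, 80% accuracy
```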
Instruction: Can the ankle brachial pressure index (ABPI) and carotis intima media thickness (CIMT) be new early stage markers of subclinical atherosclerosis in patients with rheumatoid arthritis? Abstracts:

abstract_id: PUBMED:25777147 Can the ankle brachial pressure index (ABPI) and carotis intima media thickness (CIMT) be new early stage markers of subclinical atherosclerosis in patients with rheumatoid arthritis? Background: It takes years for atherosclerosis to manifest symptoms. However, it needs to be identified earlier because of the premature cardiovascular risk factors in patients with rheumatoid arthritis (RA). In this study, we aimed to investigate the effect of atherosclerosis on the ankle brachial pressure index (ABPI) and carotis intima media thickness (CIMT) in patients with RA. Methods: RA patients attending the rheumatology clinic were examined retrospectively; then we called them for the measurements of ABPI and CIMT prospectively. Subjects were divided into four groups, as follows (Table 1): group 1 comprised RA patients with an ABPI less than 0.9; group 2 included RA patients with an ABPI between 0.9 and 1.2; group 3 was made up of RA patients with an ABPI greater than 1.2; and group 4 included patients without RA with an ABPI between 0.9 and 1.2 as a control group. Patients' demographic data were recorded. Hypertension (HT), diabetes mellitus, ABPI and CIMT measurements were taken by specialists. Duration of RA and disease scores (disease activity score-28, health assessment questionnaire score and visual assessment score) were recorded. Results: The prevalence of peripheral vascular disease in patients with RA was twice as high as that in the normal population of equivalent age. Patients in group 2, with RA and normal ABPI, exhibited a significantly higher mean CIMT (mm) compared with the control group (p < 0.01), despite having normal ABPI. This confirms that these patients have a higher risk of stroke compared with the control group. Group 1's newly diagnosed HT (p < 0.01) and systolic blood pressure (SBP) values (p < 0.01) were higher and statistically significant when compared with group 4 (control group); in addition, significant plaque levels were observed in the carotid arteries (p < 0.01). Group 3 patients had a similar history of HT and increased SBP compared with patients in group 4 (p < 0.01), and had similar characteristics to group 1. No statistically significant differences were found between the groups in terms of inflammatory markers such as C-reactive protein and rheumatoid factor, anti-cyclic citrullinated peptide and white blood cell counts. Conclusion: Based on the present findings, patients with RA need to be evaluated in the early stage of the disease for subclinical peripheral artery disease using the ABPI, as well as CIMT, which is also a non-invasive technique, in terms of cerebrovascular events. Inflammatory markers exhibited no statistically significant difference. We think that the atherosclerotic process stems not only from the inflammatory effects of RA, but also perhaps from its immunological nature.
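As a reminder of how the grouping in the abstract above is obtained, the ABPI is the ratio of ankle to brachial systolic pressure, and the study's thresholds (<0.9, 0.9-1.2, >1.2) map directly onto groups 1-3. The snippet below is a minimal sketch under that assumption; the function names and example pressures are illustrative and do not come from the study.

```python
# Minimal sketch of ABPI calculation and the grouping used in PUBMED:25777147.
# Example pressures are invented; RA status is handled separately in the study.

def abpi(ankle_systolic_mmHg: float, brachial_systolic_mmHg: float) -> float:
    """Ankle brachial pressure index = ankle systolic / brachial systolic pressure."""
    return ankle_systolic_mmHg / brachial_systolic_mmHg

def abpi_group(value: float) -> str:
    if value < 0.9:
        return "group 1: ABPI < 0.9 (suggestive of peripheral arterial disease)"
    if value <= 1.2:
        return "group 2: ABPI 0.9-1.2 (normal range)"
    return "group 3: ABPI > 1.2 (possible arterial incompressibility)"

print(abpi_group(abpi(105, 130)))   # 0.81 -> group 1
```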
abstract_id: PUBMED:30081802 Characteristics and clinical associations of arterial stiffness and subclinical atherosclerosis in patients with rheumatoid arthritis. The aim of the study was to evaluate parameters of arterial stiffness (AS) (carotid-femoral pulse wave velocity (cf PWV), central pulse pressure (PP), cardio-ankle vascular index (CAVI) and stiffness gradient between aorta and brachial artery) and subclinical atherosclerosis (carotid intima-media thickness (CIMT) and ankle-brachial index (ABI)) according to inflammatory activity in patients with RA. Materials And Methods: 85 patients with RA (EULAR/ACR 2010) were examined (age 59.7±14.3 years, 64.7% with arterial hypertension (AH)). Median duration of RA was 7 years. PWV and central pulse wave were assessed by applanation tonometry. Arterial stiffness gradient was calculated as a ratio between carotid-femoral and carotid-radial PWV: its elevation ≥1 was considered as arterial stiffness mismatch. ABI and CAVI were measured by sphygmometry. CIMT was assessed according to the standard protocol; CIMT ≥0.9 mm was considered as a subclinical marker of atherosclerosis.

abstract_id: PUBMED:22490583 Impaired brachial artery flow-mediated dilation and increased carotid intima-media thickness in rheumatoid arthritis patients. Background: Carotid artery intima-media thickness (CIMT) and brachial artery flow-mediated dilation percentage (FMD%) are common parameters used for detecting subclinical atherosclerosis. This study compared subclinical atherosclerosis of the carotid and brachial arteries in rheumatoid arthritis (RA) patients and healthy controls using high resolution ultrasonography. We also investigated their correlation with clinical factors and the association between FMD% and CIMT. Methods: One hundred and two RA patients and 46 age-gender matched healthy controls were included in the study. FMD of the brachial artery and CIMT were measured ultrasonographically. Patients with diabetes mellitus, hypertension, renal failure, history of cardiovascular or cerebrovascular disease were excluded. Subjects who were receiving or used high dose steroids were also excluded. Results: The CIMT was significantly higher in patients than that in the control group ((0.697±0.053) vs. (0.554±0.051) mm, P<0.001), whereas brachial artery FMD% was lower in patients than that in the controls ((5.454±2.653)% vs. (8.477±2.851)%, P<0.001). CIMT was related to age, disease duration, tender and swollen joint score, C-reactive protein, systolic blood pressure and high-density lipoprotein. However, FMD% was only associated with systolic blood pressure. There was no significant correlation between CIMT and FMD%. Conclusions: Compared with the healthy control subjects, RA patients without clinically evident cardiovascular disease had subclinical atherosclerosis in terms of impaired FMD% and increased CIMT. FMD% and CIMT may measure a different stage of subclinical atherosclerosis in RA patients.

abstract_id: PUBMED:25139185 Epicardial adipose tissue thickness, flow-mediated dilatation of the brachial artery, and carotid intima-media thickness: Associations in rheumatoid arthritis patients. Aim: The purpose of this work was to evaluate epicardial adipose tissue (EAT), carotid intima-media thickness (CIMT), and flow-mediated dilatation (FMD) of the brachial artery in rheumatoid arthritis (RA) patients using ultrasonographic methods. Interrelationships between these three parameters in RA patients were also investigated.
Methods: EAT thickness, CIMT, and FMD were measured by ultrasonography. We measured the disease activity score (DAS28), health assessment questionnaire (HAQ) score, and C-reactive protein (CRP) levels. Spearman or Pearson correlation analysis was used to evaluate the association between clinical findings, CIMT, FMD, and EAT. Results: A total of 90 RA patients [19 men, mean age 54 years (range 21-76 years)] and 59 age- and gender-matched control subjects [17 men, mean age 54 years (range 26-80 years)] were included in the study. Patients with RA had a mean 4.34 DAS28 points (range 0-40 points) and the mean duration of the disease was 77.1 months (range 1-360 months). We found that RA patients had thicker EAT (7.7 ± 1.7 mm vs 6.2 ± 1.8 mm, p < 0.001), increased CIMT [0.9 (0.5-1.2) mm vs 0.6 (0.4-0.9) mm, p < 0.001], and decreased FMD values [5.7 % (- 23.5 to 20 %) vs. 8.5 % (- 4.7 to 22.2 %), p = 0.028] when compared to control subjects. CRP levels were significantly higher in the RA group [0.81 (range 0.1-13.5) vs 0.22 (range 0.05-12), p < 0.001]. EAT thickness was negatively correlated with FMD (r = - 0.26, p < 0.001) and positively correlated with CIMT values (r = 0.52, p < 0.001). CIMT also negatively correlated with FMD (r = - 0.29, p < 0.001). Conclusion: EAT can be simply measured by echocardiography and correlated with FMD and CIMT. It can be used as a first-line measurement for estimating burden of atherosclerosis in RA patients.

abstract_id: PUBMED:32779171 Investigating the relationship between carotid intima-media thickness, flow-mediated dilatation in brachial artery and nuclear heart scan in patients with rheumatoid arthritis for evaluation of asymptomatic cardiac ischemia and atherosclerotic changes. Background: Cardiovascular disease is the most common cause of death worldwide. In order to prevent and treat heart diseases, we need to understand how non-cardiac diseases interact with the cardiovascular system. Rheumatoid arthritis is a chronic immune/inflammatory process which leads to subclinical atherosclerosis and increases cardiovascular disease. We examined the patients who were referred to our nuclear medicine center for MPI and correlated their findings with flow-mediated dilatation (FMD) of the brachial artery and carotid intima-media thickness (CIMT) in rheumatoid arthritis patients. Material And Methods: A total of 30 known cases with rheumatoid arthritis were referred to our department for MPI; the single-photon emission computed tomography (SPECT) images were visually and quantitatively evaluated by two nuclear medicine physicians, and the correlation of these findings with the measured FMD and CIMT was evaluated using the ultrasonography data. Demographic information such as age and gender and medical history (risk factors, cardiovascular signs and symptoms, lab findings, medication etc…) were recorded in questionnaire sheets and were analyzed with SPSS 20. Chi-square and Student's t-test were used for further analysis. Results: The mean CIMT (R = 0.452 ± 0.07, L = 0.447 ± 0.08) and %FMD (R = 7.22 ± 8.66, L = 6.42 ± 11.88) were measured for all subjects. Age was the only parameter correlated with both right and left CIMT (P = 0.033 and P = 0.024, respectively). Among the patients, 26.7% had mild ischemia (SSS < 8) and 3 of them suffered from active rheumatoid arthritis. All patients with RA showed normal ventricular ejection fraction and normal volumes and among them, 93.3% had normal functional performance (normal wall motion…).
Moreover, the mean CIMT and %FMD were not significantly different in ischemic and non-ischemic patients. Among ischemic patients, only the course of the disease was associated with CIMT, and none of the parameters was correlated with FMD. Conclusions: There was no statistically significant difference in CIMT or FMD values between ischemic and non-ischemic patients, nor in relation to functional performance. Across the whole population, age, and within the ischemic group, the course of the disease, were the only variables correlated with CIMT.

abstract_id: PUBMED:38028704 Comparison of the intima-media thickness of the common carotid artery in patients with rheumatoid arthritis: A single-center cross-sectional case-control study, and a brief review of the literature. Background And Aim: Rheumatoid arthritis (RA) is an autoimmune chronic inflammatory disease affecting 0.5%-1% of adults worldwide. The carotid intima-media thickness (CIMT) is a simple, reliable, noninvasive marker for subclinical atherosclerosis. The aim of this study was to compare the intima-media thickness of the common carotid artery in patients with RA with that of healthy patients. Methods: In this case-control study, subjects were recruited from the patients who presented to a private rheumatology clinic. RA was documented by a rheumatologist. All subjects underwent an ultrasound examination of the carotid artery to assess CIMT. Subjects with RA filled out the disease activity score (DAS28) questionnaire. Results: Sixty-two subjects (31 subjects with RA and 31 healthy subjects) took part in the study. The mean age of the subjects in the RA and the control groups was 42.39 ± 12.98 and 44.48 ± 13.56 years, respectively. Values of CIMT were significantly greater in RA subjects compared with their healthy counterparts (p < 0.001). The CIMT increased significantly with increased disease severity (r = 0.73). Subjects were divided into two age groups (≤40 and >40 years). A comparison of CIMT in the mentioned subgroups revealed a remarkable difference in CIMT values between those of the RA patients and those of their control counterparts in both age groups (p = 0.002 and p < 0.001 for those below and above 40 years, respectively). Conclusion: CIMT could be used as an efficient clinical index for identifying the early stages of atherosclerosis and predicting cardiovascular events following atherosclerosis in RA patients.

abstract_id: PUBMED:25366205 Subclinical atherosclerosis in patients with rheumatoid arthritis by utilizing carotid intima-media thickness as a surrogate marker. Background & Objectives: Patients with rheumatoid arthritis (RA) are more prone to accelerated atherosclerosis, and Asian Indians as an ethnic group are predisposed to a high risk of premature atherosclerosis. However, sparse data are available regarding the burden of atherosclerosis among asymptomatic adult patients with RA in south India. We studied the burden of asymptomatic atherosclerosis in adult south Indian patients with RA at Tirupati, Andhra Pradesh, India, utilizing carotid intima-media thickness (CIMT) as a surrogate marker. Methods: Ultrasound examination of the carotids and CIMT measurement (mm) were carried out in 32 patients with RA, 32 age- and gender-matched normal controls, and 32 patients with atherosclerosis and angiographically proven coronary artery disease.
The CIMT values in patients with CAD and normal controls were used to derive the appropriate cut-off value of CIMT for defining atherosclerosis that would be applicable for the ethnic population studied. Results: Patients with RA had a higher mean CIMT (mm) compared with normal control subjects (0.598 ± 0.131 vs 0.501 ± 0.081; P = 0.001). Carotid plaque was found more frequently among the cases compared with normal controls [5/32 (15.6%) vs 0/32 (0%), P = 0.020]. Using this cut-off value derived by the receiver operating characteristic curve method (≥ 0.57 mm; sensitivity 84.4%; specificity 90.6%) and the 75th percentile value among normal controls (≥ 0.55 mm) as surrogate markers, the presence of subclinical atherosclerosis was significantly more frequent among asymptomatic patients with RA compared with normal controls [(59.3 vs 12.5%; P < 0.001) and (62.5 vs 25%; P < 0.001), respectively]. Interpretation & Conclusions: Based on the present findings, CIMT appears to be a useful surrogate marker for detecting subclinical atherosclerosis in adult Indian patients with RA.

abstract_id: PUBMED:26555551 Relationship of osteoprotegerin to pulse wave velocity and carotid intima-media thickness in rheumatoid arthritis patients. Objective: Osteoprotegerin (OPG) is considered an important biomarker in cardiovascular (CV) disease. CV disease is the most common cause of mortality in patients with rheumatoid arthritis (RA), a consequence of accelerated atherosclerosis. The present study aimed to evaluate the relationship of serum OPG levels to arterial stiffness, carotid intima-media thickness (CIMT), and clinical and laboratory indices in RA patients. Patients And Methods: Included in the study were 68 RA patients with no history or signs of CV disease and 48 healthy subjects. Disease activity was assessed by the 28-joint disease activity score (DAS28) in RA patients. Serum OPG level was measured using enzyme-linked immunosorbent assay (ELISA). Carotid femoral pulse wave velocity (PWV) was measured as an index of arterial stiffness and CIMT was evaluated by carotid ultrasonography. Results: The mean serum OPG level was significantly higher in RA patients than controls (p < 0.001). Mean PWV and CIMT were also significantly increased in RA patients compared to controls (both p < 0.001). In RA patients, serum OPG level was significantly correlated with PWV and CIMT, as well as rheumatoid factor (RF) and anti-cyclic citrullinated peptide (anti-CCP) antibody; but not with DAS28, high-sensitivity C-reactive protein (hsCRP), or erythrocyte sedimentation rate. Conclusion: Serum OPG levels were increased and correlated with CIMT and PWV in RA patients. In addition to PWV and CIMT, OPG may be a useful biomarker for CV risk management in RA patients.

abstract_id: PUBMED:27028097 The relation between ischemia modified albumin levels and carotid intima media thickness in patients with rheumatoid arthritis. Background: Cardiovascular diseases, notably atherosclerotic heart disease, are among the most important causes of mortality and morbidity in patients with rheumatoid arthritis (RA). Ischemia modified albumin (IMA) is a potential marker that can be used to assess atherosclerosis-related myocardial ischemia. Another frequently used marker for the assessment of atherosclerotic lesions is the carotid intima media thickness (CIMT). Aim: To evaluate the role that IMA plays in atherosclerosis development and its clinical usability in patients with RA, by assessing the values of IMA and CIMT.
Methods And Materials: Our prospective study was conducted between June 2012 and March 2013 at the Rheumatology Department of Necmettin Erbakan Meram Medical School, Turkey. Fifty-two RA patients, diagnosed according to the 1987 criteria of the American College of Rheumatology, and an age- and sex-matched control group of 46 healthy subjects were included in this study. Results: No significant difference was detected between the groups with respect to age, sex and body mass index. In the patient group the IMA and CIMT values were found to be 0.37 ± 0.12 absorbance units (ABSU) and 0.80 ± 0.22 mm, respectively, while in the control group they were 0.31 ± 0.11 ABSU and 0.51 ± 0.18 mm, respectively. The IMA and CIMT values were significantly higher in the patient group (P = 0.022 and P < 0.0001, respectively). A positive correlation was found between IMA, CIMT and Disease Activity Score of 28 joints (P = 0.016 and P = 0.002, respectively). Conclusion: Since the values of IMA were higher in the patient group compared to controls and because of its correlation with CIMT, we suggest the use of IMA as an early marker of atherosclerosis in RA patients.

abstract_id: PUBMED:26666335 Causes of changes in carotid intima-media thickness: a literature review. Atherosclerosis causes significant morbidity and mortality. Carotid intima-media thickness (CIMT) predicts future cardiovascular and ischaemic stroke incidence. CIMT, a measure of atherosclerotic disease, can be reliably determined in vivo by carotid ultrasound. In this review, we determined that CIMT is associated with traditional cardiovascular risk factors such as age, sex, race, smoking, alcohol consumption, habitual endurance exercise, blood pressure, dyslipidemia, dietary patterns, risk-lowering drug therapy, glycemia, hyperuricemia, obesity-related anthropometric parameters, obesity and obesity-related diseases. We also found that CIMT is associated with novel risk factors, including heredity, certain genotypic indices, anthropometric cardiovascular parameters, rheumatoid arthritis, immunological diseases, inflammatory cytokines, lipid peroxidation, anthropometric hemocyte parameters, infectious diseases, vitamin D, matrix metalloproteinases, and other novel factors and diseases. However, the conclusions are inconsonant; the underlying causes of these associations remain to be further explored.

Answer: Yes, the ankle brachial pressure index (ABPI) and carotid intima-media thickness (CIMT) can be considered as new early stage markers of subclinical atherosclerosis in patients with rheumatoid arthritis (RA). Studies have shown that patients with RA have a higher prevalence of peripheral vascular disease, and even those with normal ABPI exhibit a significantly higher mean CIMT compared with control groups, indicating a higher risk of stroke (PUBMED:25777147). Additionally, parameters such as arterial stiffness and subclinical atherosclerosis, including CIMT and ABI, have been evaluated in relation to inflammatory activity in patients with RA, suggesting their potential as markers (PUBMED:30081802). Furthermore, research comparing RA patients with healthy controls has found that RA patients without clinically evident cardiovascular disease had subclinical atherosclerosis in terms of impaired flow-mediated dilation (FMD) and increased CIMT (PUBMED:22490583).
Epicardial adipose tissue thickness, FMD of the brachial artery, and CIMT have also been shown to be interrelated in RA patients, with correlations suggesting their use in estimating the burden of atherosclerosis (PUBMED:25139185). Moreover, studies have investigated the relationship between CIMT, FMD in the brachial artery, and nuclear heart scan in RA patients, although no significant statistical difference was found between ischemic and non-ischemic patients in terms of CIMT and FMD values (PUBMED:32779171). However, the comparison of CIMT in RA patients with healthy patients has shown that CIMT is significantly greater in RA subjects, with the thickness increasing with disease severity (PUBMED:38028704). Additionally, CIMT has been utilized as a surrogate marker for detecting subclinical atherosclerosis in RA patients, with findings indicating a higher burden of atherosclerosis among asymptomatic patients with RA compared with normal controls (PUBMED:25366205). Serum osteoprotegerin levels, which have been correlated with CIMT and arterial stiffness, may also serve as a useful biomarker for cardiovascular risk management in RA patients (PUBMED:26555551). Finally, the relation between ischemia modified albumin levels and CIMT in RA patients has been explored, suggesting the use of IMA as an early marker of atherosclerosis in RA patients due to its correlation with CIMT (PUBMED:27028097). Overall, the literature supports the use of ABPI and CIMT as early stage markers for subclinical atherosclerosis in patients with RA. These markers are non-invasive and can be used to identify individuals at higher risk for cardiovascular events. The studies collectively suggest that despite the absence of clinical cardiovascular disease, RA patients exhibit signs of atherosclerosis, which can be detected through these measures. The findings indicate that RA itself may contribute to the atherosclerotic process, not solely through traditional cardiovascular risk factors but also due to the inflammatory and possibly immunological nature of the disease (PUBMED:25777147). In summary, ABPI and CIMT are valuable tools for the early detection of subclinical atherosclerosis in RA patients, which is crucial for the timely management and prevention of cardiovascular complications associated with RA.
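The ROC-derived CIMT cutoff mentioned above (PUBMED:25366205 reported >=0.57 mm, with sensitivity 84.4% and specificity 90.6%) illustrates a generic procedure: sweep candidate thresholds, compute sensitivity and specificity at each, and choose an operating point. The sketch below uses synthetic CIMT values and labels, and the Youden index is only one common way to pick the threshold; it is not a re-analysis of the cited data.

```python
# Sketch of ROC-based cutoff selection for a continuous marker such as CIMT.
# Values and labels are synthetic; scikit-learn is assumed to be available.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

cimt_mm = np.array([0.45, 0.50, 0.52, 0.55, 0.58, 0.60, 0.63, 0.70, 0.75, 0.80])
atherosclerosis = np.array([0, 0, 0, 1, 0, 1, 1, 1, 1, 1])  # synthetic case/control labels

fpr, tpr, thresholds = roc_curve(atherosclerosis, cimt_mm)
best = np.argmax(tpr - fpr)            # Youden index J = sensitivity + specificity - 1
print(f"AUC = {roc_auc_score(atherosclerosis, cimt_mm):.2f}; "
      f"suggested cutoff = {thresholds[best]:.2f} mm "
      f"(sensitivity {tpr[best]:.0%}, specificity {1 - fpr[best]:.0%})")
```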
Instruction: Can Flemish women in semi-rural areas be motivated to attend organized breast cancer screening? Abstracts:

abstract_id: PUBMED:10367299 Can Flemish women in semi-rural areas be motivated to attend organized breast cancer screening? Background: The implementation of organized breast cancer screening in Flanders was prepared by means of pilot projects within a multicenter study. In the semi-rural district of Kontich (Province of Antwerp, Flanders) a pilot project was performed using a mobile screening unit. Compared to international standards, the attendance rate for this pilot project (i.e. 34%) was low. Non-organized screening, which already exists in Flanders, at least partly explains this low attendance rate for the organized screening. The main purpose of our study was to investigate the experience of the pilot target group with respect to the organized breast cancer screening in the district of Kontich, in order to maximize the conditions for a high attendance rate in the organized breast cancer screening programme throughout Flanders. Methods: With a random numbers procedure, performed by the computer, 500 women were selected among those who were invited to the first screening round of the breast cancer screening programme in the district of Kontich (n = 6,897). These 500 randomly selected women were asked to cooperate with a face-to-face interview. The questionnaire used dealt with the different aspects of the organized mammographic screening which were expected to influence the decision to attend. Results: There were 348 women who responded to the questionnaire (69.6%): 138 of them were attenders and 210 were non-attenders at the organized breast cancer screening. Attenders and non-attenders at the organized breast cancer screening in the district of Kontich had different views about various aspects of the screening programme. The percentages of those who thought that an item was important or very important to them, were for the 138 attenders and the 210 non-attenders respectively: "to receive a personal invitation letter": 90.6 vs. 48.1% (p < 0.05); "a preliminary visit to the GP": 9.4 vs. 34.3% (p < 0.05); "possibility of examination outside business hours": 15.9 vs. 30.0% (p < 0.05). Conclusions: Although the deployment of a mobile unit in the semi-rural area of the district of Kontich was productive, the attendance rate was still too low compared to international standards. To increase the attendance rate, the following interventions should be considered: devising the personal invitation letter in a more attractive way, activating and stimulating the important motivational role of the GP in persuading women to attend the organized screening programme and offering the invited population the possibility to have a mammographic examination performed outside business hours. Appropriate measures are being explored.

abstract_id: PUBMED:35886089 Organized Breast and Cervical Cancer Screening: Attendance and Determinants in Rural China. To evaluate the attendance and determinants of organized cervical and breast cancer (two-cancer) screening, especially higher-level factors, we conducted a cross-sectional survey in central China from June 2018 to November 2019 among 1949 women (age ≥ 35 years). We examined organizer-level factors, provider-level factors, receiver-level factors and attendance and participation willingness of screening.
The results indicate that the attendance and participation willingness of organized two-cancer screening were 61.19% and 77.15%, respectively. After adjustment for potential confounders, women who received screening notification were more likely to have greater participation willingness and higher attendance than those who received no notification (adjusted odds ratio [aOR] = 1.59, 95% confidence interval [CI]: 1.27-1.99; aOR = 98.03, 95% CI: 51.44-186.82, respectively). Compared with being notified about screening by GPs, being notified by community women's leaders or by other community leaders was more likely to lead to greater willingness to participate again (aOR = 2.86, 95% CI: 1.13-7.24; aOR = 3.27, 95% CI: 1.26-8.48, respectively) and to recommending screening to others (aOR = 2.18, 95% CI: 1.02-4.65; aOR = 4.14, 95% CI: 1.84-9.30, respectively). The results suggest that notification of women about screening by community leaders is an important organizer-level factor. As a part of public health services, the design and implementation of optimal cancer screening strategies may require public-sector involvement at the organizer level instead of a one-man show by the health sector.

abstract_id: PUBMED:38282637 Evaluating an Enterprise-Wide Initiative to enhance healthcare coordination for rural women Veterans using the RE-AIM framework. Introduction: The Veterans Health Administration (VA) Office of Rural Health (ORH) and Office of Women's Health Services (OWH) in FY21 launched a three-year Enterprise-Wide Initiative (EWI) to expand access to preventive care for rural, women Veterans. Through this program, women's health care coordinators (WHCC) were funded to coordinate mammography, cervical cancer screening and maternity care for women Veterans at selected VA facilities. We conducted a mixed-methods evaluation using the RE-AIM framework to assess the program implementation. Materials And Methods: We collected quantitative data from the 14 program facilities on reach (i.e., Veterans served by the program), effectiveness (e.g., cancer screening compliance, communication), adoption, and maintenance of women's health care coordinators (WHCC) in FY2022. Implementation of the program was examined through semi-structured interviews with the facility WHCC funding initiator (e.g., the point of contact at the facility who initiated the request for WHCC funding), WHCCs, and providers. Results: Reach: The number of women Veterans and rural women Veterans served by the WHCC program grew (by 50% and 117% respectively). The program demonstrated effectiveness as screening rates increased for cervical and breast cancer screening (+0.9% and +.01%, respectively). Also, maternity care coordination phone encounters with Veterans grew 36%. Adoption: All facilities implemented care coordinators by quarter two of FY22. Implementation: Qualitative findings revealed facilitators and barriers to successful program implementation and care coordination. Maintenance: The EWI facilitated the recruitment and retention of WHCCs at respective VA facilities over time. Implications: In rural areas, WHCCs can play a critical role in increasing Reach and effectiveness. The EWI proved to be a successful care coordination model that can be feasibly Adopted, Implemented, and Maintained at rural VA facilities.
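The adjusted odds ratios quoted above come from multivariable logistic regression, but the underlying quantity is the familiar odds ratio from a 2x2 exposure-by-attendance table. A minimal sketch of the unadjusted calculation, with a Woolf (log-based) confidence interval and invented counts, is shown below.

```python
# Unadjusted odds ratio and 95% CI for attendance by notification status.
# Counts are hypothetical; the studies above report adjusted ORs from regression models.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a, b = attended / not attended among notified; c, d = same among not notified."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)    # SE of log(OR), Woolf's method
    return or_, (math.exp(math.log(or_) - z * se), math.exp(math.log(or_) + z * se))

print(odds_ratio_ci(a=900, b=200, c=290, d=550))
```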
Guideline-based mammography screening is essential to lowering breast cancer mortality, yet women residing in rural areas have lower rates of up to date (UTD) breast cancer screening compared to women in urban areas. We tested the comparative effectiveness of a tailored DVD and the DVD plus patient navigation (PN) intervention vs. Usual Care (UC) for increasing the percentage of rural women (aged 50 to 74) UTD for breast cancer screening, as part of a larger study. Four hundred and two women who were not UTD for breast cancer screening, eligible, and between the ages of 50 and 74 were recruited from rural counties in Indiana and Ohio. Consented women were randomly assigned to one of three groups after baseline assessment of sociodemographic variables, health status, beliefs related to cancer screening tests, and history of receipt of guideline-based screening. The mean age of participants was 58.2 years with 97% reporting White race. After adjusting for covariates, 54% of women in the combined intervention (DVD + PN) had a mammogram within the 12-month window, over 5 times the rate of becoming UTD compared to UC (OR = 5.11; 95% CI = 2.57, 10.860; p < 0.001). Interactions of the intervention with other variables were not significant. Significant predictors of being UTD included: being in contemplation stage (intending to have a mammogram in the next 6 months), being UTD with other cancer screenings, having more disposable income and receiving a reminder for breast screening. Women who lived in areas with greater Area Deprivation Index scores (a measure of poverty) were less likely to become UTD with breast cancer screening. For rural women who were not UTD with mammography screening, the addition of PN to a tailored DVD significantly improved the uptake of mammography. Attention should be paid to certain groups of women most at risk for not receiving UTD breast screening to improve breast cancer outcomes in rural women.

abstract_id: PUBMED:11879559 Comparison of various characteristics of women who do and do not attend for breast cancer screening. Background: Information regarding the characteristics and health of women who do and do not attend for breast cancer screening is limited and representative data are difficult to obtain. Methods: Information on age, deprivation and prescriptions for various medications was obtained for all women at two UK general practices who were invited to breast cancer screening through the National Health Service Breast Screening Programme. The characteristics of women who attended and did not attend screening were compared. Results: Of the 1064 women invited to screening from the two practices, 882 (83%) attended screening. Screening attenders were of a similar age to non-attenders but came from significantly less deprived areas (30% of attenders versus 50% of non-attenders came from the most deprived areas, P < 0.0001) and were more likely to have a current prescription for hormone replacement therapy (32% versus 19%, P < 0.0001). No significant differences in recent prescriptions of medication for hypertension, heart disease, hypercholesterolaemia, diabetes mellitus, asthma, thyroid disease or depression/anxiety were observed between attenders and non-attenders.
Conclusion: Women who attend the National Health Service Breast Screening Programme come from less deprived areas and are more likely to have a current prescription for hormone replacement therapy than non-attenders, but do not differ in terms of age or recent prescriptions for various other medications.

abstract_id: PUBMED:26844118 Screening mammography uptake within Australia and Scotland in rural and urban populations. Objective: To test the hypothesis that rural populations had lower uptake of screening mammography than urban populations in the Scottish and Australian setting. Method: Scottish data are based upon information from the Scottish Breast Screening Programme Information System describing uptake among women residing within the NHS Highland Health Board area who were invited to attend for screening during the 2008 to 2010 round (N = 27,416). Australian data were drawn from the 2010 survey of the 1946-51 cohort of the Australian Longitudinal Study on Women's Health (N = 9890 women). Results: Contrary to our hypothesis, results indicated that women living in rural areas were not less likely to attend for screening mammography compared to women living in urban areas in both Scotland (OR for rural = 1.17, 95% CI = 1.06-1.29) and Australia (OR for rural = 1.15, 95% CI = 1.01-1.31). Conclusions: The absence of rural-urban differences in attendance at screening mammography demonstrates that rurality is not necessarily an insurmountable barrier to screening mammography.

abstract_id: PUBMED:12115366 Breast and cervical carcinoma screening practices among women in rural and nonrural areas of the United States, 1998-1999. Background: Prior studies have suggested that women living in rural areas may be less likely than women living in urban areas to have had a recent mammogram and Papanicolaou (Pap) test and that rural women may face substantial barriers to receiving preventive health care services. Methods: The authors examined both breast and cervical carcinoma screening practices of women living in rural and nonrural areas of the United States from 1998 through 1999 using data from the Behavioral Risk Factor Surveillance System. The authors limited their analyses of screening mammography and clinical breast examination to women aged 40 years or older (n = 108,326). In addition, they limited their analyses of Pap testing to women aged 18 years or older who did not have a history of hysterectomy (n = 131,813). They divided the geographic areas of residence into rural areas and small towns, suburban areas and smaller metropolitan areas, and larger metropolitan areas. Results: Approximately 66.7% (95% confidence interval [CI] = 65.8% to 67.6%) of women aged 40 years or older who resided in rural areas had received a mammogram in the past 2 years, compared with 75.4% of women living in larger metropolitan areas (95% CI = 74.9% to 75.9%). About 73.0% (95% CI = 72.2% to 73.9%) of women aged 40 years or older who resided in rural areas had received a clinical breast examination in the past 2 years, compared with 78.2% of women living in larger metropolitan areas (95% CI = 77.8% to 78.7%). About 81.3% (95% CI = 80.6% to 82.0%) of 131,813 rural women aged 18 years or older who had not undergone a hysterectomy had received a Pap test in the past 3 years, compared with 84.5% of women living in larger metropolitan areas (95% CI = 84.1% to 84.9%). The differences in screening across rural and nonrural areas persisted in multivariate analysis (P < 0.001).
Conclusions: These results underscore the need for continued efforts to provide breast and cervical carcinoma screening to women living in rural areas of the United States.

abstract_id: PUBMED:38110243 Breast cancer screening motivation and behaviours of women aged over 75 years. Objective: In Australia, breast screening is offered free every two years to women aged 50-74 years. Women aged ≥75 are eligible to receive a free mammogram but do not receive an invitation. This study aimed to explore the motivations and behaviours of women living in Australia aged ≥75 years regarding ongoing breast cancer screening given the public health guidance. Methods: Sixty women aged ≥75 were recruited from metropolitan, regional, and rural areas across Australia to participate in a descriptive qualitative study. Semi-structured interviews were used to seek reflection on women's experience of screening, any advice they had received about screening beyond 75, their understanding of the value of screening and their intention to participate in the future. Thematic analysis of transcripts led to the development of themes. Results: Themes resulting from the study included: reasons to continue and discontinue screening, importance of inclusivity in the health system and availability of information. Regular screeners overwhelmingly wished to continue screening and had strong beliefs in the benefits of screening. Women received limited information about the benefits or harms of screening beyond age 75 and very few had discussed screening with their Primary Healthcare Provider. No longer receiving an invitation to attend screening impacted many women's decision-making. Conclusion: More information via structured discussion with health professionals is required to inform women about the risks and benefits of ongoing screening. No longer being invited to attend screening left many women feeling confused and for some this led to feelings of discrimination.

abstract_id: PUBMED:11217186 Importance of fatalism in understanding mammography screening in rural elderly women. This study examined the relationship between demographic factors and other correlates of fatalism, and assessed the impact of fatalistic beliefs on the participation in breast cancer screening in rural women. The subjects were 220 women aged 50 and over recruited from 6 large rural counties in South Carolina. Data were collected using a demographic questionnaire and the revised Powe Fatalism Inventory. Results show significant associations between fatalism and increased age (p = 0.005), race (p = 0.0001), doctor recommendation (p = .0034) and decreased educational level (p = 0.001). Fatalism was associated with noncompliance with mammography screening in univariate analysis among African-American women (OR = .362; 95% CI: 1.11, 11.8). After adjusting for possible confounders (age, education, and doctor recommendation), fatalism was not significantly associated with noncompliance with screening. These results illustrate that age, race, and education may be important predictors of fatalism and that fatalism may be one barrier that has previously gone unmeasured and unchallenged in understanding screening behavior in older women.

abstract_id: PUBMED:25636359 Breast cancer screening in thirteen French departments. Background: In France, the organized breast cancer screening programme, free of charge for women aged 50-74 years, coexists with opportunistic screening, which leads to reduced attendance in the organized programme.
Here, we report participation in organized and/or opportunistic screening in thirteen French departments. Population And Methods: We analyzed screening data (organized and/or opportunistic) of 622,382 women aged 51-74 years invited to attend an organized mammography screening session from 2010 to 2011 in the thirteen French departments. The type of mammography screening performed was reported according to the women's age, their health insurance scheme, and the rurality and socioeconomic level of their area of residence. We also represented the tertiles of deprivation and participation in mammography screening for each department. Results: A total of 390,831 (62.8%) women underwent mammography screening (organized and/or opportunistic) after the invitation. These women were mainly aged 55-69 years, insured by the general insurance scheme and lived in urban, semi-urban or affluent areas. Conclusion: The participation in mammography screening (organized and opportunistic) in France remains below the target rate of 70% expected by health authorities to reduce breast cancer mortality through screening.

Answer: Yes, Flemish women in semi-rural areas can be motivated to attend organized breast cancer screening, but it requires targeted strategies to address specific barriers and preferences. In the pilot project conducted in the semi-rural district of Kontich, Flanders, the attendance rate was low at 34%, which was attributed to the existence of non-organized screening options (PUBMED:10367299). To increase attendance rates, the study suggested interventions such as making personal invitation letters more attractive, emphasizing the motivational role of general practitioners (GPs) in persuading women to attend, and offering mammographic examinations outside business hours (PUBMED:10367299). Moreover, studies from other regions have identified additional factors that can influence attendance. For instance, in rural China, women who received screening notifications were more likely to attend organized screenings, and being notified by community leaders was particularly effective (PUBMED:35886089). In the United States, the Veterans Health Administration's initiative to expand access to preventive care for rural women Veterans through care coordinators showed increased screening rates for cervical and breast cancer (PUBMED:38282637). Similarly, a study testing the effectiveness of a tailored DVD and patient navigation intervention in rural Indiana and Ohio found that these interventions significantly improved the uptake of mammography (PUBMED:36139515). However, it is important to note that women's decisions to attend screenings are influenced by various factors, including socioeconomic status, access to information, and personal beliefs. For example, women from less deprived areas and those with a current prescription for hormone replacement therapy were more likely to attend screenings (PUBMED:11879559). Additionally, the absence of an invitation to attend screening impacted many women's decision-making, with some feeling confused or discriminated against (PUBMED:38110243). Fatalistic beliefs have also been associated with noncompliance with mammography screening among African-American women, although this association was not significant after adjusting for confounders (PUBMED:11217186).
In conclusion, while there are challenges to motivating Flemish women in semi-rural areas to attend organized breast cancer screening, evidence from various studies suggests that tailored interventions addressing specific barriers and leveraging local community structures can improve attendance rates.
Instruction: Cephalhematoma and caput succedaneum: do they always occur in labor? Abstracts:

abstract_id: PUBMED:29712501 The feasibility and accuracy of ultrasound assessment in the labor room. Objective: Vaginal examination is widely used to assess the progress of labor; however, it is subjective and poorly reproducible. We aim to assess the feasibility and accuracy of transabdominal and transperineal ultrasound compared to vaginal examination in the assessment of labor and its progress. Methods: Women were recruited as they presented for assessment of labor to a tertiary inner city maternity service. Paired vaginal and ultrasound assessments were performed in 192 women at 24-42 weeks. Fetal head position was assessed by transabdominal ultrasound defined in relation to the occiput position transformed to a 12-hour clock face; fetal head station defined as head-perineum distance by transperineal ultrasound; cervical dilatation by anterior to posterior cervical rim measurement and caput succedaneum by skin-skull distance on transperineal ultrasound. Results: Fetal head position was recorded in 99.7% (298/299) on US and 51.5% (154/299) on vaginal examination (p < .0001). Bland-Altman analysis showed 95% limits of agreement, -5.31 to 4.84 clock hours. Head station was recorded in 96.3% (308/320) on vaginal examination (VE) and 95.9% (307/320) on US (p = .79). Head station and head perineum distance were negatively correlated (Spearman's r = -.57, p < .0001). 54.4% (178/327) of cervical dilatation measurements were determined using US and 100% on VE/speculum (p < .0001). Bland-Altman analysis showed 95% limits of agreement -2.51 to 2.16 cm. The presence of caput could be assessed in 98.4% (315/320) on US and was commented on in 95.3% (305/320) of VEs, with agreement for the presence of caput of 76% (p < .05). Fetuses with caput greater than 10 mm had significantly lower head station (p < .0001). Conclusions: We describe comprehensive ultrasound assessments in the labor room that could be translated to the assessment of women in labor. Fetal head position is unreliably determined by vaginal examination and agrees poorly with US. Head perineum distance has a moderate correlation with fetal head station in relation to the ischial spines based on vaginal examination. Cervical dilatation is not reliably assessed by ultrasound except at dilatations of less than 4 cm. Caput is readily quantifiable by ultrasound and its presence is associated with lower fetal head station. Transabdominal and transperineal ultrasound is feasible in the labor room with an accuracy that is generally greater than that of vaginal examination.

abstract_id: PUBMED:9790368 Cephalhematoma and caput succedaneum: do they always occur in labor? Objective: Our purpose was to analyze our experience with cephalhematomas detected prenatally by ultrasonography. Study Design: Seven cases of cephalhematomas were identified prenatally among 16,292 fetuses having comprehensive ultrasonographic examinations between 1993 and 1996. The course of pregnancy and the neonatal outcome were reviewed in each case. Results: Cephalhematomas appeared as an echogenic bulge posterior to the occipital region (5 cases) or at the temporal region of the fetal head (2 cases). Conclusion: Cephalhematomas, which are believed to be a result of operative delivery, can also originate in utero, antepartum. Premature rupture of membranes appears to be an associated factor.
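The "95% limits of agreement" reported above for fetal head position and cervical dilatation come from Bland-Altman analysis: the mean of the paired differences plus or minus 1.96 times their standard deviation. The sketch below uses made-up paired measurements purely to show the computation; it is not a re-analysis of the study data.

```python
# Bland-Altman limits of agreement for paired ultrasound vs vaginal-examination values.
# The numbers are synthetic and only illustrate the method used in PUBMED:29712501.
import numpy as np

us = np.array([3.0, 4.5, 5.0, 6.5, 7.0, 8.5, 9.0])   # e.g. cervical dilatation by ultrasound (cm)
ve = np.array([3.5, 4.0, 5.5, 6.0, 7.5, 8.0, 9.5])   # paired vaginal-examination values (cm)

diff = us - ve
bias = diff.mean()
sd = diff.std(ddof=1)
print(f"bias = {bias:.2f} cm; 95% limits of agreement = "
      f"{bias - 1.96 * sd:.2f} to {bias + 1.96 * sd:.2f} cm")
```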
abstract_id: PUBMED:3932636 The influence of the progress of labor on the reliability of the transcutaneous PCO2 of the fetus. Transcutaneous PCO2 measurements were performed on 105 fetuses during labor. A modified Severinghaus electrode was calibrated with 5% and 10% carbon-dioxide gas at 33 and 66 torr. This corresponds to a drop in the PCO2 levels measured transcutaneously of about 13% and to an adjustment to the blood gas level. The levels measured transcutaneously were compared with data compiled from the fetal blood analysis and values of blood gas analysis from the umbilical artery immediately after delivery. The object of the study was to find out to what extent the progress of labor influences the conformity between the PCO2 levels measured transcutaneously and those measured in blood. Comparing the data of the transcutaneous measurement (tcPCO2) with the PCO2 of the peripheral blood (pbPCO2) in cases without a caput succedaneum, we found a correlation coefficient of r = 0.79 and a slope of 1.1. On the other hand, with the development of a caput succedaneum, the correlation coefficient was lowered to r = 0.72 and the slope to 0.85. An influence of the propulsion of the fetal head in the birth canal on the accuracy of the transcutaneous measurement was also obvious. When the position of the fetal head was either above or in the interspinal plane, the correlation coefficient amounted to r = 0.85. With the progression below the interspinal plane, the correlation coefficient was clearly lowered. While our results show a good overall conformity between PCO2 levels measured transcutaneously and those from peripheral blood, our analysis also shows to what extent the conformity can be influenced both by the existence of a caput succedaneum and by the propulsion of the presenting part.

abstract_id: PUBMED:15925438 Fetal head position during the second stage of labor: comparison of digital vaginal examination and transabdominal ultrasonographic examination. Objective: To study the correlation between digital vaginal and transabdominal ultrasonographic examination of the fetal head position during the second stage of labor. Methods: Patients (n = 110) carrying a singleton fetus in a vertex position were included. Every patient had ruptured membranes and a fully dilated cervix. Transvaginal examination was randomly performed either by a senior resident or an attending consultant. Immediately afterwards, transabdominal ultrasonography was performed by the same sonographer (OD). Both examiners were blind to each other's results. Sample size was determined by power analysis. Confidence intervals around observed rates were compared using chi-square analysis and Cohen's Kappa test. Logistic regression analysis was performed. Results: In 70% of cases, both clinical and ultrasound examinations indicated the same position of the fetal head (95% confidence interval, 66-78). Agreement between the two methods reached 80% (95% CI, 71.3-87) when allowing a difference of up to 45 degrees in the head rotation. Logistic regression analysis revealed that gestational age, parity, birth weight, pelvic station and examiner's experience did not significantly affect the accuracy of the examination. Caput succedaneum tended to diminish (p = 0.09) the accuracy of clinical examination. The type of fetal head position significantly affected the results. Occiput posterior and transverse head locations were associated with a significantly higher rate of clinical error (p = 0.001).
Conclusion: In 20% of the cases, ultrasonographic and clinical results differed significantly (i.e., >45 degrees). This rate reached 50% for occiput posterior and transverse locations. Transabdominal ultrasonography is a simple, quick and efficient way of increasing the accuracy of the assessment of fetal head position during the second stage of labor. abstract_id: PUBMED:23865738 Occiput posterior position diagnosis: vaginal examination or intrapartum sonography? A clinical review. The occiput posterior (OP) position is one of the most frequent malpositions during labor. During the first stage of labor, the fetal head may stay in the OP position in 30% of the cases, but of these only 5-7% remain as such at the time of delivery. The diagnosis of OP position in the second stage of labor is made difficult by the presence of the caput succedaneum or scalp hair, both of which may give some problems in the identification of fetal head sutures and fontanels and their location in relationship to maternal pelvic landmarks. Diagnosis of a fetus in the OP position by digital examination has been extremely inaccurate, whereas an ultrasound approach, transabdominal, transperineal and transvaginal, has clearly shown its superior diagnostic accuracy. This is true not only for diagnosis of malpositions, detected in both first and second stage of labor, but also in cases of marked asynclitism. abstract_id: PUBMED:37164504 The sonopartogram. The assessment of labor progress from digital vaginal examination has remained largely unchanged for at least a century, despite the current major advances in maternal and perinatal care. Although inconsistently reproducible, the findings from digital vaginal examination are customarily plotted manually on a partogram, which is composed of a graphical representation of labor, together with maternal and fetal observations. The partogram has been developed to aid recognition of failure of labor to progress and guide management-specific obstetrical intervention. In the last decade, the use of ultrasound in the delivery room has increased with the advent of more powerful, portable ultrasound machines that have become more readily available for use. Although ultrasound in intrapartum practice is predominantly used for acute management, an ultrasound-based partogram, a sonopartogram, might represent an objective tool for the graphical representation of labor. Demonstrating greater accuracy for fetal head position and more objectivity in the assessment of fetal head station, it could be considered complementary to traditional clinical assessment. The development of the sonopartogram concept would require further undertaking of serial measurements. Advocates of ultrasound will concede that its use has yet to demonstrate a difference in obstetrical and neonatal morbidity in the context of the management of labor and delivery. Taking a step beyond the descriptive graphical representation of labor progress is the question of whether a specific combination of clinical and demographic parameters might be used to inform knowledge of labor outcomes. Intrapartum cesarean deliveries and deliveries assisted by forceps and vacuum are all associated with a heightened risk of maternal and perinatal adverse outcomes. Although these outcomes cannot be precisely predicted, many known risk factors exist.
Malposition and high station of the fetal head, short maternal stature, and other factors, such as caput succedaneum, are all implicated in operative delivery; however, the contribution of individual parameters based on clinical and ultrasound assessments has not been quantified. Individualized risk prediction models, including maternal characteristics and ultrasound findings, are increasingly used in women's health-for example, in preeclampsia or trisomy screening. Similarly, intrapartum cesarean delivery models have been developed with good prognostic ability in specifically selected populations. For intrapartum ultrasound to be of prognostic value, robust, externally validated prediction models for labor outcome would inform delivery management and allow shared decision-making with parents. abstract_id: PUBMED:18696276 Sagittal suture overlap in cephalopelvic disproportion: blinded and non-participant assessment. Objective: To determine the role of assessment of overlap of fetal skull bones (molding) in intrapartum prediction of cephalopelvic disproportion (CPD). Design: Prospective cross-sectional study. Setting: South African high-risk obstetric unit that receives referrals from other facilities. Population: Women of at least 37 weeks' gestation in the active phase of labor, with singleton vertex presentations and live fetuses, and without previous cesarean sections. Method: The researcher was blinded to parity and previous clinical information on the women, and not involved in their obstetric care. The researcher performed clinical assessments, including estimation of level of head, cervical dilatation, head flexion, position, overlap of fetal skull bones, caput succedaneum and asynclitism. A single assessment was done on each woman. Main Outcome Measure: CPD, defined as cesarean section for poor progress in labor. Results: The author examined 504 women, and CPD occurred in 113 (22.4%). In multivariate logistic regression analysis, sagittal suture overlap was independently associated with CPD. Other factors associated were maternal height, duration of labor, birth weight, and the interaction between caput succedaneum and cervical dilatation at the time of examination. Lambdoid suture overlap was not significantly associated with CPD, and could be determined in only 66.5% of examinations because of frequent head deflexion. Conclusion: Assessment of sagittal suture overlap, but not lambdoid suture overlap, is useful for prediction of CPD. Knowledge of sagittal suture overlap may assist in decisions on clinical management where there is poor progress in a trial of labor. abstract_id: PUBMED:26008180 A model to predict vaginal delivery in nulliparous women based on maternal characteristics and intrapartum ultrasound. Objective: Accurate prediction of whether a nulliparous woman will have a vaginal delivery would be a major advance in obstetrics. The objective of the study was to develop such a model based on maternal characteristics and the results of intrapartum ultrasound. Study Design: One hundred twenty-two nulliparous women in the first stage of labor were included in a prospective observational 2-centre study. Labor was classified as prolonged according to the respective countries' national guidelines. Fetal head position was assessed with transabdominal ultrasound and cervical dilatation by digital examination, and transperineal ultrasound was used to determine head-perineum distance and the presence of caput succedaneum. 
The subjects were divided into a testing set (n = 61) and a validation set (n = 61), and a risk score was derived using multivariable logistic regression with vaginal birth as the outcome, which was dichotomized into no/cesarean delivery and yes/vaginal birth. Covariates included head-perineum distance, caput succedaneum, and occiput posterior position, which were dichotomized, respectively, as head-perineum distance ≤40 mm vs >40 mm, caput succedaneum <10 mm vs ≥10 mm, and occiput posterior position no vs yes. Maternal age, gestational age, and maternal body mass index were included as continuous covariates. Results: The dichotomized score was significantly associated with vaginal delivery (P = .03). Women with a score above the median had greater than 10 times the odds of having a vaginal delivery as compared with those with a score below the median. The receiver-operating characteristic curve showed an area under the curve of 0.853 (95% confidence interval, 0.678-1.000). Conclusion: A risk score based on maternal characteristics and intrapartum findings can predict vaginal delivery in nulliparous women in the first stage of labor. abstract_id: PUBMED:608921 Cerebral symptoms in the term newborn infant. Results of a prospective survey. 1785 newborns of 37 weeks GA or more were studied during a 15-month period at the Port-Royal Maternity Hospital. This study suggests that cerebral abnormalities during the neonatal period in fullterm neonates are related to deleterious intra partum factors. In 57 newborns, clear cut signs of cerebral birth injury were observed; in 31 newborns only mild and transitory signs were observed. These 88 newborns were compared with 1655 having had a normal neurological examination within the first week of life. We focused particularly on dysfunctional labor patterns, specifically false labor, protracted active phase dilatation, protracted descent or a marked caput succedaneum. When these abnormal patterns are followed by oxytocin infusion and forceps extraction, primiparity appears linked with a high risk of cerebral birth injury. Within cephalic presentations, the occipito-posterior position carries the same high risk. The most severe degree of cerebral birth injury has nearly disappeared. However, the main problem of modern obstetrics is one of eradicating brain damage of moderate degree without reaching an excessive incidence of C. section. abstract_id: PUBMED:36553005 Asynclitism and Its Ultrasonographic Rediscovery in Labor Room to Date: A Systematic Review. Asynclitism, the most feared malposition of the fetal head during labor, still represents to date an unresolved field of interest, remaining one of the most common causes of prolonged or obstructed labor, dystocia, assisted delivery, and cesarean section. Traditionally asynclitism is diagnosed by vaginal examination, which is, however, burdened by a high grade of bias. On the contrary, the recent scientific evidence highly suggests the use of intrapartum ultrasonography, which would be more accurate and reliable when compared to the vaginal examination for malposition assessment. The early detection and characterization of asynclitism by intrapartum ultrasound would become a valid tool for intrapartum evaluation. In this way, it will be possible for physicians to opt for the safest way of delivery according to an accurate definition of the fetal head position and station, avoiding unnecessary operative procedures and medication while improving fetal and maternal outcomes.
This review re-evaluated the literature of the last 30 years on asynclitism, focusing on the progressive imposition of ultrasound as an intrapartum diagnostic tool. All the evidence emerging from the literature is presented and evaluated from our point of view, describing the most employed technique and considering the future implication of the progressive worldwide consolidation of asynclitism and ultrasound. Answer: Cephalhematomas and caput succedaneum are often associated with labor, particularly with operative deliveries or those involving difficult or prolonged labor. However, they do not always occur in labor. A study analyzing cephalhematomas detected prenatally by ultrasonography found that cephalhematomas can originate antepartum, in utero, and are not exclusively a result of operative delivery. Premature rupture of membranes was noted as an associated factor in these cases (PUBMED:9790368). This suggests that while these conditions are commonly seen during or after labor, they can also develop before labor begins.
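Both labor-room studies above summarize agreement between ultrasound and vaginal examination as Bland-Altman 95% limits of agreement. The following is a minimal sketch of how such limits are computed from the mean and standard deviation of paired differences; the helper function and the paired dilatation values are illustrative and are not data from the cited studies.
import statistics

def bland_altman_limits(method_a, method_b):
    # Differences between the two methods for each paired measurement
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.mean(diffs)    # systematic offset between the methods
    sd = statistics.stdev(diffs)     # spread of the paired differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired cervical dilatation readings in cm (ultrasound, vaginal exam)
us = [3.0, 4.5, 2.0, 5.5, 3.5, 6.0]
ve = [3.5, 4.0, 2.5, 6.0, 3.0, 6.5]
bias, lower, upper = bland_altman_limits(us, ve)
print(f"bias = {bias:.2f} cm; 95% limits of agreement: {lower:.2f} to {upper:.2f} cm")
Narrow limits indicate that the two methods can be used interchangeably; the wide limits reported for cervical dilatation in PUBMED:29712501 are what drives the conclusion that ultrasound does not reliably replace vaginal examination for that measurement.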
Instruction: Serial measurements of serum transaminases in renal transplant recipients with chronic hepatitis C: do they reflect disease severity? Abstracts: abstract_id: PUBMED:11127304 Serial measurements of serum transaminases in renal transplant recipients with chronic hepatitis C: do they reflect disease severity? Unlabelled: Chronic hepatitis C infection is a common problem in renal allograft recipients; this study was designed to investigate the long-term association of serum aminotransferase levels with liver histology in renal transplant patients with chronic hepatitis C virus (HCV) infection. Methods: In this study, 82 HCV-infected renal allograft recipients, who were followed up with functioning grafts for at least 6 months, were analyzed. Patients were classified according to their transaminase values as having persistently normal, intermittently abnormal, or continuously abnormal liver function tests. Serum transaminase levels exceeding at least 1.5 times the upper limit of normal (40 IU) for periods longer than 1 month were taken as abnormal. Patients with abnormal liver function tests owing to HCV unrelated causes (drugs, alcohol, or other toxic substances, other viruses, etc.) were excluded from the study. Forty-eight of these patients underwent at least one liver biopsy. Results: Of the 82 patients, 34 (41.5%) had persistently normal (liver biopsy revealed normal or minimal changes in 77.0%, chronic persistent hepatitis in 15.3%, chronic active hepatitis in 7.7%; no patient had cirrhosis), 29 (35.3%) intermittently abnormal (liver histology was consistent with minimal changes in 50%, chronic persistent hepatitis in 27.8%, chronic active hepatitis in 16.7%, cirrhosis in 5.5%), and 19 (23.2%) persistently abnormal (liver biopsy showed minimal changes in 41.1%, chronic persistent hepatitis in 17.6%, chronic active hepatitis in 35.3%, cirrhosis in 5.9%) transaminase values. Conclusion: Although continuously or intermittently elevated transaminases do not always indicate morphologically advanced disease, the normal course of serum transaminases is mostly accompanied by normal, or near-normal, liver histology in HCV-infected renal transplant patients. Liver biopsy is not indicated in deciding disease severity in these patients unless clinical findings dictate otherwise. abstract_id: PUBMED:21176747 Impact of immunosuppressive therapy on hepatitis C infection after renal transplantation. Background: Among patients after renal transplantation (NTx), hepatitis C virus (HCV) infection is a risk factor for graft loss and patient death caused by hepatic decompensation. Also, HCV has been implicated in the pathogenesis of glomerular diseases in native and transplanted kidneys. Therefore, the aim of this retrospective cohort study was to determine the effects of the widely used calcineurin inhibitors (CNI) cyclosporine A (CsA) and tacrolimus (Tac) on hepatitis C virus replication, inflammatory activity, development of liver fibrosis, and long-term renal graft function. Subjects And Methods: A cohort of 71 patients with HCV infection after kidney transplantation under immunosuppression with either CsA or Tac were analyzed for viral kinetics and serum transaminases. In addition, presence of liver fibrosis was detected by non-invasive measurements using the FibroScan. Graft function was determined biochemically. Patients with interferon therapy prior to transplantation were excluded from the study in order to avoid any impact of the antiviral therapy on outcomes.
Results: In the early period after transplantation, hepatitis C viral load was lower in patients treated with Tac as compared to CsA. This effect became negligible 3 months after transplantation. However, hepatic inflammatory activity was reduced in the CsA-treated group. Extent of liver fibrosis was similar in both groups of HCV-infected patients as well as in a control group of non-HCV-infected patients after renal transplantation (NTx). Renal function and glomerular filtration rate, as calculated by the modification of diet in renal disease (MDRD) formula, were significantly better in patients treated with Tac. Conclusions: During long-term immunosuppression, the CNIs cyclosporine A versus tacrolimus showed no significant differences in HCV-infected patients after renal transplantation with respect to viral replication and development of liver fibrosis. However, function of the renal graft is significantly better preserved in patients receiving tacrolimus. abstract_id: PUBMED:12832744 Sustained response with negative serum HCV-mRNA and disappearance of antibodies after interferon-alpha therapy in a kidney transplant recipient with chronic active viral hepatitis C. Background: The use of interferon-alpha (IFN-alpha) to treat viral hepatitis C (HCV) occurring in kidney transplant recipients is controversial. This study reports an HCV patient successfully treated with IFN-alpha therapy achieving sustained response, negative serum HCV-mRNA and the disappearance of HCV antibodies, without impairment of renal function. Method: A young kidney transplant recipient developed a proven HCV infection 70 months post-transplantation. The patient received IFN-alpha therapy, and for a 32-month follow-up period was evaluated clinically, serologically and virologically. Results: IFN-alpha therapy resulted in normal transaminase activities within 2 months. Serum HCV-mRNA was negative after 4 weeks of treatment and is still negative. Ten months after IFN-alpha therapy withdrawal, the enzyme immunoassay revealed that HCV antibodies (HCVAb) were absent in the serum. IFN-alpha therapy was safe, well tolerated and renal function was not impaired. abstract_id: PUBMED:8943974 Correlation between serum HCV RNA and aminotransferase levels in patients with chronic HCV infection. Cross-sectional studies on the correlation between serum hepatitis C virus (HCV) RNA and alanine aminotransferase (ALT) levels in patients with chronic hepatitis C have yielded conflicting results. We conducted a longitudinal study to examine the correlation between HCV viremia and serum ALT levels in individual patients over time. Serial samples (mean 9) from 25 patients with chronic HCV infection, including interferon-treated and untreated immunocompetent and immunosuppressed patients, collected over a period of 1-4.8 years (mean 2.6 years) were tested for HCV RNA and ALT levels using a highly reproducible quantitative (bDNA) assay. A significant correlation was found between serum HCV RNA and ALT levels in the patients who received IFN therapy, but no correlation was observed in the untreated patients. Among the untreated patients, the immunosuppressed patients had significantly higher HCV RNA levels (39 +/- 4 vs 3.6 +/- 8 Meq/ml, P < 0.0001) but significantly lower ALT (56 +/- 11 vs 97 +/- 12 units/liter, P = 0.03) levels when compared to the immunocompetent ones. In summary, we found no correlation between serum HCV RNA and ALT levels in chronic hepatitis C patients who are not receiving interferon therapy.
Immunosuppression results in higher HCV RNA but lower ALT levels. abstract_id: PUBMED:23816714 Outcomes following renal transplantation in patients with chronic hepatitis C based on severity of fibrosis on pre-transplant liver biopsy. Data regarding long-term outcomes following renal transplantation in patients with hepatitis C virus (HCV) infection have been controversial. Our aim was to determine whether there is a difference in outcomes between patients with HCV and more advanced fibrosis on pretransplant biopsy and those with minimal or no fibrosis. Patients were divided according to the severity of fibrosis and their outcomes (including acute rejection, chronic rejection, re-initiation of dialysis, progression of liver disease and mortality) were compared. Thirty-one patients with minimal or no fibrosis (Scheuer stages 0 and 1: Group-A) and 10 patients with more advanced fibrosis (Scheuer stages 2 and 3: Group-B) were included in the final data analysis. Acute rejection occurred in 29% (9/31) of the patients with minimal and 30% (3/10) of the patients with advanced fibrosis (P = 0.95), while chronic allograft nephropathy occurred in 6.5% (2/31) of the patients without and 50% (5/10) of the patients with fibrosis (P = 0.006). None of the patients without fibrosis required re-initiation of dialysis compared with 50% (5/10) of the patients with fibrosis (P < 0.05). Median graft survival was 46 months and 18 months for patients with minimal and advanced fibrosis, respectively. There were four deaths among patients with advanced and three deaths among patients with minimal fibrosis (P = 0.04). Our data suggests that patients with chronic HCV and more advanced fibrosis on liver biopsy who undergo a renal transplant have a higher incidence of chronic rejection, graft failure and mortality following renal transplant compared with those with minimal fibrosis. abstract_id: PUBMED:10803633 Which patients with hepatitis C virus should be treated? Since the National Institutes of Health (NIH) Consensus Conference in 1997, our understanding of the natural history of hepatitis C (HCV) infection and our ability to treat patients has improved. Thus, a large number of clinical studies, confounding terminology, and a growing dilemma in targeting particular populations for treatment who have HCV infection, will continue to be at the forefront of clinical research and treatment. In this report, we examine which HCV-infected populations of patients should be treated. Beginning with treatment guidelines from the NIH Consensus Conference, and a brief overview of the terminology used in the HCV literature, we subsequently review data regarding treatment outcomes based on HCV viral load, genotype, and various epidemiological factors. Similarly, more challenging treatment strategies are discussed for patients with HCV infection, including those with ongoing psychiatric disorders, patients who are coinfected with the human immunodeficiency virus and HCV, and those patients with normal serum transaminases. Finally, a review and guidelines about other HCV treatment dilemmas, including patients with chronic renal failure on hemodialysis, patients who have undergone renal transplantation, and treatment of patients acutely exposed to HCV are also addressed. abstract_id: PUBMED:12830472 Evaluation of hepatitis B and hepatitis C virus-infected renal allograft recipients with liver biopsy and noninvasive parameters.
Background: Patients with end-stage renal disease are at high risk for hepatitis B (HBV) or hepatitis C virus (HCV) infection. Because therapy indication for viral hepatitis depends on virologic, biochemical, and histologic criteria, liver biopsy usually is necessary. Recently, a panel of serum fibrosis markers has been postulated to allow quantification of liver fibrosis by noninvasive means. Methods: A cross-sectional study of all hepatitis B surface antigen (HBsAg)- and anti-HCV-positive renal allograft recipients among 900 renal allograft recipients regularly controlled in the authors' outpatient nephrology service was performed. The correlation between histologic, biochemical, and virologic parameters was assessed with an emphasis on the fibrosis marker hyaluronate in this immunosuppressed population. Results: Twenty-two HBsAg- and 62 anti-HCV-positive patients were analyzed. Based on polymerase chain reaction results, 86% of anti-HCV-positive and 95% of HBsAg-positive patients had actively replicating infection. In 41 of 67 (61%) patients with replicating disease, liver biopsy was performed, and the association of various biochemical parameters with the histologic scores for necroinflammation and fibrosis was investigated. Less than 10% of these patients had advanced fibrosis, although the mean time of infection was more than 15 years. We found no correlation of any of the serum parameters (including hyaluronate) with histologic activity of liver disease except for the peak glutamate-oxalacetate transaminase value recorded during the entire posttransplant period. Conclusion: Liver biopsy remains the gold standard for evaluation of liver disease and therapy decision in immunosuppressed renal allograft recipients. abstract_id: PUBMED:9395376 High rate of hepatitis C virus clearance in hemodialysis patients after interferon-alpha therapy. To gain insight into the long-term effect of interferon-alpha (IFN-alpha) therapy on hepatitis C virus (HCV) RNA-positive hemodialysis patients, 23 subjects were given 3 MU of IFN-alpha 3 times a week for 6 (n = 12) or 12 months (n = 11). They were followed for 19 months after cessation of therapy. Sustained serum HCV RNA clearance occurred in 42% of patients treated for 6 months and in 64% of those treated for 12 months. HCV was eradicated from 6 of 13 patients infected with HCV genotype 1b and from 2 of 6 patients also infected with hepatitis G virus. HCV RNA remained undetectable in both serum and a liver biopsy of 2 patients who were given cadaveric kidney transplants after IFN-alpha treatment. These data suggest that HCV RNA-positive dialysis patients can be considered for treatment while receiving dialysis, particularly those awaiting transplant. abstract_id: PUBMED:28370058 Hepatitis C viral load, genotype, and increased risk of developing end-stage renal disease: REVEAL-HCV study. The association between hepatitis C virus (HCV) infection and end-stage renal disease (ESRD) remains controversial without considering the role of HCV viral load and genotype. This study aimed to determine whether HCV RNA level and genotype affect the risk of developing ESRD. Between 1991 and 1992, 19,984 participants aged 30-65 years were enrolled in a community-based prospective cohort study in Taiwan. Chronic HCV infection was defined by detectable HCV viral load. ESRD was determined as the need for chronic dialysis or renal transplantation. Conventional Cox proportional hazard and competing risk models were used to determine the hazard ratio (HR) for ESRD. 
After a median follow-up of 16.8 years, 204 cases were detected during 319,474 person-years. The incidence rates of ESRD for nonchronically HCV-infected and chronically HCV-infected patients were 60.2 and 194.3 per 100,000 person-years, respectively. The multivariable HR was 2.33 (95% confidence interval [CI] 1.40-3.89) when comparing patients with and without chronic HCV infection. Patients with low and high HCV RNA levels were at higher risk of ESRD than those who were nonchronically HCV-infected (HR, 2.11, 95% CI 1.16-3.86, and HR, 3.06, 95% CI 1.23-7.58; Ptrend < 0.001). This association remained robust after taking pre-ESRD death as a competing event for ESRD. Patients with HCV genotype 1 tended to have a higher risk of developing ESRD (HR, 3.60, 95% CI 1.83-7.07) compared with nonchronically HCV-infected subjects. Conclusions: This study reveals that chronic HCV infection is associated with an increased risk of developing ESRD and suggests that elevated serum levels of HCV RNA (>167,000 IU/mL) and HCV genotype 1 are strong predictors of ESRD, indicating clinical implications for the management of chronic HCV. (Hepatology 2017;66:784-793). abstract_id: PUBMED:31760038 Treatment of chronic hepatitis C in patients with chronic kidney disease with Sofosbuvir-based regimens. Background: To analyze the effectiveness and the safety of Sofosbuvir-based regimens to treat patients with chronic hepatitis C virus (HCV) infection and chronic kidney disease (CKD). Methods: A retrospective, observational study in patients with chronic HCV infection and CKD treated with Sofosbuvir-based regimens was performed. Liver fibrosis, comorbidities, HCV genotype and sustained virological response (SVR) at 12th week post-treatment were evaluated. Kidney function was assessed by serum creatinine and glomerular filtration rate (GFR). The assumed level of significance was 5 %. Results: Thirty-five patients were treated. The mean age was 52.1±10.9 years, 19 (54.3 %) were women, 32 (91.4 %) were already kidney transplanted and 3 (8.6 %) were on hemodialysis. The SVR by intention to treat was 88.6 %. The mean GFR was 65.8±28.6 and 63.7±28.3 ml/min pre- and post-treatment respectively (p > 0.05). Treatment was interrupted in 1 (2.85 %) patient due to anemia and in 2 (5.7 %) due to loss of kidney function. Conclusion: Sofosbuvir-based regimens are effective to treat HCV in patients with CKD. In patients with mild CKD this type of therapy seems to be safe. Answer: Serial measurements of serum transaminases in renal transplant recipients with chronic hepatitis C do not always reflect disease severity. A study involving 82 HCV-infected renal allograft recipients found that although continuously or intermittently elevated transaminases do not always indicate morphologically advanced disease, the normal course of serum transaminases is mostly accompanied by normal, or near-normal, liver histology in these patients. Liver biopsy is not indicated in deciding disease severity in these patients unless clinical findings dictate otherwise (PUBMED:11127304). This finding is consistent with other research indicating that serum transaminase levels may not track other markers of disease activity; for example, a longitudinal study found no correlation between serum HCV RNA and alanine aminotransferase (ALT) levels in chronic hepatitis C patients who were not receiving interferon therapy (PUBMED:8943974).
Furthermore, in the context of renal transplantation, the impact of immunosuppressive therapy on hepatitis C infection is also relevant. A study comparing the effects of calcineurin inhibitors cyclosporine A (CsA) and tacrolimus (Tac) on HCV-infected patients post-renal transplantation found no significant differences in viral replication and development of liver fibrosis between the two groups, although renal graft function was significantly better preserved in patients receiving tacrolimus (PUBMED:21176747). In summary, while serum transaminase levels can provide some information, they are not reliable indicators of disease severity in HCV-infected renal transplant patients, and liver biopsy remains the gold standard for evaluating liver disease severity in these patients (PUBMED:12830472).
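To make the central finding of PUBMED:11127304 concrete, the short sketch below simply re-tabulates the biopsy percentages quoted in that abstract; grouping chronic active hepatitis and cirrhosis together as "advanced" histology is an illustrative choice made here, not a classification used by the study authors.
# Biopsy percentages per transaminase-pattern group, as quoted in PUBMED:11127304
groups = {
    "persistently normal":     {"chronic_active_hepatitis": 7.7,  "cirrhosis": 0.0},
    "intermittently abnormal": {"chronic_active_hepatitis": 16.7, "cirrhosis": 5.5},
    "persistently abnormal":   {"chronic_active_hepatitis": 35.3, "cirrhosis": 5.9},
}
for name, hist in groups.items():
    advanced = hist["chronic_active_hepatitis"] + hist["cirrhosis"]
    print(f"{name}: ~{advanced:.1f}% with advanced histology on biopsy")
# Prints roughly 7.7%, 22.2% and 41.2%: normal transaminases track near-normal
# histology, while abnormal values only partially predict advanced disease.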
Instruction: Serum C-reactive protein in acute biliary pancreatitis. Is it a reliable marker for the early assessment of severity of the disease? Abstracts: abstract_id: PUBMED:9513832 Serum C-reactive protein in acute biliary pancreatitis. Is it a reliable marker for the early assessment of severity of the disease? Background: The cut-off point of serum C-reactive protein to differentiate the mild from the severe form of acute pancreatitis is still debated; data concerning the C-reactive protein pattern in assessing the severity of acute biliary pancreatitis are lacking. Aim: To define the best cut-off point in differentiating the severe from the mild form of acute biliary pancreatitis. Patients: Fifty patients with acute biliary pancreatitis: 34 patients with mild pancreatitis and 16 with the severe form of the disease were studied. Methods: Serum C-reactive protein concentrations were assessed in all patients upon admission and for the following 5 days. Results: No significant difference in serum C-reactive protein levels was found in the first 2 days in patients with mild pancreatitis compared to those with the severe form of the disease. Using a cut-off point of 11 mg/dl, the sensitivity of serum C-reactive protein in assessing the severity of acute pancreatitis during the first two days of the study was 9% and 57%, the specificity, 93% and 81%, and the accuracy 71% and 74%, respectively. Conclusions: Serum determination of C-reactive protein in the first 48 hours of the disease is not a reliable marker of the severity of acute biliary pancreatitis. abstract_id: PUBMED:9759598 Serum interleukin 6 in the prognosis of acute biliary pancreatitis. Background: Data concerning the interleukin 6 pattern in acute biliary pancreatitis are lacking. Aim: To define the best cut-off point of this molecule in differentiating the severe form of acute biliary pancreatitis from the mild form and to evaluate its sensitivity, specificity and diagnostic accuracy in the prognosis of acute biliary pancreatitis in comparison with those of serum C-reactive protein. Patients: Forty-four patients with acute biliary pancreatitis: 27 patients with mild pancreatitis and 17 with the severe form of the disease. Methods: Serum interleukin-6 and C-reactive protein concentrations were assessed in all patients on admission and for the following 5 days. Results: Serum interleukin-6 levels were significantly higher (p < 0.02) in patients with severe acute biliary pancreatitis than in those with the mild form of the disease. No significant difference in serum C-reactive protein levels was found in the first 2 days in patients with mild biliary pancreatitis when compared to those with the severe form of the disease. Using a cut-off point of 2.7 pg/ml for serum interleukin-6 and 11 mg/dl for serum C-reactive protein, the sensitivity of the two molecules in assessing the severity of acute pancreatitis on the first day of the study was 87.5% for interleukin-6 and 6.3% for C-reactive protein, the specificity, 83.3% for interleukin-6 and 91.7% for C-reactive protein, and the accuracy 85.0% for interleukin-6 and 57.5% for C-reactive protein. Conclusions: Serum determination of interleukin-6 in the first 24 hours of the disease is a better marker of the severity of acute biliary pancreatitis than C-reactive protein. abstract_id: PUBMED:15984987 The clinical value of procalcitonin in early assessment of acute pancreatitis.
Objectives: Early assessment of the severity and the etiology is crucial in the management of acute pancreatitis. To determine the value of procalcitonin (PCT) as a prognostic marker and as an indicator of biliary etiology in the early phase of acute pancreatitis. Methods: In a prospective study, 75 consecutive patients were included (severe pancreatitis in 12 patients, biliary etiology in 42 cases). The value of PCT as a prognostic marker was compared to C-reactive protein (CRP), hematocrit (HCT), acute physiology and chronic health evaluation (APACHE) II score, and Ranson score. The value of PCT as an indicator of biliary etiology was compared to alanine aminotransferase (ALT) and alkaline phosphatase (AP). The area under the receiver operating characteristic curve (AUC) was applied as a measure of the overall accuracy of the single markers and multiple scoring systems. Results: The most accurate prediction of severe disease was provided by the APACHE II score on the day of admission (AUC: APACHE II, 0.78; CRP, 0.73; HCT, 0.73; and PCT, 0.61), and by CRP after 48 h (AUC: CRP, 0.94; Ranson score, 0.81; PCT, 0.71; APACHE II score, 0.69; and HCT, 0.46). ALT was the most accurate indicator of biliary pancreatitis (AUC: ALT, 0.83; AP, 0.81; and PCT, 0.68). Conclusions: PCT is of limited additional value for early assessment of severity and etiology in acute pancreatitis. CRP is found to be a reliable prognostic marker with a delay of 48 h, while ALT is validated as the best indicator of biliary etiology. abstract_id: PUBMED:29525967 Serum levels of tumor necrosis factor-like weak inducer of apoptosis (TWEAK) in predicting the severity of acute pancreatitis. Introduction: Acute pancreatitis (AP) is a severe disease associated with significant morbidity and mortality. The overall outcome has improved, but specific treatment(s) remains elusive. The challenge is the early identification and treatment of patients who will develop severe acute pancreatitis. Therefore, the aim of the present study is to investigate plasma levels of tumor necrosis factor-like weak inducer of apoptosis (TWEAK) in the initial phase of predicted severe acute pancreatitis. Methods: Between June 2014 and January 2016, 64 patients with acute pancreatitis and 36 healthy individuals were included to study. Four blood samples, for serum TWEAK measurement, were taken from each individual in each group. The first measurement was taken from the admission blood sample. The subsequent three samples were taken at 12, 24, and 48 h after the hospital admission. Results: Serum TWEAK levels were significantly higher in patients with acute pancreatitis when compared with healthy controls. TWEAK plasma concentrations in severe pancreatitis patients were significantly higher than in mild pancreatitis patients. Conclusion: Serum TWEAK levels increase progressively with the severity of acute pancreatitis and TWEAK might be a novel early marker of severity in acute pancreatitis. abstract_id: PUBMED:9148369 Serum interleukin-6 in acute pancreatitis due to common bile duct stones. A reliable marker of necrosis. In a prospective clinical study we have assessed the value of serum interleukin-6 in comparison with C-reactive protein in discriminating necrotizing from oedematous acute pancreatitis due to common bile duct stones in the first hours of disease. 
The study comprised 36 patients with acute biliary pancreatitis; inclusion criteria were admission in hospital within 48 hours from the onset of symptoms, availability of contrast enhanced CT scan within 72 hours from admission and presence of common bile duct stones at early ERCP. A sample of serum was taken at hospitalization and interleukin-6 and C-reactive protein were measured. Interleukin-6 levels were significantly higher in necrotizing pancreatitis, being closely related to the extension of necrosis. C-reactive protein showed low efficacy in detecting necrotizing forms, although its levels were higher than in oedematous. We conclude that serum interleukin-6 is a very reliable marker of necrosis in the first 48 hours of acute biliary pancreatitis. abstract_id: PUBMED:24548450 Clinical significance for monitoring of serum ghrelin in acute pancreatitis severity assessment. Objective: To explore the value of altered serum level of Ghrelin for severity assessment in patients with acute pancreatitis (AP). Methods: Peripheral blood samples were collected from 47 AP patients at admission, 48 hours post-admission and at discharge. According to the criteria of APACHEII score ≥ 8, RANSON ≥ 3, CT ≥ 4, they were divided into mild (n = 17) and severe (n = 30) groups. Enzyme-linked immunosorbent assay (ELISA) was used to measure the serum level of Ghrelin. Correlation analysis was made with the APACHEII score and the level of C-reactive protein (CRP). The serum level of Ghrelin was also analyzed with a receiver operating characteristic (ROC) curve. Results: The serum levels of Ghrelin after 24 h were 358.6 ± 119.3 vs 212.1 ± 42.7 ng/L (P < 0.001); after 48 hours, 253.1 ± 71.2 vs 275.5 ± 73.6 ng/L (P = 0.572); at discharge, 327.8 ± 103.8 vs 319.4 ± 87.1 ng/L respectively (P = 0.816). The serum level of Ghrelin was positively correlated with APACHEII and CRP. The ROC area under the curve was 0.841 ± 0.057, with a 95% confidence interval of 0.729-0.952 (P < 0.001). Conclusion: The serum level of Ghrelin during early-stage AP differs significantly between the two groups, and it may become an early predictor of pancreatic necrosis and a degree marker of clinical severity. abstract_id: PUBMED:16301845 Diagnosis and predicting severity in acute pancreatitis. Acute pancreatitis is an inflammatory disease of the pancreas arising from various etiologies. The pathologic spectrum of acute pancreatitis varies from mild edematous pancreatitis to severe necrotizing pancreatitis. To diagnose and to predict severity in acute pancreatitis, various biochemical markers, imaging modalities and clinical scoring systems are needed. Ideal parameters should be accurate, be performed easily and enable earlier assessment. Unfortunately, no ideal parameter is available to date. Serum amylase and lipase are still useful for the diagnosis but meaningless in predicting severity. C-reactive protein and inflammatory cytokines are promising single parameters to predict the severity. CT is also a useful determinant of severity, but it is expensive and delayed in assessment. abstract_id: PUBMED:36590773 Role of Serum Interleukin-6 and C-reactive Protein in Early Prediction of Severe Acute Pancreatitis. Background: Early prediction of severity is an important goal in acute pancreatitis (AP), to identify the 20% of patients who are likely to have a severe course.
Such patients have an expected mortality of 15-20% and may benefit from early admission to high dependency or intensive care units, with parenteral or nasojejunal feeding and prophylactic antibiotics. In severe AP (SAP), multiorgan dysfunction accounts for most early deaths. Aims: The aim of this article is to assess the role of serum interleukin (IL)-6 and serum C-reactive protein (CRP) in early prediction of severity of AP. Materials And Methods: This observational analytical study was conducted in the Department of General Surgery and Department of Biochemistry in our hospital in 62 patients as per inclusion and exclusion criteria. Results: IL-6 on days 1 and 2, as well as CRP on day 2, was 100% sensitive, but IL-6 on days 1 and 2 had the highest specificity among them (88.37%), compared with a specificity of 81.4% for CRP on day 2. Though CRP on day 1 also had a specificity of 88.37%, its sensitivity was 89.47%. Conclusion: IL-6 and CRP together appear to be promising markers for assessing the severity of AP within 48 h. We recommend measuring IL-6 and CRP in patients with AP, as this can help in predicting the severity of the disease. abstract_id: PUBMED:12221326 Early prediction of severity in acute pancreatitis. Is this possible? One out of ten cases of acute pancreatitis develops into severe acute pancreatitis, which is a life-threatening disorder with a high mortality rate. The other nine cases are self-limiting and need very little therapy. Good clinical judgement on admission concerning the prognosis of the attack has high specificity but misses a lot of severe cases (low sensitivity). The prediction of severity in acute pancreatitis was first suggested by John HC Ranson in 1974. Much effort has been put into finding a simple scoring system or a good biochemical marker for selecting the severe cases of acute pancreatitis immediately on admission. Today C-reactive protein is the method of choice although this marker is not valid until 48-72 hours after the onset of pain. Inflammatory mediators upstream from CRP like interleukin-6 and other cytokines are likely to react faster and preliminary results for some of these mediators look promising. Another successful approach has been to study markers for the activation of trypsinogen such as TAP and CAPAP. This is based on studies showing that active trypsin is the initial motor of the inflammatory process in acute pancreatitis. In the near future a combined clinical and laboratory approach for early severity prediction will be the most reliable. Clinical judgement predicts 1/3 of the severe cases on admission and early markers for either inflammation or trypsinogen activation should accurately identify 50-60% of the mild cases among the rest, thus missing only 2-4% of the remaining severe cases. One problem is that there is no simple and fast method to analyze any of these parameters. abstract_id: PUBMED:16024009 Clinical laboratory assessment of acute pancreatitis. Background: Several biochemical markers in blood and urine have been investigated to establish their clinical application in patients with acute pancreatitis (AP). The relevant studies are reviewed and critically appraised. Methods: Medline and the World Wide Web were searched and the relevant literature was classified under the following categories: (1) diagnosis of AP and (2) prediction of: a) disease severity, b) pancreatic necrosis and its secondary infection, c) organ failure and death, and d) disease etiology.
Results And Conclusions: Serum lipase is a more reliable diagnostic marker of AP than serum amylase. Urinary strip tests for trypsinogen activation peptide (TAP) and trypsinogen-2 provide a reliable early diagnosis of AP. Useful predictors of severity may include serum procalcitonin and urinary TAP and trypsinogen-2 on admission, serum interleukins-6 and -8 and polymorphonuclear elastase at 24 h, and serum C-reactive protein (CRP) at 48 h. Other markers such as amyloid A and carboxypeptidase B activation peptide (CAPAP) need further investigation. Biochemical prediction of pancreatic necrosis requires 72 h to reach reliability and is impractical. However, the daily monitoring of serum procalcitonin provides a non-invasive detection of infected necrosis; the promising role of phospholipase A(2) in this regard requires further investigation. Early transient hypertransaminasemia reliably predicts biliary etiology, while serum carbohydrate-deficient transferrin and trypsin may predict an alcoholic etiology. Answer: Serum C-reactive protein (CRP) is not considered a reliable marker for the early assessment of the severity of acute biliary pancreatitis. Studies have shown that there is no significant difference in serum CRP levels in the first 2 days between patients with mild and severe forms of the disease (PUBMED:9513832; PUBMED:9759598). Specifically, using a cut-off point of 11 mg/dl, the sensitivity of serum CRP in assessing the severity of acute pancreatitis during the first two days was found to be very low at 9% and 57%, with an accuracy of 71% and 74%, respectively (PUBMED:9513832). In comparison, serum interleukin-6 (IL-6) was found to be a better marker, with a sensitivity of 87.5% and an accuracy of 85.0% on the first day of the study (PUBMED:9759598). Furthermore, other studies have suggested that CRP is a reliable prognostic marker but with a delay of 48 hours (PUBMED:15984987). This indicates that CRP may not be useful for immediate early assessment but could be more informative later in the course of the disease. Additionally, other markers such as serum interleukin-6, procalcitonin, and tumor necrosis factor-like weak inducer of apoptosis (TWEAK) have been studied for their potential in early severity assessment of acute pancreatitis, with some showing promising results (PUBMED:29525967; PUBMED:9148369; PUBMED:24548450; PUBMED:36590773). In conclusion, while CRP is not reliable for early assessment within the first 48 hours, it may still play a role in the prognostic evaluation of acute biliary pancreatitis after this initial period. Clinicians may need to rely on a combination of markers and clinical judgment for early severity prediction in acute pancreatitis (PUBMED:12221326; PUBMED:16024009).
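The cut-off-based figures quoted above (CRP at 11 mg/dl, IL-6 at 2.7 pg/ml) are standard 2x2-table metrics. The minimal sketch below shows how they are derived; the helper function is illustrative and the counts are hypothetical, chosen only so that the output reproduces the day-1 IL-6 figures reported in PUBMED:9759598 (87.5% sensitivity, 83.3% specificity, 85.0% accuracy), not actual patient counts from that study.
def test_metrics(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)               # severe cases correctly flagged
    specificity = tn / (tn + fp)               # mild cases correctly classified
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Hypothetical counts for a marker above/below its cut-off in severe vs mild disease
sens, spec, acc = test_metrics(tp=14, fn=2, tn=20, fp=4)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}, accuracy = {acc:.1%}")
The very low day-1 sensitivity of CRP (6.3-9%) with preserved specificity is exactly the pattern of a marker whose threshold is rarely crossed early, which is why the answer favors IL-6 for the first 24-48 hours.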
Instruction: Use of cervical collar after single-level anterior cervical fusion with plate: is it necessary? Abstracts: abstract_id: PUBMED:30593959 Utility of Cervical Collars Following Cervical Fusion Surgery. Does It Improve Fusion Rates or Outcomes? A Systematic Review. Background: The use of postoperative cervical collars following cervical fusions is common practice. Its use has been purported to improve fusion rates and outcomes. There is a paucity in the strength of evidence to support its clinical benefit. Our objective is to critically evaluate the published literature to determine the strength of evidence supporting postoperative cervical collar use following cervical fusions. Methods: A systematic review using Preferred Reporting Items for Systematic Reviews and Meta-Analyses (also known as PRISMA) was performed. An online search using Medline and Cochrane Central Register of Controlled Trials databases was used to query prospective and retrospective clinical trials evaluating cervical fusions with or without postoperative collar. Results: The search identified 894 articles in Medline and 65 articles in the Cochrane database. From these articles, 130 were selected based on procedure and collar use. Only 3 studies directly compared collar use with no collar use. Our analysis of the mean improvement in neck disability index scores and improvement over time intervals did not show a statistically significant difference between collar versus no collar (P = 0.86). Conclusions: We found no strong evidence to support the use of cervical collars after 1- and 2-level anterior cervical discectomy and fusion procedures, and no studies comparing collar use and no collar use after posterior cervical fusions. Given the cost and likely impact of collar use on driving and the return to work, our study shows that currently there is no proven benefit to routine use of postoperative cervical collar in patients undergoing 1- and 2-level anterior cervical discectomy and fusion for degenerative cervical pathologies. abstract_id: PUBMED:38454504 Comparison of outcomes after anterior cervical discectomy and fusion with and without a cervical collar: a systematic review and meta-analysis. Purpose: The clinical outcomes of patients who received a cervical collar after anterior cervical decompression and fusion were evaluated by comparison with those of patients who did not receive a cervical collar. Methods: All of the comparative studies published in the PubMed, Cochrane Library, Medline, Web of Science, and EMBASE databases as of 1 October 2023 were included. All outcomes were analysed using Review Manager 5.4. Results: Four studies with a total of 406 patients were included, and three of the studies were randomized controlled trials. Meta-analysis of the short-form 36 results revealed that wearing a cervical collar after anterior cervical decompression and fusion was more beneficial (P < 0.05). However, it is important to note that when considering the Neck Disability Index at the final follow-up visit, not wearing a cervical collar was found to be more advantageous. There were no statistically significant differences in postoperative cervical range of motion, fusion rate, or neck disability index at 6 weeks postoperatively (all P > 0.05) between the cervical collar group and the no cervical collar group.
Conclusions: This systematic review and meta-analysis revealed no significant differences in the 6-week postoperative cervical range of motion, fusion rate, or neck disability index between the cervical collar group and the no cervical collar group. However, compared to patients who did not wear a cervical collar, patients who did wear a cervical collar had better scores on the short form 36. Interestingly, at the final follow-up visit, the neck disability index scores were better in the no cervical collar group than in the cervical collar group. PROSPERO registration number: CRD42023466583. abstract_id: PUBMED:19077924 Use of cervical collar after single-level anterior cervical fusion with plate: is it necessary? Study Design: Randomized clinical trial. Objective: This study is evaluates whether the use of a cervical collar after single-level anterior cervical fusion with plating increases the fusion rate and improved clinical outcomes. Summary Of Background Data: Plates limit motion between the graft and the vertebra in anterior cervical fusion. Still, the use of cervical collars after instrumented anterior cervical fusion is widely practiced. Methods: Patients enrolled in an FDA-regulated, multicenter trial in 32 centers treated with single-level decompression and arthrodesis using allograft and an anterior cervical plate were included in the analysis. Patients were divided into Braced and Nonbraced groups regardless of type of brace. SF-36, Neck Disability Index (NDI), Numerical Rating Scales (0-100) for neck and arm pain were determined before surgery, 1.5, 3, 6, 12, and 24 months after surgery. Fusion was assessed by independent radiologists at 6, 12, and 24 months after surgery using upright AP, lateral, and flexion-extension views. Fusion success was defined as the presence of bridging trabecular bone, angulation of less than or equal 4 degrees on flexion-extension radiographs; and absence of radiolucencies. Results: Two hundred fifty-seven patients were included in the analysis, 149 were braced and 108 were not. Demographic characteristics and baseline outcome measures of both groups were similar. There was also no statistically significant difference in any of the clinical measures at baseline except for SF-36 Physical Component Summary score. The SF-36 Physical Component Summary, NDI, neck, and arm pain scores were similar in both groups at all time intervals and showed statistically significant improvement when compared with preoperative scores. There was no difference in the proportion of patients working at any time point between the Braced and Nonbraced group. Independent radiologists reported higher rates of fusion in the Nonbraced group over all time intervals, none of which were statistically significant. Conclusion: Our results show that the use of a cervical brace does not improve the fusion rate or the clinical outcomes of patients undergoing single-level anterior cervical fusion with plating. abstract_id: PUBMED:36174948 Soft Cervical Orthosis Use Does Not Improve Fusion Rates After One-Level and Two-Level Anterior Cervical Discectomy and Fusion. Objective: To determine if postoperative soft cervical orthosis use affects arthrodesis rates on a per-level or construct basis after 1-level and 2-level anterior cervical discectomy and fusion (ACDF). Methods: Electronic medical records were queried for 1-level and 2-level primary ACDF between 2016 and 2019 at a single academic center. Surgeons prescribed either a soft cervical orthosis or no orthosis. 
Pseudarthrosis rates were evaluated by dynamic cervical spine radiographs, with arthrodesis defined by <1 mm of interspinous motion. Continuous and categorical data were compared using analysis of variance or χ2 tests. Multivariate logistic regression analysis was used to examine independent predictors of pseudarthrosis. Results: A total of 316 unique patients (504 instrumented levels) met the inclusion criteria. Eighty-four percent of patients were prescribed a soft cervical orthosis. Overall, arthrodesis occurred at 344 (80.9%) and 62 (78%) levels in patients with and without cervical orthosis, respectively. When evaluating patients placed in a cervical orthosis versus those who were not, there were no differences in pseudarthrosis or revision rates. Further, there were no differences in pseudarthrosis on a per-level basis. Further, cervical orthosis use was not an independent predictor of pseudarthrosis (odds ratio, 0.86; 95% confidence interval, 0.47-1.57; P = 0.623) on multivariate analysis. Conclusions: Postoperative placement of soft cervical orthoses after 1-level or 2-level ACDF was not associated with improved arthrodesis or reduced rate of revision surgery. abstract_id: PUBMED:31894403 The role of cervical collar in functional restoration and fusion after anterior cervical discectomy and fusion without plating on single or double levels: a systematic review and meta-analysis. Purpose: Even though anterior cervical discectomy and fusion (ACDF) is one of the most common spinal procedures, a consensus on the real need for prescribing a cervical collar (CC) after surgery is still missing. In fact, the role of external immobilization in decreasing non-fusion rate and implant displacement has not been clarified yet. Methods: This study was conducted according to the PRISMA statement. Six different online medical databases were screened. Papers reporting the neck disability index (NDI), cervical range of motion (RoM) and fusion rate after ACDF without plating, on single or multiple levels, for cervical spondylosis were considered for eligibility. Results: There were no significant differences in terms of NDI scores at 2 weeks (WMD = 4.502; 95% CI -5.953, 14.957; p = 0.399; I2 = 65.14%; p = 0.090) and 1 year (WMD = 2.052; 95% CI -1.386, 5.490; p = 0.242; I2 = 0%; p = 0.793), RoM reduction at 1 year (WMD = 1.597; 95% CI -5.886, 9.079; p = 0.676; I2 = 0%; p = 0.326) or fusion rate (OR = 1.127; 95% CI 0.387, 3.282; p = 0.827; I2 = 2.166%; p = 0.360). Conclusions: The use of a CC after ACDF without plating on single or double levels for cervical spondylosis does not seem to be supported by scientific evidence. abstract_id: PUBMED:31143262 The Utility of Cervical Spine Bracing as a Postoperative Adjunct to Single-level Anterior Cervical Spine Surgery. Background Context: Use of cervical bracing/collar subsequent to anterior cervical spine discectomy and fusion (ACDF) is variable. Outcomes data regarding bracing after ACDF are limited. Purpose: The purpose of the study is to assess the impact of bracing on short-term outcomes related to safety, quality of care, and direct costs in single-level ACDF. Study Design/setting: This retrospective cohort analysis of all consecutive patients (n = 578) undergoing single-level ACDF with or without bracing from 2013 to 2017 was undertaken. Methods: Patient demographics and comorbidities were analyzed.
Tests of independence (Chi-square, Fisher's exact, and Cochran-Mantel-Haenszel test), Mann-Whitney-Wilcoxon tests, and logistic regressions were used to assess differences in length of stay (LOS), discharge disposition (home, assisted rehabilitation facility, or skilled nursing facility), quality-adjusted life year (QALY), surgical site infection (SSI), direct cost, readmission within 30 days, and emergency room (ER) visits within 30 days. Results: Among the study population, 511 were braced and 67 were not braced. There was no difference in graft type (P = 1.00) or comorbidities (P = 0.06-0.73) such as obesity (P = 0.504), smoking (P = 0.103), chronic obstructive pulmonary disease, hypertension (P = 0.543), coronary artery disease (P = 0.442), congestive heart failure (P = 0.207), and problem list number (P = 0.661). LOS was extended for the unbraced group (median 34.00 + 112.15 vs. 77.00 + 209.31 h, P < 0.001). There was no difference in readmission (P = 1.000), ER visits (P = 1.000), SSI (P = 1.000), QALY gain (P = 0.437), and direct costs (P = 0.732). Conclusions: Bracing following single-level cervical fixation does not alter short-term postoperative course or reduce the risk for early adverse outcomes in a significant manner. The absence of bracing is associated with increased LOS, but cost analyses show no difference in direct costs between the two treatment approaches. Further evaluation of long-term outcomes and fusion rates will be necessary before definitive recommendations regarding bracing utility following single-level ACDF. abstract_id: PUBMED:38421334 The Use of Osteobiologics in Single versus Multi-Level Anterior Cervical Discectomy and Fusion: A Systematic Review. Study Design: Systematic literature review. Objectives: In this study we assessed evidence for the use of osteobiologics in single vs multi-level anterior cervical discectomy and fusion (ACDF) in patients with cervical spine degeneration. The primary objective was to compare fusion rates after single and multi-level surgery with different osteobiologics. Secondary objectives were to compare differences in patient reported outcome measures (PROMs) and complications. Methods: After a global team of reviewers was selected, a systematic review using different repositories was performed, conforming to PRISMA and GRADE guidelines. In total 1206 articles were identified and after applying inclusion and exclusion criteria, 11 articles were eligible for analysis. Extracted data included fusion rates, definition of fusion, patient reported outcome measures, types of osteobiologics used, complications, adverse events and revisions. Results: Fusion rates ranged from 87.7% to 100% for bone morphogenetic protein 2 (BMP-2) and 88.6% to 94.7% for demineralized bone matrix, while fusion rates reported for other osteobiologics were lower. All included studies showed PROMs improved significantly for each osteobiologic. However, no differences were reported when comparing osteobiologics, or when comparing single vs multi-level surgery specifically. Conclusion: The highest fusion rates after 2-level ACDF for cervical spine degeneration were reported when BMP-2 was used. However, PROMs did not differ between the different osteobiologics. Further blinded randomized trials should be performed to compare the use of BMP-2 in single vs multi-level ACDF specifically. abstract_id: PUBMED:27555986 Are External Cervical Orthoses Necessary after Anterior Cervical Discectomy and Fusion: A Review of the Literature.
Introduction & Background: The use of external cervical orthosis (ECO) after anterior cervical discectomy and fusion (ACDF) varies from physician to physician due to an absence of clear guidelines. Our purpose is to evaluate and present evidence answering the question, "Does ECO after ACDF improve fusion rates?" through a literature review of current evidence for and against ECO after ACDF. Review: A PubMed database search was conducted using specific ECO and ACDF related keywords. Our search yielded a total of 1,267 abstracts and seven relevant articles. In summary, one study provided low quality of evidence results supporting the conclusion that external bracing is not associated with improved fusion rates after ACDF. The remaining six studies provide very low quality of evidence results; two studies concluded that external bracing after cervical procedures is not associated with improved fusion rates, one study concluded that external bracing after cervical procedures is associated with improved fusion rates, and the remaining three studies lacked sufficient evidence to draw an association between external bracing after ACDF and improved fusion rates. Conclusion: We recommend against the routine use of ECO after ACDF due to a lack of improved fusion rates associated with external bracing after surgery. abstract_id: PUBMED:30588004 Is correction of segmental kyphosis necessary in single-level anterior cervical fusion surgery? An observational study. Background: This study was conducted to determine whether sagittal lordotic alignment and clinical outcomes could be improved by the correction of segmental kyphosis after single-level anterior cervical discectomy and fusion (ACDF) surgery. Patients And Methods: We retrospectively reviewed patients who underwent single-level ACDF surgery in our hospital between January 2014 and February 2017. Basic characteristics of patients included age at surgery, gender, diagnosis, duration of symptoms, and location of target level. Pre- and postoperative radiographs at the 6-month follow-up were used to evaluate the following parameters, such as segmental angle, C2-C7 angle, T1 slope, and C2-C7 sagittal vertical axis (SVA). Postoperative clinical outcomes were assessed by the Neck Disability Index and VAS. According to the segmental angle of postoperative radiographs, patients were divided into noncorrection group and correction group. Results: A total of 181 patients (99 males and 82 females) were analyzed in our study. There were 32 patients in the noncorrection group and 149 patients in the correction group. There was no significant difference in demographic and clinical data between the two groups before surgery. However, patients in the correction group showed larger C2-C7 angle and lower C2-C7 SVA after surgery in comparison with those in the noncorrection group. Besides, changes in the segmental angle were positively correlated with changes in C2-C7 angle and negatively correlated with changes in C2-C7 SVA. Conclusion: Surgical correction of segmental kyphosis in single-level cervical surgery contributed to balanced cervical alignment in comparison with those without satisfactory correction. However, we could not demonstrate that the correction of segmental alignment is associated with a better recovery in clinical outcomes. abstract_id: PUBMED:32523483 Strategies to Achieve Spinal Fusion in Multilevel Anterior Cervical Spine Surgery: An Overview. 
Background: Anterior cervical fusion offers surgeons a safe and reliable surgical option for single-level and multilevel pathology; however, multilevel fusions pose a higher risk of complications than single-level fusions, including possible pseudoarthrosis, adjacent segment disease, sagittal imbalance, and construct subsidence. Various techniques can be used to mitigate risk in multilevel anterior cervical fusion. Questions/purposes: We reviewed the literature to determine the best surgical strategies in multilevel anterior cervical fusion. Methods: We searched the PubMed database for articles published from January 1980 through July 2019. Two authors identified relevant articles and then manually screened them for others to include in this review. Results: We initially identified 1936 articles and included 48 in our review. We found that clinical outcomes of multilevel anterior cervical fusion can be optimized through the use of biologics and graft selection, the evaluation of pre-existing deformity, the assessment of comorbidities, and the selection of fusion levels. Meticulous surgical technique in conjunction with modern surgical tools, such as instrumentation and biologics, allow surgeons to address complex cervical problems while limiting morbidity and enhancing clinical outcomes. Conclusions: Multilevel anterior cervical fusions offer a relatively safe and reliable treatment option for both single-level and multilevel pathology. Answer: The use of a cervical collar after single-level anterior cervical fusion with plating has been a topic of debate among spine surgeons. The evidence from the literature suggests that the routine use of cervical collars in this context may not be necessary. A randomized clinical trial found that the use of a cervical brace did not improve the fusion rate or the clinical outcomes of patients undergoing single-level anterior cervical fusion with plating (PUBMED:19077924). Similarly, a systematic review and meta-analysis concluded that the use of a cervical collar after anterior cervical discectomy and fusion (ACDF) without plating on single or double levels for cervical spondylosis is not supported by scientific evidence, as there were no significant differences in terms of neck disability index (NDI) scores, cervical range of motion (RoM), or fusion rate (PUBMED:31894403). Another study showed that postoperative placement of soft cervical orthoses after 1-level or 2-level ACDF was not associated with improved arthrodesis or reduced rate of revision surgery (PUBMED:36174948). Furthermore, a retrospective cohort analysis indicated that bracing following single-level cervical fixation does not alter short-term postoperative course or reduce the risk for early adverse outcomes in a significant manner, and the absence of bracing is associated with increased length of stay (LOS) but no difference in direct costs (PUBMED:31143262). In summary, the current literature, including systematic reviews, meta-analyses, and clinical trials, does not provide strong evidence to support the routine use of cervical collars after single-level anterior cervical fusion with plating. The decision to use a cervical collar postoperatively should be individualized based on the patient's specific circumstances and the surgeon's clinical judgment.
Instruction: Does the availability of snack foods in supermarkets vary internationally? Abstracts: abstract_id: PUBMED:23672409 Does the availability of snack foods in supermarkets vary internationally? Background: Cross-country differences in dietary behaviours and obesity rates have been previously reported. Consumption of energy-dense snack foods and soft drinks are implicated as contributing to weight gain, however little is known about how the availability of these items within supermarkets varies internationally. This study assessed variations in the display of snack foods and soft drinks within a sample of supermarkets across eight countries. Methods: Within-store audits were used to evaluate and compare the availability of potato chips (crisps), chocolate, confectionery and soft drinks. Displays measured included shelf length and the proportion of checkouts and end-of-aisle displays containing these products. Audits were conducted in a convenience sample of 170 supermarkets across eight developed nations (Australia, Canada, Denmark, Netherlands, New Zealand, Sweden, United Kingdom (UK), and United States of America (US)). Results: The mean total aisle length of snack foods (adjusted for store size) was greatest in supermarkets from the UK (56.4 m) and lowest in New Zealand (21.7 m). When assessed by individual item, the greatest aisle length devoted to chips, chocolate and confectionery was found in UK supermarkets while the greatest aisle length dedicated to soft drinks was in Australian supermarkets. Only stores from the Netherlands (41%) had less than 70% of checkouts featuring displays of snack foods or soft drinks. Conclusion: Whilst between-country variations were observed, overall results indicate high levels of snack food and soft drinks displays within supermarkets across the eight countries. Exposure to snack foods is largely unavoidable within supermarkets, increasing the likelihood of purchases and particularly those made impulsively. abstract_id: PUBMED:22420759 The availability of snack food displays that may trigger impulse purchases in Melbourne supermarkets. Background: Supermarkets play a major role in influencing the food purchasing behaviours of most households. Snack food exposures within these stores may contribute to higher levels of consumption and ultimately to increasing levels of obesity, particularly within socioeconomically disadvantaged neighbourhoods. We aimed to examine the availability of snack food displays at checkouts, end-of-aisle displays and island displays in major supermarket chains in the least and most socioeconomically disadvantaged neighbourhoods of Melbourne. Methods: Within-store audits of 35 Melbourne supermarkets. Supermarkets were sampled from the least and most socioeconomically disadvantaged suburbs within 30 km of the Melbourne CBD. We measured the availability of crisps, chocolate, confectionery, and soft drinks (diet and regular) at the checkouts, in end-of-aisle displays, and in island bin displays. Results: Snack food displays were most prominent at checkouts with only five stores not having snack foods at 100% of their checkouts. Snack foods were also present at a number of end-of-aisle displays (at both the front (median 38%) and back (median 33%) of store), and in island bin displays (median number of island displays: 7; median total circumference of island displays: 19.4 metres). Chocolate items were the most common snack food item on display. 
There was no difference in the availability of these snack food displays by neighbourhood disadvantage. Conclusions: As a result of the high availability of snack food displays, exposure to snack foods is almost unavoidable in Melbourne supermarkets, regardless of levels of neighbourhood socioeconomic disadvantage. Results of this study could promote awareness of the prominence of unhealthy food items in chain-brand supermarket outlets. abstract_id: PUBMED:28441947 Indicators of the relative availability of healthy versus unhealthy foods in supermarkets: a validation study. Background: In-store availability of healthy and unhealthy foods may influence consumer purchases. Methods used to measure food availability, however, vary widely. A simple, valid, and reliable indicator to collect comparable data on in-store food availability is needed. Methods: Cumulative linear shelf length of and variety within 22 healthy and 28 unhealthy food groups, determined based on a comparison of three nutrient profiling systems, were measured in 15 New Zealand supermarkets. Inter-rater reliability was tested in one supermarket by a second researcher. The construct validity of five simple indicators of relative availability of healthy versus unhealthy foods was assessed against this 'gold standard'. Results: Cumulative linear shelf length was a more sensitive and feasible measure of food availability than variety. Four out of five shelf length ratio indicators were significantly associated with the gold standard (ρ = 0.70-0.75). Based on a non-significant difference from the 'gold standard' (d = 0.053 ± 0.040) and feasibility, the ratio of cumulative linear shelf length of fresh and frozen fruits and vegetables versus soft and energy drinks, crisps and snacks, sweet biscuits and confectionery performed best for use in New Zealand supermarkets. Conclusions: Four out of the five shelf length ratio indicators of the relative availability of healthy versus unhealthy foods in-store tested could be used for future research and monitoring, but additional validation studies in other settings and countries are recommended. Consistent use of those shelf length ratio indicators could enhance comparability of supermarket food availability between studies, and help inform policies to create healthy consumer food retail environments. abstract_id: PUBMED:20403377 Disparities in food access: does aggregate availability of key foods from other stores offset the relative lack of supermarkets in African-American neighborhoods? Objective: Recent work demonstrates the importance of in-store contents, yet most food access disparity research has focused on differences in store access, rather than the foods they carry. This study examined in-store shelf space of key foods to test whether other types of stores might offset the relative lack of supermarkets in African-American neighborhoods. Methods: New Orleans census tract data were combined with health department information on food stores open in 2004-2005. Shelf space of fruits, vegetables, and energy-dense snacks was assessed using a measuring wheel and established protocols in a sample of stores. Neighborhood availability of foods was calculated by summing shelf space in all stores within 2km of tract centers. Regression analyses assessed associations between tract racial composition and aggregate food availability. Results: African-American neighborhoods had fewer supermarkets and the aggregate availability of fresh fruits and vegetables was lower than in other neighborhoods.
There were no differences in snack food availability. Conclusions: Other store types did not offset the relative lack of supermarkets in African-American neighborhoods in the provision of fresh produce, though they did for snack foods. Altering the mix of foods offered in such stores might mitigate these inequities. abstract_id: PUBMED:32912374 Comparison of food and beverage products' availability, variety, price and quality in German and US supermarkets. Objective: To assess availability, variety, price and quality of different food products in a convenience sample of supermarkets in Germany and the USA. Design: Cross-sectional study using an adapted version of the Bridging the Gap Food Store Observation Form. Setting: Information on availability, quality, price and variety of selected food products in eight German and seven US supermarkets (discount and full service) was obtained and compared by country. Results: A general tendency for lower prices of fruits and vegetables in Germany was observed, while produce quality and variety did not seem to differ between countries, with the exception of the variety of some vegetables such as tomatoes. Chips and cereals did not differ significantly in variety or price. In both countries, high energy-dense foods were lower in energy costs than lower energy-dense foods. Conclusions: The influence of food prices and availability on consumption should be further explored, including the impact of country differences. abstract_id: PUBMED:31462336 Is neighbourhood social deprivation in a Brazilian city associated with the availability, variety, quality and price of food in supermarkets? Objective: To verify differences in the availability, variety, quality and price of unprocessed and ultra-processed foods in supermarkets and similar establishments in neighbourhoods with different social deprivation levels at Juiz de Fora, Minas Gerais, Brazil. Design: Cross-sectional study. Setting: The Obesogenic Environment Study in São Paulo's Food Store Observation Tool (ESAO-S) was applied in thirty-three supermarket chains, wholesale and retail supermarkets. Results: Fruits, vegetables and ultra-processed foods were available in almost all establishments, without differences according to Health Vulnerability Index (HVI; which varies from 0 to 1 point and the higher the worse; P > 0·05). Most establishments were concentrated in low vulnerability areas and offered healthy foods with greater variety and quality, despite higher prices. The Healthy Food Store Index (HFSI; which varies from 0 to 16 points and the higher the better) was calculated from the ESAO-S and the mean score was 8·91 (sd 1·51). The presence and variety of unprocessed foods count as positive points, as does the absence of ultra-processed products. When HFSI was stratified by HVI, low HVI neighbourhoods presented higher HFSI scores, compared with medium, high and very high HVI neighbourhoods (P = 0·001). Conclusions: Supermarkets and similar establishments are less dense in areas of greater social deprivation and have lower prices of healthy foods, but the variety and quality of those foods are worse, compared with areas of low vulnerability. We found worse HFSI for supermarkets located in areas with greater vulnerability. Those findings can guide specific public policies improving the urban food environment. abstract_id: PUBMED:37608383 Unhealthy food availability, prominence and promotion in a representative sample of supermarkets in Flanders (Belgium): a detailed assessment.
Introduction: The supermarket food environment is a key setting for potential public health interventions. This study assessed food availability, prominence and promotion in a representative sample of supermarkets in Flanders (Belgium). Methods: A sample of 55 supermarkets across five chains and 16 Flemish municipalities was selected in 2022, about 64% in the most deprived socioeconomic areas. Healthiness indicators related to food availability (ratio of cumulative linear shelf length for healthy versus unhealthy foods), prominence (proportion of unhealthy foods at checkouts and end-of-aisle endcaps), and promotion (food marketing on food packages) were measured. Results: Overall, the average ratio of healthy/unhealthy foods in supermarkets in Flanders was 0.36, meaning that for every 10m of shelf length of unhealthy foods there was 3.6m of healthy foods. There was a large variation in ratios across supermarket chains. Of all foods available, 97.5% were ultra-processed at the checkouts, while 72.2% and 58.5% were ultra-processed at the front and back end-of-aisle endcaps, respectively. Confectionery and sweet biscuits were the food categories with on average the highest number of marketing messages on pack per 10m of shelf length. Conclusion: Supermarket in-store food environments in Flanders were found generally unhealthy, with those located in low income areas having unhealthier in-store food environments than supermarkets located in medium and high income areas. Despite commitments of all large supermarket chains in Flanders to promote and create healthier in-store food environments, our findings indicate that currently consumers are incentivized to buy unhealthy rather than healthy food products. abstract_id: PUBMED:35565789 A Cross-Sectional Audit of Sorghum in Selected Cereal Food Products in Australian Supermarkets. Sorghum (Sorghum bicolor (L.) Moench) may play a role in mechanisms that elicit favourable health effects. In Australia, sorghum is successfully grown, but it is not widely consumed, and its presence in common food products is unknown. This study examined the utilisation of sorghum in common food products, specifically breakfast cereals and snack bars, in a cross-sectional study of five supermarkets in New South Wales, over a 7-day period in February 2020. Details relating to ingredients, food format, brand, and product name were recorded. Sorghum was present in 6.1% (23/379) of breakfast cereals in a variety of formats, such as extruded shapes, flour, and puffed grain. In 8.7% of these, sorghum was listed as the first ingredient (greatest contribution by weight). Sorghum was utilised in 2% (6/298) of snack bars mainly as puffed sorghum and was listed in the fourth or subsequent position in the ingredient lists for all. 'Sorghum' did not appear in the name of any products. In conclusion, this baseline study indicates that sorghum is present in a small proportion of breakfast cereals and snack bars, highlighting the opportunity for greater investment in sorghum food innovation and marketing that would encourage consumer recognition and expand the product range. abstract_id: PUBMED:33499044 Urban Retail Food Environments: Relative Availability and Prominence of Exhibition of Healthy vs. Unhealthy Foods at Supermarkets in Buenos Aires, Argentina. There is growing evidence that the food environment can influence diets.
The present study aimed to assess the relative availability and prominence of healthy foods (HF) versus unhealthy products (UP) in supermarkets in Buenos Aires, Argentina and to explore differences by retail characteristics and neighborhood income level. We conducted store audits in 32 randomly selected food retails. Food availability (presence/absence, ratio of cumulative linear shelf length for HF vs. UP) and prominence inside the store (location visibility) were measured based on the International Network for Food and Obesity/NCDs Research, Monitoring and Action Support (INFORMAS) protocol. On average, for every 1 m of shelf length for UP, there was about 25 cm of shelf length for HF (HF/UP ratio: 0.255, SD 0.130). UP were more frequently available in high-prominence store areas (31/32 retails) than HF (9/32 retails). Shelf length ratio differed across commercial chains (p = 0.0268), but not by store size or type. Retails in the lower-income neighborhoods had a lower HF/UP ratio than those in the higher-income neighborhoods (p = 0.0329). Availability of the selected HF was overcome largely by the UP, particularly in high prominence areas, and in neighborhoods with lower income level, which may pose an opportunity for public health interventions. abstract_id: PUBMED:23195394 Adolescent television viewing and unhealthy snack food consumption: the mediating role of home availability of unhealthy snack foods. Objective: To examine whether home availability of energy-dense snack foods mediates the association between television (TV) viewing and energy-dense snack consumption among adolescents. Design: Cross-sectional. Setting: Secondary schools in Victoria, Australia. Subjects: Adolescents (n 2984) from Years 7 and 9 of secondary school completed a web-based survey, between September 2004 and July 2005, assessing their energy-dense snack food consumption, school-day and weekend-day TV viewing and home availability of energy-dense snack foods. Results: School-day and weekend-day TV viewing were positively associated with energy-dense snack consumption among adolescent boys (β = 0·003, P < 0·001) and girls (β = 0·03, P < 0·001). Furthermore, TV viewing (school day and weekend day) were positively associated with home availability of energy-dense snack foods among adolescent boys and girls and home availability of energy-dense snack foods was positively associated with energy-dense snack food consumption among boys (β = 0·26, P < 0·001) and girls (β = 0·28, P < 0·001). Home availability partly mediated the association between TV viewing and energy-dense snack consumption. Conclusions: The results of the present study suggest that TV viewing has a significant role to play in adolescent unhealthy eating behaviours. Future research should assess the efficacy of methods to reduce adolescent energy-dense snack food consumption by targeting parents to reduce home availability of energy-dense foods and by reducing TV viewing behaviours of adolescents. Answer: Yes, the availability of snack foods in supermarkets does vary internationally. A study assessing variations in the display of snack foods and soft drinks within supermarkets across eight developed nations (Australia, Canada, Denmark, Netherlands, New Zealand, Sweden, United Kingdom (UK), and United States of America (US)) found that the mean total aisle length of snack foods, adjusted for store size, was greatest in supermarkets from the UK and lowest in New Zealand.
The greatest aisle length devoted to chips, chocolate, and confectionery was found in UK supermarkets, while the greatest aisle length dedicated to soft drinks was in Australian supermarkets. The study concluded that exposure to snack foods is largely unavoidable within supermarkets across the eight countries, with high levels of snack food and soft drinks displays (PUBMED:23672409). Additionally, research in Melbourne supermarkets showed that snack food displays were most prominent at checkouts, with only five stores not having snack foods at 100% of their checkouts. The availability of these snack food displays did not differ by levels of neighborhood socioeconomic disadvantage, indicating that exposure to snack foods is almost unavoidable in Melbourne supermarkets, regardless of the socioeconomic status of the neighborhood (PUBMED:22420759). Moreover, disparities in food access and the availability of key foods have been studied in different contexts, such as African-American neighborhoods in New Orleans, where supermarkets were fewer and the aggregate availability of fresh fruits and vegetables was lower than in other neighborhoods. However, there were no differences in snack food availability (PUBMED:20403377). In summary, while there are international variations in the availability of snack foods in supermarkets, the overall trend indicates a significant presence of snack foods and soft drinks in supermarkets across different countries, contributing to the potential for impulse purchases and increased consumption of these products.
Instruction: The 72-Hour Medicare Mandate After Total Joint Arthroplasty: Is This Medically Necessary? Abstracts: abstract_id: PUBMED:19418640 Medicare and Medicaid programs; approval of the Joint Commission for continued deeming authority for hospices. Final notice. This final notice announces the approval of a deeming application from the Joint Commission for continued recognition as a national accreditation program for hospices that request participation in the Medicare or Medicaid programs. abstract_id: PUBMED:26606762 Medicare Program; Comprehensive Care for Joint Replacement Payment Model for Acute Care Hospitals Furnishing Lower Extremity Joint Replacement Services. Final rule. This final rule implements a new Medicare Part A and B payment model under section 1115A of the Social Security Act, called the Comprehensive Care for Joint Replacement (CJR) model, in which acute care hospitals in certain selected geographic areas will receive retrospective bundled payments for episodes of care for lower extremity joint replacement (LEJR) or reattachment of a lower extremity. All related care within 90 days of hospital discharge from the joint replacement procedure will be included in the episode of care. We believe this model will further our goals in improving the efficiency and quality of care for Medicare beneficiaries with these common medical procedures. abstract_id: PUBMED:27653006 Association Between Hospital Participation in a Medicare Bundled Payment Initiative and Payments and Quality Outcomes for Lower Extremity Joint Replacement Episodes. Importance: Bundled Payments for Care Improvement (BPCI) is a voluntary initiative of the Centers for Medicare & Medicaid Services to test the effect of holding an entity accountable for all services provided during an episode of care on episode payments and quality of care. Objective: To evaluate whether BPCI was associated with a greater reduction in Medicare payments without loss of quality of care for lower extremity joint (primarily hip and knee) replacement episodes initiated in BPCI-participating hospitals that are accountable for total episode payments (for the hospitalization and Medicare-covered services during the 90 days after discharge). Design, Setting, And Participants: A difference-in-differences approach estimated the differential change in outcomes for Medicare fee-for-service beneficiaries who had a lower extremity joint replacement at a BPCI-participating hospital between the baseline (October 2011 through September 2012) and intervention (October 2013 through June 2015) periods and beneficiaries with the same surgical procedure at matched comparison hospitals. Exposure: Lower extremity joint replacement at a BPCI-participating hospital. Main Outcomes And Measures: Standardized Medicare-allowed payments (Medicare payments), utilization, and quality (unplanned readmissions, emergency department visits, and mortality) during hospitalization and the 90-day postdischarge period. Results: There were 29 441 lower extremity joint replacement episodes in the baseline period and 31 700 in the intervention period (mean [SD] age, 74.1 [8.89] years; 65.2% women) at 176 BPCI-participating hospitals, compared with 29 440 episodes in the baseline period (768 hospitals) and 31 696 episodes in the intervention period (841 hospitals) (mean [SD] age, 74.1 [8.92] years; 64.9% women) at matched comparison hospitals.
The BPCI mean Medicare episode payments were $30 551 (95% CI, $30 201 to $30 901) in the baseline period and declined by $3286 to $27 265 (95% CI, $26 838 to $27 692) in the intervention period. The comparison mean Medicare episode payments were $30 057 (95% CI, $29 765 to $30 350) in the baseline period and declined by $2119 to $27 938 (95% CI, $27 639 to $28 237). The mean Medicare episode payments declined by an estimated $1166 more (95% CI, -$1634 to -$699; P < .001) for BPCI episodes than for comparison episodes, primarily due to reduced use of institutional postacute care. There were no statistical differences in the claims-based quality measures, which included 30-day unplanned readmissions (-0.1%; 95% CI, -0.6% to 0.4%), 90-day unplanned readmissions (-0.4%; 95% CI, -1.1% to 0.3%), 30-day emergency department visits (-0.1%; 95% CI, -0.7% to 0.5%), 90-day emergency department visits (0.2%; 95% CI, -0.6% to 1.0%), 30-day postdischarge mortality (-0.1%; 95% CI, -0.3% to 0.2%), and 90-day postdischarge mortality (-0.0%; 95% CI, -0.3% to 0.3%). Conclusions And Relevance: In the first 21 months of the BPCI initiative, Medicare payments declined more for lower extremity joint replacement episodes provided in BPCI-participating hospitals than for those provided in comparison hospitals, without a significant change in quality outcomes. Further studies are needed to assess longer-term follow-up as well as patterns for other types of clinical care. abstract_id: PUBMED:26455363 Implementing Anticipatory Care Plans in general practice: a practice approach to improving the health literacy of the community and reducing reliance on emergency services during after-hour periods. The objective of this study was to trial a general practice approach to improve the health literacy of patients at risk of utilising medical, emergency or ambulatory services during after-hour periods in Australia. It did so by introducing an anticipatory after-hours care component in all new and revised care plans, known as an Anticipatory Care Plan (AntCaP). The pilot was conducted over a 6-month period in 2013-14. Thirteen general practices were recruited via expressions of interest and were paid a financial grant. Key practice staff were required to attend three workshops conducted by a Medicare Local and to be involved in the evaluation process. A pragmatic qualitative and quantitative evaluation process was conducted during the pilot, and ceased 6 months after the final workshop. The results indicate that the integration of AntCaPs into general practice was generally well received by practice staff and their patients, with early indications that AntCaPs can influence patient behaviour in the after-hours period. abstract_id: PUBMED:24101680 Health care quality improvement publication trends. To analyze the extent of academic interest in quality improvement (QI) initiatives in medical practice, annual publication trends for the most well-known QI methodologies being used in health care settings were analyzed.
A total of 10 key medical- and business-oriented library databases were examined: PubMed, Ovid MEDLINE, EMBASE, CINAHL, PsycINFO, ISI Web of Science, Scopus, the Cochrane Central Register of Controlled Trials, ABI/INFORM, and Business Source Complete. A total of 13 057 articles were identified that discuss at least 1 of 10 well-known QI concepts used in health care contexts, 8645 (66.2%) of which were classified as original research. "Total quality management" was the only methodology to demonstrate a significant decline in publication over time. "Continuous quality improvement" was the most common topic of study across all publication years, whereas articles discussing Lean methodology demonstrated the largest growth in publication volume over the past 2 decades. Health care QI publication volume increased substantially beginning in 1991. abstract_id: PUBMED:33301638 History and Efficacy of the "Three-Hour Rule". N/A abstract_id: PUBMED:24740660 Variation in hospital-level risk-standardized complication rates following elective primary total hip and knee arthroplasty. Background: Little is known about the variation in complication rates among U.S. hospitals that perform elective total hip arthroplasty (THA) and total knee arthroplasty (TKA) procedures. The purpose of this study was to use National Quality Forum (NQF)-endorsed hospital-level risk-standardized complication rates to describe variations in, and disparities related to, hospital quality for elective primary THA and TKA procedures performed in U.S. hospitals. Methods: We conducted a cross-sectional analysis of national Medicare Fee-for-Service data. The study cohort included 878,098 Medicare fee-for-service beneficiaries, sixty-five years or older, who underwent elective THA or TKA from 2008 to 2010 at 3479 hospitals. Both medical and surgical complications were included in the composite measure. Hospital-specific complication rates were calculated from Medicare claims with use of hierarchical logistic regression to account for patient clustering and were risk-adjusted for age, sex, and patient comorbidities. We determined whether hospitals with higher proportions of Medicaid patients and black patients had higher risk-standardized complication rates. Results: The crude rate of measured complications was 3.6%. The most common complications were pneumonia (0.86%), pulmonary embolism (0.75%), and periprosthetic joint infection or wound infection (0.67%). The median risk-standardized complication rate was 3.6% (range, 1.8% to 9.0%). Among hospitals with at least twenty-five THA and TKA patients in the study cohort, 103 (3.6%) were better and seventy-five (2.6%) were worse than expected. Hospitals with the highest proportion of Medicaid patients had slightly higher but similar risk-standardized complication rates (median, 3.6%; range, 2.0% to 7.1%) compared with hospitals in the lowest decile (3.4%; 1.7% to 6.2%). Findings were similar for the analysis involving the proportion of black patients. Conclusions: There was more than a fourfold difference in risk-standardized complication rates across U.S. hospitals in which elective THA and TKA are performed. Although hospitals with higher proportions of Medicaid and black patients had rates similar to those of hospitals with lower proportions, there is a continued need to monitor for disparities in outcomes. These findings suggest there are opportunities for quality improvement among hospitals in which elective THA and TKA procedures are performed. abstract_id: PUBMED:2292999 Medicare. 
The reasonable and necessary exclusion. N/A abstract_id: PUBMED:29232073 Medicare Program; Cancellation of Advancing Care Coordination Through Episode Payment and Cardiac Rehabilitation Incentive Payment Models; Changes to Comprehensive Care for Joint Replacement Payment Model: Extreme and Uncontrollable Circumstances Policy for the Comprehensive Care for Joint Replacement Payment Model. Final rule; interim final rule with comment period. This final rule cancels the Episode Payment Models (EPMs) and Cardiac Rehabilitation (CR) Incentive Payment Model and rescinds the regulations governing these models. It also implements certain revisions to the Comprehensive Care for Joint Replacement (CJR) model, including: Giving certain hospitals selected for participation in the CJR model a one-time option to choose whether to continue their participation in the model; technical refinements and clarifications for certain payment, reconciliation and quality provisions; and a change to increase the pool of eligible clinicians that qualify as affiliated practitioners under the Advanced Alternative Payment Model (Advanced APM) track. An interim final rule with comment period is being issued in conjunction with this final rule in order to address the need for a policy to provide some flexibility in the determination of episode costs for providers located in areas impacted by extreme and uncontrollable circumstances. Answer: The provided abstracts do not directly address the "72 Hour Medicare Mandate After Total Joint Arthroplasty" or discuss its medical necessity. The abstracts focus on various aspects of Medicare programs and payment models, quality improvement initiatives, and outcomes related to joint replacement surgeries, but there is no mention of a specific "72 Hour Medicare Mandate" or its medical necessity in the context of total joint arthroplasty. For instance, the abstracts discuss the Comprehensive Care for Joint Replacement (CJR) model, which is a payment model that includes retrospective bundled payments for episodes of care for lower extremity joint replacement and covers all related care within 90 days of hospital discharge (PUBMED:26606762, PUBMED:30019875, PUBMED:29232073). Another abstract evaluates the association between hospital participation in the Medicare Bundled Payment Initiative and payments and quality outcomes for lower extremity joint replacement episodes (PUBMED:27653006). However, none of these abstracts mention a 72-hour mandate. Additionally, there is an abstract discussing the implementation of Anticipatory Care Plans in general practice to improve health literacy and reduce reliance on emergency services during after-hour periods (PUBMED:26455363), but this does not relate to a postoperative mandate following joint arthroplasty. To answer the question about the medical necessity of a 72-hour Medicare mandate after total joint arthroplasty, one would need to consult Medicare policy documents, clinical guidelines, or research studies specifically evaluating the outcomes and rationale behind such a mandate, none of which are provided in the abstracts here.
Instruction: Should diagnosis and classification be kept separate in psychiatry? Abstracts: abstract_id: PUBMED:25132593 Should diagnosis and classification be kept separate in psychiatry? Background: In medicine it is common practice to diagnose patients before classifying their symptoms. In psychiatry, however, the two procedures cannot be kept separate; they overlap and are interlinked. Aim: To discuss relevant classification systems and the relationship between diagnosis and classification and to find out what kind of relationship is the best one for psychiatry. Method: The literature was searched and a conceptual analysis was performed on the basis of relevant literature, manuals and principles formulated by psychiatrists. Results: It is argued that deliberation, an important part of the diagnostic process, can only play a significant role if diagnosis and symptom classification are kept completely separate. In this process of deliberation there should be a role for clinical phenomena such as improvement of symptoms, worsening of symptoms, objectification and reification, and psychiatrists should have the opportunity to consider whether these aspects really belong to the field of psychiatry. Conclusion: In psychiatry the relationship between diagnosis and symptom classification is not clear-cut. However, since deliberation plays a major role in psychiatric diagnosis, it is important that psychiatrists continue to keep diagnosis separate from symptom classification. Unlike other medical specialists, psychiatrists sometimes classify an illness before making a diagnosis. Existing guidelines and an all-embracing guideline regarding diagnosis need to be harmonised. Confusion and misdiagnosis could be reduced if classifications from two classification systems were to be included in medico-psychiatric diagnosis. abstract_id: PUBMED:21836665 Indian Psychiatry and classification of psychiatric disorders. The contribution of Indian psychiatry to classification of mental disorders has been limited and restricted to acute and transient psychosis and to possession disorders. There is a need for leadership in research in order to match diagnosis and management strategies to the Indian context and culture. abstract_id: PUBMED:9530555 The future of diagnosis in psychiatry Following a brief introduction to the history of psychiatric classification, the article describes the development of the international classification of diseases as a whole, especially ICD-10 as a model of operationalised diagnosis, and reports on the corresponding development of DSM-III to DSM-IV. A future task is the foundation of a new nosology that will include aetiology in the framework of operationalised diagnosis. Atheoretical diagnosis as proposed nowadays according to ICD-10 will remain fictitious. Dimensional diagnoses and multiaxiality will gain in importance, especially in the category of neurotic and personality disorders. Psychiatry in primary health care will be much more important than it is today, as will rehabilitation in psychiatry. The paper concludes with the hope that the next generation will benefit from a single worldwide classification of diagnosis and that the schism between ICD-10 and DSM-IV will be overcome. abstract_id: PUBMED:20664773 Aberrant origin of the conus branch: Diagnosis of split right coronary artery with two separate ostia by conventional angiography. Split right coronary artery is a rare congenital anomaly. Most cases originate from the same orifice in the right sinus of Valsalva. 
The correct diagnosis of split right coronary artery with separate ostia is believed to be extremely rare. The true incidence of this anomaly is unknown. The main problem in diagnosis is that another ostium might be missed on selective coronary angiography. The use of multidetector computed tomography has been emphasized in the diagnosis of the anomaly. Two cases of patients with a split coronary artery arising from two separate ostia are reported; the cases were both detected by conventional coronary angiography. To avoid missing the diagnosis of this rare anomaly by conventional coronary angiography, the possibility should be kept in mind and a Judkins catheter technique may be helpful. abstract_id: PUBMED:21434914 Research review: Child psychiatric diagnosis and classification: concepts, findings, challenges and potential. The conceptual issues are briefly noted with respect to the distinctions between classification and diagnosis; the question of whether mental disorders can be considered to be 'diseases'; and whether descriptive psychiatry is outmoded. The criteria for diagnosis are reviewed, with the conclusion that, at present, there are far too many diagnoses, and a ridiculously high rate of supposed comorbidity. It is concluded that a separate grouping of disorders with an onset specific to childhood should be deleted, the various specific disorders being placed in appropriate places, and the addition for all diagnoses of the ways in which manifestations vary by age. A new group should be formed of disorders that are known to occur but for which further testing for validity is needed. The overall number of diagnoses should be drastically reduced. Categorical and dimensional approaches to diagnosis should be combined. The requirement of impairment should be removed from all diagnoses. Research and clinical classifications should be kept separate. Finally, there is a need to develop a primary care classification for causes of referral to both medical and non-medical primary care. abstract_id: PUBMED:9384865 Diagnosis and classification in psychiatry: Gerald Klerman's contribution. Gerald Klerman (1928-1992) made substantial contributions to diagnosis and classification in psychiatry during a time of great change. He understood and appreciated the importance of descriptive, biological, psychoanalytic, social, interpersonal and behavioral approaches and was uniquely able to integrate them cogently. He demanded that theories and hypotheses be tested empirically, and he spearheaded many key scientific research programs directed toward this goal, including the Clinical Studies of the National Institute of Mental Health Program on the Psychobiology of Depression. This article provides an overview of his contributions. abstract_id: PUBMED:18516306 A context for classification in child psychiatry. Objective: To provide a context for classification in child psychiatry over the last 45 years including debate over different approaches. Method: The context for classification of child psychiatric disorders has changed drastically since the introduction of categorical classification and the multi-axial formulation in the Diagnostic and Statistical Manual (DSM) and the International Classification of Disease (ICD). The authors review some historical factors including the shift in psychiatry to a universal classification system spanning the lifespan.
Results: The adaptation of categorical and universal diagnosis has resulted in a series of child-adult lifespan continuities and discontinuities about how problems are conceptualized within the categorical, multi-axial system. Conclusion: There is a need for a more flexible classification system to incorporate emerging data from longitudinal and gene-environment (GxE) interaction studies within the framework of attachment, developmental and systems theory. abstract_id: PUBMED:11618728 Back to the future: Valentin Magnan, French psychiatry, and the classification of mental diseases, 1885-1925. To this day one of the most curious gaps in the historiography of French psychiatry is the era between the fin-de-siècle and the 1920s, years that overlapped the life and career of Valentin Magnan (1835-1916), a pivotal figure in the historical classification of mental diseases. This paper seeks to address this shortcoming as well as contribute to the growing scholarly interest in the history of clinical psychiatry. It argues that Magnan was in many ways a tragic figure, someone who lived and worked at a time when circumstances conspired against him and his efforts to reform psychiatric classification. Essentially Magnan had the misfortune to practise psychiatry when Emil Kraepelin's influence began to spread beyond Germany's borders, sparking a nationalist reaction that penalized both French Kraepelinians and Magnan whose theories shared similarities with Kraepelin's. But Magnan's stature also suffered because of the intense internecine quarrels that arose in late nineteenth-century French psychiatry. Magnan was no helpless victim, though, and there is reason to believe that some of the criticism directed at him was based on documented personal failings. Ultimately, Magnan's theory of psychiatric classification was overtaken by these and other events in French psychiatry, culminating by the interwar period in the emergence of a new national, nosological paradigm that has dominated French psychiatry for most of the twentieth century. Thus Magnan was in many respects a pariah within French psychiatry by the early twentieth century. An examination of his career casts light on this crucial turning-point in the history of French psychiatry and indicates why and how the new model of classification was more to the tastes of his medical colleagues. abstract_id: PUBMED:26602907 Medicalization in psychiatry: the medical model, descriptive diagnosis, and lost knowledge. Medicalization was the theme of the 29th European Conference on Philosophy of Medicine and Health Care that included a panel session on the DSM and mental health. Philosophical critiques of the medical model in psychiatry suffer from endemic assumptions that fail to acknowledge the real world challenges of psychiatric nosology. The descriptive model of classification of the DSM 3-5 serves a valid purpose in the absence of known etiologies for the majority of psychiatric conditions. However, a consequence of the "atheoretical" approach of the DSM is rampant epistemological confusion, a shortcoming that can be ameliorated by importing perspectives from the work of Jaspers and McHugh. Finally, contemporary psychiatry's over-reliance on neuroscience and pharmacotherapy has led to a reductionist agenda that is antagonistic to the inherently pluralistic nature of psychiatry. As a result, the field has suffered a loss of knowledge that may be difficult to recover.
abstract_id: PUBMED:11215385 On the importance of the "decision-making model" view of diagnosis as a clinical framework in psychiatry. After the advent of DSM-III, operational diagnostic criteria, along with the classification of disorders using such criteria, received considerable attention, and many studies on the reliability and validity of psychiatric diagnosis were conducted worldwide. Operational methodology was applied to diagnosis and classification, especially in the area of research, and has contributed greatly to advances in reliable and refined clinical research. Such methodology, however, has not necessarily been accepted as a guiding principle in the area of clinical practice by all psychiatrists. Rather, some psychiatrists, especially more experienced psychiatrists, took a somewhat negative attitude toward the use of operational methodology. The author contends that one of the causes for the relatively poor acceptance of operational methodology in the area of clinical practice lies in the "classification model" view of diagnosis that forms the implicit background for the methodology. From a clinical perspective, it is not from the "classification model" basis but rather, from the "decision-making model" basis that the actual process of clinical diagnosis in psychiatry is explained properly. This is a very important point, because the latter model is potentially more useful both to psychiatric patients and to researchers in psychiatry than the former model. There have been, however, few reports in psychiatry that highlight the importance of this model as the clinical framework. The author analyzes the limitations of the "classification model" view, and then, based on this analysis, lists prerequisites that a model for the framework of clinical practice should possess. The prerequisites listed are: that clinical information not sufficient to meet the disease criteria should be used as effectively as possible; that diseases low in probability but high in seriousness should be considered by clinicians in the differential diagnoses; that diagnosis should be readily changed when necessary; that the component of benefit to the patient should be included in the model; and that the relationship between nosological definition of diseases and practical diagnostic criteria should be explained explicitly. In addition, the author emphasizes that psychiatric diagnosis involves "decision-making under the condition of incomplete information", because most psychiatric diseases are still of unknown etiology. With all these conditions in mind, the author contends that the Bayesian statistical model of decision-making is suited to the frame of reference, and that it should replace the classification model as a guiding principle. This model can integrate various aspects of psychiatric clinical activities systematically and explicitly. A case is presented to illustrate the model in a clinical context. The author points out problems to be solved in current psychiatry from the perspective of the Bayesian model of decision-making. Finally, limitations of the Bayesian model view are discussed. Answer: The question of whether diagnosis and classification should be kept separate in psychiatry is a subject of debate. According to one perspective, diagnosis and classification cannot be kept separate in psychiatry as they overlap and are interlinked.
It is argued that deliberation, an important part of the diagnostic process, can only play a significant role if diagnosis and symptom classification are kept completely separate. This separation allows psychiatrists to consider clinical phenomena such as the improvement or worsening of symptoms, objectification, and reification, and to deliberate whether these aspects truly belong to the field of psychiatry. The relationship between diagnosis and symptom classification in psychiatry is not clear-cut, but since deliberation is crucial in psychiatric diagnosis, it is important for psychiatrists to continue to keep diagnosis separate from symptom classification. This approach could reduce confusion and misdiagnosis, and it is suggested that classifications from two systems could be included in medico-psychiatric diagnosis to harmonize existing guidelines (PUBMED:25132593). On the other hand, some argue that the current approach to psychiatric diagnosis, which is often descriptive and atheoretical as per the DSM and ICD systems, may lead to a high number of diagnoses and an inflated rate of comorbidity. It is suggested that the overall number of diagnoses should be drastically reduced and that categorical and dimensional approaches to diagnosis should be combined. The requirement of impairment should be removed from all diagnoses, and research and clinical classifications should be kept separate. Additionally, there is a call for the development of a primary care classification for causes of referral to both medical and non-medical primary care (PUBMED:21434914). In summary, while there is a viewpoint that supports the separation of diagnosis and classification in psychiatry to enhance the deliberative process and reduce misdiagnosis, there is also a call for a more integrated approach that combines categorical and dimensional diagnoses and separates research from clinical classifications. The debate reflects the complexity of psychiatric diagnosis and the need for a system that is both clinically useful and scientifically valid.
Instruction: Does physical activity change over 24 months in persons with relapsing-remitting multiple sclerosis? Abstracts: abstract_id: PUBMED:23668848 Does physical activity change over 24 months in persons with relapsing-remitting multiple sclerosis? Objective: There is strong evidence for prevalent physical inactivity among persons with multiple sclerosis (MS). By comparison, very little is known about naturally occurring change in physical activity over time. Such inquiry is important for identifying the rate, patterns, and predictors of change for the design and delivery of behavioral interventions in this population. The present study conducted latent growth modeling (LGM) and latent class growth analysis (LCGA) for understanding the rate, patterns, and predictors of change in physical activity over a 24-month period among persons with MS. Methods: On three occasions each separated by 12 months, persons (n = 269) with relapsing-remitting MS (RRMS) completed a battery of questionnaires that included assessment of physical activity behavior. Data were analyzed using Mplus 3.0. Results: The LGM indicated that a linear model provided a good fit to the data (χ2 = 3.94, p = .05, CFI = .987, SRMR = .025), but the slope (Ms = 0.8) was nonsignificant (p > .05) and indicated no change in physical activity over time. LCGA identified a 2-class solution, and, based on the Lo-Mendell-Rubin likelihood ratio test, this model fit the data better than the 1-class solution. The 2-class solution consisted of low-active (∼80%) and high-active (∼20%) persons, but there was no change in physical activity over time per group. Sex and disability, but not age and disease duration, were predictors of being in the low active class. Conclusions: There was prevalent physical inactivity, but little interindividual and intraindividual change over 24 months in this cohort of persons with RRMS. Such results identify the importance of behavior interventions, perhaps early in the disease process wherein physical inactivity originates. abstract_id: PUBMED:27918703 Patterns and Predictors of Change in Moderate-to-Vigorous Physical Activity Over Time in Multiple Sclerosis. Background: Physical inactivity is common in persons with multiple sclerosis (MS), but there is very little known about the pattern and predictors of changes in physical activity over time. Purpose: This study examined changes in moderate-to-vigorous physical activity (MVPA) over a 30-month time period and the demographic and clinical predictors of such changes in relapsing-remitting MS (RRMS). Methods: 269 persons with MS wore an accelerometer for a 7-day period and completed a demographic/clinical scale every 6 months over a 30-month period. Data were analyzed using latent class growth modeling (LCGM). Results: LCGM identified a two-class model for changes in levels of MVPA over time. Class 1 involved higher initial levels of MVPA and linear decreases in MVPA over time, whereas Class 2 involved lower initial levels of MVPA and linear increases in MVPA over time. LCGM further indicated that males were more likely (OR = 5.8, P < .05) and those with higher disability status were less likely (OR = 0.51, P < .05) to belong to Class 1 than Class 2. Conclusion: Levels of MVPA change over time in persons with RRMS and the pattern of change suggests that behavioral physical activity interventions for persons with MS might target men and those with lower disability.
abstract_id: PUBMED:22989612 Premorbid physical activity predicts disability progression in relapsing-remitting multiple sclerosis. Background: Disability progression is a hallmark feature of multiple sclerosis (MS) that has been predicted by a variety of demographic and clinical variables and treatment with disease modifying therapies. This study examined premorbid physical activity as a predictor of change in disability over 24 months in persons with relapsing-remitting MS (RRMS). Methods: 269 persons with RRMS completed baseline measures of demographic and clinical variables, premorbid and current physical activity, and disability status. The measure of disability was further completed every six months over the subsequent 24-month period. The data were analyzed with unconditional and conditional latent growth curve modeling (LGCM). Results: The unconditional LGCM indicated that there was a significant, linear increase in disability scores over time (p=.0015). The conditional LGCM indicated that premorbid physical activity significantly predicted the linear change in disability scores (standardized β=-.23, p<.005); current physical activity (standardized β=-.02, p=.81), gender (standardized β=-.06, p=.54), age (standardized β=.05, p=.56), duration of MS (standardized β=.11, p=.15), and treatment with disease modifying therapies (standardized β=-.03, p=.77) did not predict change in disability scores. Conclusions: The current research highlights the possible role of premorbid physical activity for lessening disability progression over time in persons with RRMS. Additional research is necessary on physical activity initiated after the diagnosis of RRMS as a lifestyle approach for bolstering physiological reserve and preventing disability progression. abstract_id: PUBMED:20921239 Internet intervention for increasing physical activity in persons with multiple sclerosis. Background: Physical activity has been associated with improvements in walking mobility and quality of life in persons with multiple sclerosis (MS), and yet this population is largely sedentary and inactive compared with the general population. Objectives: We conducted a pilot, randomized controlled trial (RCT) for examining the effect of an Internet intervention based on social cognitive theory (SCT) for favorably increasing physical activity among persons with MS. We further examined variables from SCT as possible mediators of the Internet intervention. Methods: We randomly allocated 54 persons with MS into either an Internet intervention condition or a waitlist control condition. The participants completed measures of physical activity, self-efficacy, outcome expectations, functional limitations, and goal setting before and after the 12-week period. Results: The intervention group reported a statistically significant (p = 0.01) and large increase in physical activity over time (d = 0.72), whereas the control group had a small (d = 0.04) and non-significant change in physical activity (p = 0.71). The intervention group further reported a statistically significant (p = 0.001) and large increase in goal setting over time (d = 0.97), whereas the control group had a small (d = -0.13) and non-significant change (p = 0.17). The change in goal setting over time mediated the effect of the Internet intervention on physical activity behavior.
Conclusions: This pilot study sets the stage for a subsequent RCT that includes a larger sample of persons with MS, longer intervention period along with a follow-up, objective measure of physical activity, and secondary outcomes of walking mobility and QOL. abstract_id: PUBMED:25876450 Fatigue, depression, and physical activity in relapsing-remitting multiple sclerosis: Results from a prospective, 18-month study. Background: Fatigue, depression, and physical inactivity are common in multiple sclerosis (MS), but there is limited information on the bi-directional associations among those variables over a long period of time. Objective: This study examined the hypothesis that fatigue and depression would predict change in physical activity and that physical activity would predict changes in fatigue and depression over an 18-month period of time in persons with MS, even after controlling for disability status, disease duration, sex, and age. Methods: This longitudinal study collected data on fatigue, depression, physical activity, and confounding variables from the same sample of persons with relapsing-remitting MS on two occasions that were separated by 18 months. Results: The cross-lagged path coefficient between baseline fatigue and follow-up physical activity was statistically significant (path coefficient=-.26, p<.0001) as was the cross-lagged path coefficient between baseline physical activity and follow-up fatigue (path coefficient=-.11, p<.05). Those bi-directional associations were independent of depression, disability status, disease duration, sex, and age. There were no statistically significant cross-lagged path coefficients between depression and physical activity. Conclusions: This study identified bi-directional associations between fatigue and physical activity over an 18-month period of time. The nature of such associations opens the door for research on fatigue management as an approach for sustaining or promoting physical activity over time. abstract_id: PUBMED:21895426 Social cognitive variables as correlates of physical activity in persons with multiple sclerosis: findings from a longitudinal, observational study. There is a lack of data regarding the associations among changes in social cognitive variables and physical activity over time in persons with multiple sclerosis (MS). To that end, the current study adopted a panel design and analysis for examining hypothesized relationships among changes in social cognitive variables and physical activity over time in persons with MS, and this is necessary for designing effective behavioral interventions. On two occasions separated by an 18-month period, persons (N = 218) with relapsing-remitting MS (RRMS), who were initially recruited by telephone for a cross-sectional study, completed a battery of questionnaires that assessed social cognitive variables and physical activity. Those study materials were delivered and returned via the United States Postal Service. The 18-month changes in self-efficacy (path coefficient = .25, p < .01) and goal setting (path coefficient = .26, p < .01) had direct effects on residual change in physical activity. The change in self-efficacy further had an indirect effect on residual change in physical activity that was accounted for by change in goal setting (path coefficient = .05, p < .05). This longitudinal study suggests that self-efficacy and goal setting represent plausible targets for changing physical activity behavior in persons with RRMS.
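A brief note on the cross-lagged coefficients reported in the two panel studies above: a cross-lagged path coefficient is the regression weight of one wave-1 variable on the other variable at wave 2, with both directions estimated jointly. A minimal two-wave sketch consistent with the fatigue/physical-activity study is shown below; the symbols a1, b1, a2, b2, e1, and e2 are generic labels, and the authors' actual models also adjusted for depression, disability status, disease duration, sex, and age, which are omitted here.

\[ \mathrm{PA}_{18} = a_1\,\mathrm{PA}_{0} + b_1\,\mathrm{Fatigue}_{0} + e_1, \qquad \mathrm{Fatigue}_{18} = a_2\,\mathrm{Fatigue}_{0} + b_2\,\mathrm{PA}_{0} + e_2 \]

In this reading, the reported coefficients of -.26 and -.11 correspond to b1 (baseline fatigue predicting 18-month physical activity) and b2 (baseline physical activity predicting 18-month fatigue), respectively.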
abstract_id: PUBMED:36090238 Characterizing Relationships Between Cognitive, Mental, and Physical Health and Physical Activity Levels in Persons With Multiple Sclerosis. Background: Although persons with multiple sclerosis (MS) are encouraged to engage in physical activity, they are less active than the general population and experience poorer emotional/cognitive health, underscoring the need for increased understanding of the factors independently associated with exercise in MS. Methods: Six hundred forty people with MS completed a detailed demographic survey, the Godin Leisure-Time Exercise Questionnaire, and Quality of Life in Neurological Disorders short forms. The average number of weekly sessions of exercise was examined as a count, as a binary variable (a weekly minimum of 4 sessions of physical activity), and as an ordinal variable of being active using multivariable zero-inflated negative binomial, logistic, and ordered logistic regression models, respectively. Primary predictors of interest included depression, cognitive function, positive affect, and lower extremity functioning as measured by the Quality of Life in Neurological Disorders short forms. Results: The study sample was 91% White race, 83% female, 65% with a relapsing-remitting MS diagnosis. The mean participant age was 52 years. Across analyses, body mass index and disability were inversely associated with exercising. Greater lower extremity impairment was associated with decreased odds of exercising and being active. A greater burden of depression symptoms was correlated with lower odds of engaging in physical activity. People with MS with higher self-reported cognitive functioning were less likely to engage in any exercise, but it was not associated with frequency of activities. Conclusions: These results demonstrate associations between exercise and cognitive and emotional health in people with MS, underscoring the need to consider these factors when designing MS-targeted physical activity recommendations. abstract_id: PUBMED:37583953 Physical activity is related to disease severity and fatigue, but not to relapse rate in persons with relapsing remitting multiple sclerosis - a self-reported questionnaire based study. Introduction: Based on theoretical models, physical activity has been introduced as a promoting method to mitigate the disease severity, fatigue and relapse rate in multiple sclerosis. The primary objective of the study was to investigate the relation between self-reported physical activity level and disease severity, fatigue and relapse rate in persons with relapsing remitting multiple sclerosis (RRMS). Methods: A survey was offered to persons with RRMS from March 2019 to August 2021 (n = 253). Physical activity level, fatigue and disease severity were determined using the Godin Leisure-Time Questionnaire (GLTEQ), the Patient Determined Disease Steps (PDDS) scale and the Fatigue Scale for Motor and Cognitive Functions (FSMC). Additionally, participants' relapse rate was recorded. Results: Bivariate correlations revealed an inverse relation between physical activity level and PDDS (ρ = -0.279; p < 0.001) as well as between physical activity and FSMC (r = -0.213, p < 0.001), but not between physical activity and relapse rate (r = 0.033, p > 0.05). Multiple linear regression analyses explained 12.6% and 5.2% of the variance of PDDS and FSMC. Conclusion: Our findings confirm a relation between self-reported physical activity, disease severity and fatigue in persons with RRMS.
However, self-reported physical activity level does not seem to affect the annualised relapse rate. abstract_id: PUBMED:20060544 Accelerometry in persons with multiple sclerosis: measurement of physical activity or walking mobility? Objective: Motion sensors such as accelerometers have been recognized as an ideal measure of physical activity in persons with MS. This study examined the hypothesis that accelerometer movement counts represent a measure of both physical activity and walking mobility in individuals with MS. Methods: The sample included 269 individuals with a definite diagnosis of relapsing-remitting MS who completed the Godin Leisure-Time Exercise Questionnaire (GLTEQ), International Physical Activity Questionnaire (IPAQ), Multiple Sclerosis Walking Scale-12 (MSWS-12), Patient Determined Disease Steps (PDDS), and then wore an ActiGraph accelerometer for 7 days. The data were analyzed using bivariate correlation and confirmatory factor analysis. Results: The results indicated that (a) the GLTEQ and IPAQ scores were strongly correlated and loaded significantly on a physical activity latent variable, (b) the MSWS-12 and PDDS scores strongly correlated and loaded significantly on a walking mobility latent variable, and (c) the accelerometer movement counts correlated similarly with the scores from the four self-report questionnaires and cross-loaded on both physical activity and walking mobility latent variables. Conclusion: Our data suggest that accelerometers are measuring both physical activity and walking mobility in persons with MS, whereas self-report instruments are measuring either physical activity or walking mobility in this population. abstract_id: PUBMED:22403041 Physical activity, self-efficacy, and health-related quality of life in persons with multiple sclerosis: analysis of associations between individual-level changes over one year. Background: Physical activity and self-efficacy represent behavioral and psychological factors, respectively, that are compromised in persons with multiple sclerosis (MS), but might be modifiable through intervention and result in better health-related quality of life (HRQOL). Purpose: The present study adopted a panel research design and examined the associations between individual-level changes in physical activity, self-efficacy, and HRQOL over a one-year period in persons with MS. Method: The sample consisted of 269 persons with relapsing-remitting MS who completed the Godin Leisure-Time Questionnaire (GLTEQ), Multiple Sclerosis Self-Efficacy (MSSE) Scale, and Multiple Sclerosis Quality of Life-29 (MSIS-29) Scale on two occasions that were separated by 1 year. The data were analyzed using panel analysis in Mplus 3.0. Results: The initial panel analysis indicated that individual-level change in physical activity was associated with individual-level change in both physical and psychological HRQOL. The subsequent panel analysis indicated that (a) individual-level change in self-efficacy for functioning with MS was associated with individual-level change in physical HRQOL, whereas individual-level change in self-efficacy for control was associated with individual-level change in psychological HRQOL; (b) individual-level change in self-efficacy for functioning with MS, but not self-efficacy for control, mediated the association between individual-level change in physical activity and physical HRQOL; and (c) individual-level change in self-efficacy for controlling MS was the strongest predictor of individual-level change in HRQOL.
Conclusion: Physical activity and self-efficacy both might be important targets of subsequent behavioral and self-management interventions for improving the HRQOL of persons with MS, although self-efficacy is seemingly more important than physical activity. Answer: Physical activity does not appear to significantly change over 24 months in persons with relapsing-remitting multiple sclerosis (RRMS). A study using latent growth modeling (LGM) and latent class growth analysis (LCGA) found that there was no significant change in physical activity over time among a cohort of persons with RRMS. The LGM indicated that the slope was nonsignificant, suggesting no change in physical activity levels over the 24-month period. Additionally, LCGA identified two classes of individuals, low-active and high-active, but again, there was no change in physical activity over time within each group. Sex and disability were predictors of being in the low-active class, but not age and disease duration (PUBMED:23668848). Another study examining changes in moderate-to-vigorous physical activity (MVPA) over a 30-month period in RRMS found that there were two distinct patterns of change. One class showed higher initial levels of MVPA with linear decreases over time, while the other class had lower initial levels with linear increases over time. However, the study suggested that behavioral physical activity interventions might be beneficial, particularly targeting men and those with lower disability (PUBMED:27918703). In summary, while individual patterns may vary, overall, physical activity levels do not show significant change over a 24-month period in the general population of persons with RRMS. This underscores the importance of interventions aimed at increasing or maintaining physical activity levels in this population.
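A brief note on the growth-modeling terms used in this answer: in a linear latent growth model over three occasions spaced 12 months apart, each person's trajectory is summarized by a random intercept and a random slope. A hedged sketch of that specification (the abstract does not give the exact Mplus setup, so this is a generic illustration rather than the authors' parameterization) is:

\[ y_{it} = \eta_{0i} + \eta_{1i}\,t + \varepsilon_{it}, \qquad \eta_{0i} = \mu_0 + \zeta_{0i}, \qquad \eta_{1i} = \mu_1 + \zeta_{1i}, \qquad t = 0, 1, 2 \]

The statement that the slope was nonsignificant means the mean slope \( \mu_1 \) did not differ reliably from zero, i.e., no average change in physical activity across the 24 months, while the quoted fit indices (χ2, CFI, SRMR) describe how well this linear structure reproduces the observed data.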
Instruction: Thyroid cancer in hyperthyroid patients: is it different clinical entity? Abstracts: abstract_id: PUBMED:24824801 Thyroid cancer in hyperthyroid patients: is it different clinical entity? Objective: In this retrospective study, we aimed to analyze the frequency of thyroid cancer in patients who underwent thyroidectomy for hyperthyroidism. Patients And Methods: A total number of 177 patients, who underwent surgery for hyperthyroidism between August 2005 and March 2010, were included in this study. Demographic, clinical, radiologic, and laboratory data were collected retrospectively. Results: Postoperative histopathological examinations revealed thyroid malignancy in 13 (7.3%) patients. Among these 13 patients presenting thyroid malignancy, 53.9% were diagnosed with multinodular toxic goiter (MTG), 38.5% with uninodular toxic goiter (UTG) and 7.6% with Graves' disease. Conclusions: Thyroid carcinoma is common in hyperthyroidism and thyroid fine-needle aspiration biopsy (TFNAB) is a reliable method in the diagnosis of the thyroid malignancy in these patients. We suggest that it is reasonable to evaluate nodules with TFNAB in hyperthyroid patients prior to surgical intervention. abstract_id: PUBMED:27286994 Association between new-onset hypothyroidism and clinical response in patients treated with tyrosine kinase inhibitor therapy in phase I clinical trials. Purpose: Tyrosine kinase inhibitor (TKI)-induced thyroid dysfunction has been identified as an important but manageable adverse effect of targeted therapy. Several studies have suggested that patients who develop hypothyroidism respond better to TKIs, but this relationship is not well elucidated. We evaluated the relationship between new-onset hypothyroidism and clinical response in patients with advanced cancers treated with TKIs at our institution. Methods: We retrospectively reviewed records for patients from four clinical trials that included at least one TKI therapy between January 2006 and December 2011. Patients with preexisting thyroid disease, including thyroid cancer, hypothyroidism, or hyperthyroidism, were excluded. Analysis of 197 patients was performed. Response was determined using RECIST 1.0. Clinical benefit was described as complete response, partial response, or stable disease greater than 4 months. Multivariable logistic regression analysis was performed to correlate patient characteristics with clinical response. Results: The median age for the 197 patients was 58 years (range, 13-85 years), and 56 % were female. Of the 197 patients, 52 (26 %) developed hypothyroidism after therapy. Clinical benefit rates were 50 % in patients with new-onset hypothyroidism versus 34 % in patients without hypothyroidism. In the univariate model, the odds ratio (OR) for new-onset hypothyroidism was 1.9 [95 % confidence interval (CI) (1.0, 3.6) and p = 0.05]. We grouped tumor types into six categories (breast, colorectal carcinoma, melanoma, non-small cell lung cancer, pancreas, and other). When adjusted for tumor type, age (>50 years) and sex, the OR was 2.9 [95 % CI (1.3, 6.5) and p = 0.012] for new-onset hypothyroidism. Conclusion: New-onset hypothyroidism was associated with favorable clinical response in patients who received TKI treatment. abstract_id: PUBMED:17063811 Subclinical thyropathies Subclinical thyreopathies are pathological states of the thyroid gland that show no corresponding clinical symptoms, yet may be detected sporadically by laboratory examination or screening methods.
They represent a novel diagnostic entity (analogous to glucose tolerance impairment--IGT or impaired fasting glycemia--IFG), which appeared due to innovations in laboratory diagnostics (sensitive TSH detection methods) and recent focus on pre-clinical stages of clinically manifest diseases. From a wider point of view, subclinical thyreopathies include subclinical hypothyroidism, subclinical hyperthyroidism, thyroid volume or structure changes found accidentally by sonography, initial stages of malignancy--accidental detection of a microcarcinoma and subclinical forms of thyroiditis. Controversy remains concerning exact definition, epidemiological issues, therapeutic intervention, evaluation of risk and gain implied in treatment of these borderline clinical stages and, last but not least, early screening of risk groups if necessary. abstract_id: PUBMED:6395135 A review of clinical trials of lithium in medicine. Since the approval of lithium use in treatment of acute mania, there have been numerous clinical trials of lithium in medical and psychiatric disorders. This paper gives a brief review of the literature on lithium trials in approximately fourteen medical conditions. These are: hyperthyroidism, metabolizing thyroid cancer, syndrome of inappropriate secretion of antidiuretic hormone, premenstrual tension syndrome, anorexia nervosa, Felty's syndrome, chemotherapy-induced neutropenia, aplastic anemia, seborrheic dermatitis, eczematoid dermatitis, cyclic vomiting, diabetes mellitus and asthma. Most of the case reports cited showed efficacy of lithium salts, or of their side effects, in the management of the symptoms and signs of these disorders; however, well-designed and controlled studies give negative results. The positive results are reported in the group of disorders having an underlying subdromal affective syndrome such as premenstrual tension syndrome and anorexia nervosa. Other encouraging reports include the effect of lithium to induce leucocytosis in Felty's syndrome and chemotherapy-induced neutropenia. abstract_id: PUBMED:34018373 Clinical Analysis of 2 170 Cases of Thyroid-Associated Ophthalmopathy Involving Extraocular Muscles Objective: To explore the clinical features of thyroid-associated ophthalmopathy (TAO) with extraocular muscle involvement. Methods: The data of 2170 TAO patients who were seen at the Orbital Disease Clinic, West China Hospital, Sichuan University from September, 2009 to January, 2020 were collected retrospectively. The extraocular muscle involvement of these patients was confirmed by CT or MRI. Their general condition, medical history, clinical manifestations and imaging features were analyzed retrospectively. Results: Among the 2170 TAO patients, 932 were male and 1238 were female. The mean (± SD) age of all the patients was (46.95±13.06) years, ranging between 6 and 85. 1684 patients (77.60%) suffered from hyperthyroidism, 13 patients (0.59%) had thyroid cancer, 80 patients (3.69%) had hypothyroidism, and 393 patients (18.11%) had normal thyroid function. Proptosis (55.25%) and diplopia (33.09%) were the main reasons for their visits to the clinic, and restricted eye movements (83.46%) was the most common sign. 122 patients with a mean age of (53.24±13.07) years did not show any eyelid sign and had only extraocular muscle involvement.
The 2170 TAO patients had a total of 3799 eyes with extraocular muscle involvement, with 541 patients experiencing monocular involvement and 1629 patients, binocular involvement; 1204 eyes (31.69%) had a single extraocular muscle involved and 2595 eyes (68.31%) had multiple extraocular muscles involved. Inferior rectus was the most commonly involved muscle, followed by superior rectus, medial rectus, and lateral rectus in descending order of involvement frequency. Of the 1014 patients who underwent enhanced MRI, 71.99% were shown to be in the active phase. 69.03% of the 775 patients identified as being in inactive phase according to their clinical activity score (CAS) were shown to be in the active phase according to their MRI results. Conclusion: TAO patients with extraocular muscle involvement have their own specific clinical manifestations. CT and MRI can both be used to assist in the diagnosis of extraocular muscle involvement. MRI can be used to assess the pathological stage of extraocular muscles and is more sensitive than CAS. abstract_id: PUBMED:37908574 An Unexpected Finding of Poorly Differentiated Thyroid Carcinoma in a Toxic Thyroid Nodule. Poorly differentiated thyroid carcinoma (PDTC) is a rare entity of thyroid cancer with an intermediate clinical behavior between differentiated and anaplastic thyroid cancer. Here we present a patient who was referred to the endocrinology clinic for evaluation of hyperthyroidism and multinodular goiter. Due to the presence of right toxic thyroid nodules and compressive symptoms, the patient underwent right lobectomy and isthmectomy, where surgical pathology revealed PDTC in the right thyroid lobe. Based on this unusual case of malignancy within a toxic nodule, we propose further evaluation of hot nodules with concerning features such as growth rate. Furthermore, exploration of relative sodium iodide symporter (NIS) expression in PDTC may help us better understand how iodine uptake changes as PDTC develops, which may impact our approach to assessing and treating PDTC in the future. abstract_id: PUBMED:3392847 Adenomatous goiter with hyperthyroidism. Adenomatous goiter with hyperthyroidism is a rare disease entity in Japan. Over a five-year period, we operated on 20 patients with this disease. Pre-operatively, basal thyrotropin was not necessarily suppressed and the thyrotropin-binding inhibiting immunoglobulin activity, which had been recently measured in five patients, showed normal values. Uneven patches of cold areas were noted on 131I thyroidal scintigrams. Thyroid function tests carried out three years after surgery in one lobectomy case and in eleven subtotal thyroidectomy cases revealed hypothyroidism in seven, hyperthyroidism in two and euthyroidism in only three cases. These results suggest that the pathogenesis and clinical features of adenomatous goiter with hyperthyroidism are quite different from those of Graves' disease, and that routinely performing near-total thyroidectomy may be considered as the treatment of choice. abstract_id: PUBMED:35868808 Marine-Lenhart syndrome with a cold nodule: an uncommon entity. N/A abstract_id: PUBMED:31540622 Bilateral Thyroid Carcinosarcoma in a Cat. A neutered female domestic shorthaired cat was presented for a rapidly growing left cervical mass and a 6-month history of primary hyperthyroidism. Cytological examination of the mass was consistent with a sarcoma. Due to poor clinical response the cat was humanely destroyed and a post-mortem examination was performed.
This revealed a markedly enlarged, irregularly shaped left thyroid gland with signs of infiltration of the trachea. The contralateral (right) thyroid was also moderately enlarged and irregularly shaped. Histopathological examination of the cervical masses indicated bilateral thyroid carcinosarcomas, evidenced by positive immunohistochemical labelling for vimentin, pan-cytokeratin and thyroid transcription factor-1 of the appropriate cell populations. The cat also had a concurrent pulmonary adenocarcinoma (papillary-lepidic type), unrelated to the thyroid neoplasm. Thyroid carcinosarcoma is an uncommonly recorded canine and human neoplasm and this is the first case of this entity to be reported in a cat. abstract_id: PUBMED:32264785 Thyroid Toxicity Following Immune Checkpoint Inhibitor Treatment in Advanced Cancer. Background: Inhibitory antibodies against cytotoxic T lymphocyte antigen-4 (CTLA-4) and programmed cell death-1 (PD-1) have antitumor efficacy and are now standard of care in the management of multiple cancer subtypes. However, the use is complicated by the development of autoimmunity, which can occur in multiple organ systems. Thyroiditis is the most common immune-related adverse event. Summary: Immune checkpoint inhibitor (ICI)-associated thyroiditis affects over 10% of treated patients. PD-1 inhibitors are associated with greater risk of thyroid dysfunction relative to CTLA-4 inhibitors, although the highest risk occurs with combined anti-CTLA-4 and anti-PD-1 treatment. Onset is typically rapid, within weeks to months and both hyperthyroidism and hypothyroidism can occur. The most frequent pattern of thyroid dysfunction is transient hyperthyroidism with evolution to hypothyroidism over four to six weeks. Most cases are asymptomatic and resolve without dedicated treatment. There is no sex or age predominance, and predictive risk factors have not been reliably identified. Thyroid autoantibodies are variably present and are not clearly related to the risk or progression of thyroid dysfunction following treatment with an ICI. Observational data suggest that development of ICI-associated thyroiditis may predict improved survival. Conclusions: ICI-associated thyroiditis is a distinct clinical entity. Mechanisms underlying etiology remain largely unknown. Awareness among health professionals is important to limit morbidity and avoid unnecessary periods of untreated hypothyroidism. Answer: Thyroid cancer in hyperthyroid patients does appear to be a distinct clinical entity. A retrospective study aimed at analyzing the frequency of thyroid cancer in patients who underwent thyroidectomy for hyperthyroidism found that postoperative histopathological examinations revealed thyroid malignancy in 7.3% of patients. Among these patients with thyroid malignancy, the distribution was 53.9% with multinodular toxic goiter (MTG), 38.5% with uninodular toxic goiter (UTG), and 7.6% with Graves' disease. The study concluded that thyroid carcinoma is common in hyperthyroidism and that thyroid fine-needle aspiration biopsy (TFNAB) is a reliable method for diagnosing thyroid malignancy in these patients. It was suggested that it is reasonable to evaluate nodules with TFNAB in hyperthyroid patients prior to surgical intervention (PUBMED:24824801). 
Furthermore, another study reported an unexpected finding of poorly differentiated thyroid carcinoma (PDTC) in a toxic thyroid nodule, which is a rare entity of thyroid cancer with intermediate clinical behavior between differentiated and anaplastic thyroid cancer. This case suggests that further evaluation of hot nodules with concerning features such as growth rate may be warranted, and understanding the changes in iodine uptake as PDTC develops could impact the approach to assessing and treating PDTC in the future (PUBMED:37908574). These findings indicate that thyroid cancer in hyperthyroid patients may present differently and may require specific diagnostic and therapeutic approaches, distinguishing it as a different clinical entity within the spectrum of thyroid diseases.
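A brief note on the 7.3% figure cited above: the abstract reports 13 malignancies among 177 thyroidectomies but no confidence interval. The Python sketch below attaches an approximate 95% Wilson interval to that proportion; the counts come from the abstract, while the interval itself is an illustration rather than a reported result.

import math

def wilson_ci(successes, n, z=1.96):
    # Wilson score interval for a binomial proportion
    p_hat = successes / n
    denom = 1 + z**2 / n
    centre = p_hat + z**2 / (2 * n)
    half_width = z * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return (centre - half_width) / denom, (centre + half_width) / denom

# Counts taken from PUBMED:24824801: 13 malignancies among 177 thyroidectomies
low, high = wilson_ci(13, 177)
print(f"proportion = {13/177:.1%}, approx 95% CI = {low:.1%} to {high:.1%}")
# prints roughly: proportion = 7.3%, approx 95% CI = 4.3% to 12.2%

With a sample of this size the interval is fairly wide, which is worth keeping in mind when comparing malignancy rates across hyperthyroid subgroups.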
Instruction: Does fluorescent urine indicate antifreeze ingestion by children? Abstracts: abstract_id: PUBMED:11134443 Does fluorescent urine indicate antifreeze ingestion by children? Objective: Fluorescent urine has been reported to indicate antifreeze ingestion. Recently, we evaluated a child who was suspected of ethylene glycol ingestion. Although she had fluorescent urine, subsequent studies showed that she had not ingested antifreeze. We tested whether fluorescent urine indicates antifreeze ingestion by children. Methods: A convenience sample of urine specimens from 30 hospitalized children was obtained. All of the patients had been hospitalized for reasons unrelated to poisoning. The specimens were viewed with a Wood's lamp, and the samples were identified as fluorescent or not fluorescent. A second convenience sample of urine specimens from a group of 16 healthy children was obtained, and these specimens were identified as fluorescent or not fluorescent in a similar manner. Results: The majority of urine specimens obtained from children are fluorescent. There is variation in the interpretation of urine fluorescence among observers. The type of container used may influence the finding of fluorescence. Conclusions: Fluorescent urine is not an indicator of ethylene glycol antifreeze ingestion by children. abstract_id: PUBMED:37734990 Combined Ethylene Glycol Poisoning with Methemoglobinemia Due to Antifreeze Ingestion. Background: Antifreeze poisoning is potentially life-threatening and often requires multiple antidotal therapies and hemodialysis. Ethylene or propylene glycol toxicity is commonly caused by antifreeze ingestion. However, ingestion of antifreeze is typically not associated with methemoglobinemia. Currently, only one other case of antifreeze ingestion causing combined ethylene glycol poisoning and methemoglobinemia has been reported. Case Report: A 56-year-old man presented after a witnessed, intentional, large-volume antifreeze ingestion. Evaluation revealed dark brown blood and significantly elevated methemoglobin and ethylene glycol levels. He was successfully treated with methylene blue, fomepizole, and hemodialysis. No other potential cause for methemoglobinemia was elucidated, and further research indicated that minor components of the specific antifreeze product served as an oxidizing agent. WHY SHOULD AN EMERGENCY PHYSICIAN BE AWARE OF THIS?: This case highlights the impact of minor, unreported product components that may significantly contribute to clinical toxicity, as well as the need to remain vigilant when reviewing product information and potential limitations therein. abstract_id: PUBMED:16182989 The usefulness of urine fluorescence for suspected antifreeze ingestion in children. Purpose: To evaluate urine fluorescence as a diagnostic tool. Procedures: Using a Wood lamp, 60 physicians, assigned to group 1 or 2, independently rated 150 urine specimens from nonpoisoned children as fluorescent or nonfluorescent. Interobserver and intraobserver agreements were assessed. Physician ratings were compared with fluorometry results. The prevalence of urine fluorescence was determined by fluorometry. Main Findings: Group 1 reported fluorescence in 80.7% (95% CI 73.4%-86.6%) of urine specimens; group 2 reported fluorescence in 69.3% (95% CI 61.3%-76.5%). 
Interrater agreement was poor (72.5%, kappa = 0.25, 95% CI 0.13-0.37); intrarater agreement was good (physician group 1: 97.9%, kappa = 0.93, 95% CI 0.77-1.00; physician group 2: 93.3%, kappa = 0.85, 95% CI 0.69-1.00). The prevalence of urine fluorescence was 100% (95% CI 98.1%-100%). Conclusion: Our data suggest that determination of urine fluorescence using a Wood lamp is a poor screening tool for suspected antifreeze ingestion in children. abstract_id: PUBMED:11423812 Diagnostic use of physicians' detection of urine fluorescence in a simulated ingestion of sodium fluorescein-containing antifreeze. Study Objective: We sought to assess physicians' ability to accurately determine the presence or absence of sodium fluorescein (SF) in urine at a concentration corresponding to that present after ingestion of a toxic amount of commercial automotive antifreeze. Methods: We studied 2 different urine specimen evaluation formats--one presenting isolated specimens, and the other presenting specimens grouped for comparison--to determine whether the visual clues afforded by grouped comparison aided the accuracy of the evaluation. On each study day, 3 urine specimens (1 control specimen obtained before SF administration and 2 specimens obtained after SF administration) were obtained from each of 9 or 10 volunteers. Each of these 27 or 30 urine specimens was presented sequentially and in random order to 2 emergency physicians during separate evaluation time periods. Each physician was asked to classify each specimen as fluorescent or nonfluorescent (sequential format). After a rest period, each physician, again separately, was asked to look at the same 27 or 30 urine specimens, this time all together in a test tube rack so that grouped comparisons were possible. The physicians again classified each sample as either fluorescent or nonfluorescent (grouped format). We assessed sensitivity, specificity, and accuracy of the evaluation by each presentation format (sequential or grouped). Results: Mean examiner sensitivity, specificity, and accuracy for detecting the presence of SF in urine using the sequential presentation format were 35%, 75%, and 48%, respectively, whereas the same test performance indices were 42%, 66%, and 50%, respectively, when the grouped format was used. Conclusion: Wood's lamp determination of urine fluorescence is of limited diagnostic utility in the detection of SF ingestion in an amount equivalent to toxic ingestion of some ethylene glycol--containing automotive antifreeze products. abstract_id: PUBMED:37465606 Case Report: Antifreeze Ingestion and Urine Fluorescence. This case report presents a patient with ethylene glycol intoxication from antifreeze ingestion. Ethylene glycol is an active ingredient in antifreeze, traditionally causing an anion gap metabolic acidosis with a high osmolality gap. In our emergency department, serum ethylene glycol is a send-out test requiring hours for a result. The patient presented here had an initial acidotic venous blood gas with an elevated serum osmolality and osmolality gap, and a normal anion gap. Quick bedside analysis of the patient's urine using ultraviolet fluorescence gave supportive evidence for the ingested substance while the serum ethylene glycol level was still pending. The patient was promptly treated with fomepizole, pyridoxine, thiamine, and sodium bicarbonate. Several hours after admission, the ethylene glycol level had resulted as 636 mg/dL.
Due to the quick initiation of treatment, the patient had no complications or signs of end-organ damage during admission. Topics: Ethylene glycol, fomepizole, toxicology, ultraviolet fluorescence. abstract_id: PUBMED:34538505 Magnet ingestion knows no borders: A threat for Latin American children. Introduction And Aims: The ingestion of foreign bodies, such as magnets, is a potentially lethal accident that affects children and is associated with bleeding and gastrointestinal perforation, as well as death. There are no Latin American reports in the literature on cases of magnet ingestion in children. Our aim was to establish whether said ingestion has been seen by pediatric endoscopists and gastroenterologists in Latin America, to determine the scope of that potential threat in their patient populations. Materials And Methods: We collected data regarding endoscopies performed on children in Latin America, within the time frame of 2017-2019, through questionnaires that were distributed to pediatric endoscopists at the 2nd World Congress of Gastrointestinal Endoscopy (ENDO 2020). The questionnaires provided information on foreign body location, the presence and number of ingested magnets, and the description of complications and surgical interventions. Results: Our cohort from 12 Latin American countries reported 2,363 endoscopies due to foreign body ingestion, 25 (1.05%) of which were the result of having swallowed one or more magnets. Mean patient age was 5.14 years (SD 2.5) and 10 (40%) of the cases were girls. Three (12%) of the patients presented with severe complications and 2 (8%) cases required surgery. Conclusions: Our preliminary study suggests that the ingestion of magnets is not common in Latin American countries, but said cases are frequently associated with complications. Constant monitoring of the incidence of such cases is extremely important, so that through education and awareness of those events, life-threatening complications in children can be prevented. abstract_id: PUBMED:9354167 Chemical adjuvant cryosurgery with antifreeze proteins. Background And Objectives: Imaging monitored cryosurgery is emerging as an important minimally invasive surgical technique for treatment of cancer. Although imaging allows excellent control over the process of freezing itself, recent studies show that at high subzero temperatures cells survive freezing. Antifreeze proteins (AFP) are chemical compounds that modify ice crystals to needle-like shapes that can destroy cells in cellular suspensions. The goal of this study was to determine whether these antifreeze proteins can also destroy cells in frozen tissue and serve as chemical adjuvants to cryosurgery. Methods: Livers from six rats were excised, perfused with solutions of either phosphate-buffered saline (PBS) or PBS with 10 mg/ml AFP-I, and frozen with a special cryosurgery apparatus. Lobes were frozen with one or two freeze-thaw cycles and the cell viability was examined with a two-stain fluorescent dye test and histological assessment. Results: A significant percentage of hepatocytes survive freezing on the margin of a frozen cryolesion. AFP significantly increase cellular destruction in that region apparently through formation of intracellular ice. Conclusions: This preliminary study demonstrates that antifreeze proteins may be effective chemical adjuvants to cryosurgery. abstract_id: PUBMED:20853841 Compound ice-binding site of an antifreeze protein revealed by mutagenesis and fluorescent tagging.
By binding to the surface of ice crystals, type III antifreeze protein (AFP) can depress the freezing point of fish blood to below that of freezing seawater. This 7-kDa globular protein is encoded by a multigene family that produces two major isoforms, SP and QAE, which are 55% identical. Disruptive mutations on the ice-binding site of type III AFP lower antifreeze activity but can also change ice crystal morphology. By attaching green fluorescent protein to different mutants and isoforms and by examining the binding of these fusion proteins to single-crystal ice hemispheres, we show that type III AFP has a compound ice-binding site. There are two adjacent, flat, ice-binding surfaces at 150° to each other. One binds the primary prism plane of ice; the other, a pyramidal plane. Steric mutations on the latter surface cause elongation of the ice crystal as primary prism plane binding becomes dominant. SP isoforms naturally have a greatly reduced ability to bind the prism planes of ice. Mutations that make the SP isoforms more QAE-like slow down the rate of ice growth. On the basis of these observations we postulate that other types of AFP also have compound ice-binding sites that enable them to bind to multiple planes of ice. abstract_id: PUBMED:12829706 Ice nucleation inhibition: mechanism of antifreeze by antifreeze protein. The effect of antifreeze protein type III (one type of fish antifreeze protein) on ice crystallization was examined quantitatively based on a "micro-sized ice nucleation" technique. It was found for the first time that antifreeze proteins can inhibit the ice nucleation process by adsorbing onto both the surfaces of ice nuclei and dust particles. This leads to an increase of the ice nucleation barrier and the desolvation kink kinetics barrier, respectively. Based on the latest nucleation model, the increases in the ice nucleation barrier and the kink kinetics barrier were measured. This enables us to quantitatively examine the antifreeze mechanism of antifreeze proteins for the first time. abstract_id: PUBMED:24829643 Clinical evaluation of disc battery ingestion in children. BACKGROUND The purpose of this study was to evaluate the characteristics, management, and outcomes of disc battery ingestion in children. METHODS We reviewed the medical records of children admitted to Mofid Children's Hospital due to disc battery ingestion from January 2006 to January 2010. Clear history, clinical symptoms and results of imaging studies revealed diagnosis of disc battery ingestion in suspected patients. The clinical data reviewed included age, gender, clinical manifestation, radiologic findings, location of disc battery, duration of ingestion, endoscopic results and surgical treatment. RESULTS We found 22 cases (11 males and 11 females) of disc battery ingestion with a mean age of 4.3 years (range: 9 months to 12 years). Common symptoms were vomiting, cough, dysphagia, and dyspnea. The mean duration of ingestion was 2.7 days (4 hours to 1.5 months). A total of 19 patients had histories of disc battery ingestion, but three cases referred with the above symptoms, and the batteries were accidentally found by x-ray. Only three cases had batteries impacted in the esophagus. Twelve batteries were removed endoscopically, 6 batteries spontaneously passed through the gastrointestinal (GI) tract within 5 to 7 days, and 4 patients underwent surgery due to complications: 3 due to tracheo-esophageal fistula (TEF) and 1 due to intestinal perforation. There was no mortality in our study. 
CONCLUSION Most cases of disc battery ingestion run uneventful courses, but some may be complicated. If the battery lodges in the esophagus, emergency endoscopic management is necessary. However, once in the stomach, it will usually pass through the GI tract. Answer: No, fluorescent urine does not reliably indicate antifreeze ingestion by children. A study that tested urine specimens from hospitalized children for fluorescence under a Wood's lamp found that the majority of urine specimens from children are fluorescent, and there is variation in the interpretation of urine fluorescence among observers. The type of container used may also influence the finding of fluorescence. The study concluded that fluorescent urine is not an indicator of ethylene glycol antifreeze ingestion by children (PUBMED:11134443). Another study evaluating urine fluorescence as a diagnostic tool using a Wood lamp found that the prevalence of urine fluorescence was 100%, suggesting that determination of urine fluorescence using a Wood lamp is a poor screening tool for suspected antifreeze ingestion in children (PUBMED:16182989). Additionally, a study assessing physicians' ability to detect sodium fluorescein in urine, which is present after ingestion of a toxic amount of commercial automotive antifreeze, found that Wood's lamp determination of urine fluorescence is of limited diagnostic utility (PUBMED:11423812).
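A brief note on the agreement statistics behind this answer: Cohen's kappa discounts observed agreement by the agreement expected purely by chance, kappa = (p_o - p_e) / (1 - p_e). The Python sketch below treats the two physician groups' overall fluorescence-call rates (80.7% and 69.3%) as the marginal rates of two raters; that pairing is a simplifying assumption, so the result only approximates the published kappa of 0.25.

# Cohen's kappa = (p_o - p_e) / (1 - p_e)
# p_o: observed agreement; p_e: chance agreement implied by the raters' marginal rates
p_o = 0.725                      # interrater agreement reported in the abstract
rate_1, rate_2 = 0.807, 0.693    # fluorescence-call rates of the two physician groups
p_e = rate_1 * rate_2 + (1 - rate_1) * (1 - rate_2)
kappa = (p_o - p_e) / (1 - p_e)
print(f"chance agreement = {p_e:.3f}, kappa = {kappa:.2f}")
# gives kappa of roughly 0.28; the gap from the reported 0.25 reflects rounding and
# the exact specimen-by-specimen pairing used by the authors

The point of the calculation is that raw agreement of 72.5% looks respectable, but because most specimens fluoresce anyway, much of that agreement is expected by chance, which is why the kappa, and hence the usefulness of the Wood lamp screen, is low.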
Instruction: Worse outcomes among uninsured general surgery patients: does the need for an emergency operation explain these disparities? Abstracts: abstract_id: PUBMED:24953267 Worse outcomes among uninsured general surgery patients: does the need for an emergency operation explain these disparities? Background: We hypothesize that lack of access to care results in propensity toward emergent operative management and may be an important factor in worse outcomes for the uninsured population. The objective of this study is to investigate a possible link to worse outcomes in patients without insurance who undergo an emergent operation. Methods: A retrospective cross-sectional analysis was performed using the Nationwide Inpatient Sample (NIS) 2005-2011 dataset. Patients who underwent biliary, hernia, and colorectal operations were evaluated. Multivariate analyses were performed to assess the associations between insurance status, urgency of operation, and outcome. Covariates of age, sex, race, and comorbidities were controlled. Results: The uninsured group had the greatest odds ratios of undergoing emergent operative management in biliary (OR 2.43), colorectal (3.54), and hernia (3.95) operations, P < .001. Emergent operation was most likely in the 25- to 34-year age bracket, black and Hispanic patients, men, and patients with at least one comorbidity. Postoperative complications in emergencies, however, were appreciated most frequently in the populations with government coverage. Conclusion: Although the uninsured more frequently underwent emergent operations, patients with coverage through the government had more complications in most categories investigated. Young patients also carried significant risk of emergent operations with increased complication rates. Patients with government insurance tended toward worse outcomes, suggesting disparity for programs such as Medicaid. Disparity related to payor status implies need for policy revisions for equivalent health care access. abstract_id: PUBMED:25617241 Regarding "worse outcomes among uninsured general surgery patients: does the need for an emergency operation explain these disparities?". N/A abstract_id: PUBMED:37922643 A National Analysis of Racial and Sex Disparities Among Interhospital Transfers for Emergency General Surgery Patients and Associated Outcomes. Introduction: Studies focusing on Emergency General Surgery (EGS) and Interhospital Transfer (IHT) and the association of race and sex with morbidity and mortality are yet to be conducted. We aim to investigate the association of race and sex with outcomes among IHT patients who underwent emergency general surgery. Methods: A retrospective review was conducted of adult patients who were transferred prior to EGS procedures, using the National Surgery Quality Improvement Project from 2014 to 2020. Multivariable logistic regression models were used to compare outcomes (readmission, major and minor postoperative complications, and reoperation) between interhospital transfer and direct admit patients and to investigate the association of race and sex with adverse outcomes for all EGS procedures. A secondary analysis was performed for each individual EGS procedure.
Results: Compared with patients admitted directly from home, IHT patients (n = 28,517) had higher odds of readmission [odds ratio (OR): 1.004, 95% confidence interval (CI) (1.002-1.006), P < 0.001], major complication [adjusted OR: 1.119, 95% CI (1.117-1.121), P < 0.001], minor complication [OR: 1.078, 95% CI (1.075-1.080), P < 0.001], and reoperation [OR: 1.014, 95% CI (1.013-1.015), P < 0.001]. In all EGS procedures, Black patients had greater odds of minor complication [OR 1.041, 95% CI (1.023-1.060), P < 0.001], Native Hawaiian and Pacific Islander patients had greater odds of readmission [OR 1.081, 95% CI (1.008-1.160), P = 0.030], while Asian and Hispanic patients had lower odds of adverse outcome, and female patients had greater odds of minor complication [OR 1.017, 95% CI (1.008-1.027), P < 0.001]. Conclusions: Procedure-specific racial and sex-related disparities exist in emergency general surgery patients who underwent interhospital transfer. Specific interventions should be implemented to address these disparities to improve the safety of emergency procedures. abstract_id: PUBMED:26958790 Racial disparities in emergency general surgery: Do differences in outcomes persist among universally insured military patients? Background: Racial disparities in surgical care are well described. As many minority patients are also uninsured, increasing access to care is thought to be a viable solution to mitigate inequities. The objectives of this study were to determine whether racial disparities in 30-/90-/180-day outcomes exist within a universally insured population of military-/civilian-dependent emergency general surgery (EGS) patients and ascertain whether differences in outcomes differentially persist in care received at military versus civilian hospitals and among sponsors who are enlisted service members versus officers. It also considered longer-term outcomes of EGS care. Methods: Five years (2006-2010) of TRICARE data, which provides insurance to active/reserve/retired members of the US Armed Services and dependents, were queried for adults (≥18 years) with primary EGS conditions, defined by the AAST. Risk-adjusted survival analyses assessed race-associated differences in mortality, major acute care surgery-related morbidity, and readmission at 30/90/180 days. Models accounted for clustering within hospitals and possible biases associated with missing race using reweighted estimating equations. Subanalyses considered restricted effects among operative interventions, EGS diagnostic categories, and effect modification related to rank and military- versus civilian-hospital care. Results: A total of 101,011 patients were included: 73.5% white, 14.5% black, 4.4% Asian, and 7.7% other. Risk-adjusted survival analyses reported a lack of worse mortality and readmission outcomes among minority patients at 30, 90, and 180 days. Major morbidity was higher among black versus white patients (hazard ratio [95% confidence interval]: 30 days, 1.23 [1.13-1.35]; 90 days, 1.18 [1.09-1.28]; and 180 days, 1.15 [1.07-1.24]), a finding seemingly driven by appendiceal disorders (hazard ratio, 1.69-1.70). No other diagnostic categories were significant. Variations in military- versus civilian-managed care and in outcomes for families of enlisted service members versus officers altered associations, to some extent, between outcomes and race.
Conclusions: While the military system is an imperfect proxy for interventions directly applicable to the broader United States, the contrast between military observations and reported racial disparities among civilian EGS patients merits consideration. Apparent mitigation of disparities among military-/civilian-dependent patients provides an example for which we as a nation and collective of providers all need to strive. The data will help to inform policy within the Department of Defense and development of disparities interventions nationwide, attesting to important differences potentially related to insurance, access to care, and military culture and values. Level Of Evidence: Prognostic and epidemiologic study, level III. abstract_id: PUBMED:32378096 Pancreatic Cancer Surgery Following Emergency Department Admission: Understanding Poor Outcomes and Disparities in Care. Background: The impact of emergency department admission prior to pancreatic resection on perioperative outcomes is not well described. We compared patients who underwent pancreatic cancer surgery following admission through the emergency department (ED-surgery) with patients receiving elective pancreatic cancer surgery (elective) and compared outcomes. Study Design: The Nationwide Inpatient Sample database was used to identify patients undergoing pancreatectomy for cancer over 5 years (2008-2012). Demographics and hospital characteristics were assessed, along with perioperative outcomes and disposition status. Results: A total of 8158 patients were identified, of which 516 (6.3%) underwent surgery after admission through the ED. ED-surgery patients were more often socioeconomically disadvantaged (non-White 39% vs. 18%, Medicaid or uninsured 24% vs. 7%, from lowest income area 33% vs. 21%; all p < .0001), had higher comorbidity (Elixhauser score > 6: 44% vs. 26%, p < .0001), and often had pancreatectomy performed at sites with lower annual case volume (< 7 resections/year: 53% vs. 24%, p < .0001). ED-surgery patients were less likely to be discharged home after surgery (70% vs. 82%, p < .0001) and had higher mortality (7.4% vs. 3.5%, p < .0001). On multivariate analysis, ED-surgery was independently associated with a lower likelihood of being discharged home (aOR 0.55 (95%CI 0.43-0.70)). Conclusion: Patients undergoing pancreatectomy following ED admission experience worse outcomes compared with those who undergo surgery after elective admission. The excess of socioeconomically disadvantaged patients in this group suggests factors other than clinical considerations alone drive this decision. This study demonstrates the need to consider presenting patient circumstances and preoperative oncologic coordination to reduce disparities and improve outcomes for pancreatic cancer surgery. abstract_id: PUBMED:31607385 Who's being left behind? Uninsured emergency general surgery admissions after the ACA. Background: The Affordable Care Act (ACA) increased Medicaid coverage of Emergency General Surgery (EGS). We hypothesized that despite the ACA, racial and geographic disparities persisted for EGS admissions. Methods: The Nationwide Inpatient Sample was queried from 2012 through Q3 of 2015 for Non-Medicare patient EGS admissions. Difference-in-Differences analyses (DID) compared payors, complications, mortality and costs in pre-ACA years (2012-2013) and post-ACA years (2014-2015Q3). Results: EGS cases fell 9.1% from 1,711,940 to 1,555,033 NIS-weighted cases.
Hispanics were still most likely to be uninsured but had improved coverage (OR 0.92, 95% CI: 0.88-0.96, p < 0.001). Risk of uninsured EGS admissions from the South region persisted (OR 1.52, 95% CI: 1.46-1.58, p < 0.001). Uninsured EGS patients showed a greater DID increase in mortality than insured patients (0.31% higher, P = 0.003). DID costs in the insured group increased more rapidly than in self-pay patients (6.0% higher, P = 0.008). CONCLUSIONS: Post-ACA, risk of uninsured EGS admissions remained highest in the South, in males, and Hispanics. abstract_id: PUBMED:25880490 Differences in emergency colorectal surgery in Medicaid and uninsured patients by hospital safety net status. Objectives: We examined whether safety net hospitals reduce the likelihood of emergency colorectal cancer (CRC) surgery in uninsured and Medicaid-insured patients. If these patients have better access to care through safety net providers, they should be less likely to undergo emergency resection relative to similar patients at non-safety net hospitals. Study Design: Using population-based data, we estimated the relationship between safety net hospitals, patient insurance status, and emergency CRC surgery. We extracted inpatient admission data from the Virginia Health Information discharge database and matched them to the Virginia Cancer Registry for patients aged 21 to 64 years who underwent a CRC resection between January 1, 1999, and December 31, 2005 (n = 5488). Methods: We differentiated between medically defined emergencies and those that originated in the emergency department (ED). For each definition of emergency surgery, we estimated linear probability models of the effects of being treated at a safety net hospital on the probability of having an emergency resection. Results: Safety net hospitals reduce emergency surgeries among uninsured and Medicaid CRC patients. When defining an emergency resection as those that involved an ED visit, these patients were 15 to 20 percentage points less likely to have an emergency resection when treated in a safety net hospital. Conclusions: Our results suggest that these hospitals provide a benefit, most likely through the access they afford to timely and appropriate care, to uninsured and Medicaid-insured patients relative to hospitals without a safety net mission. abstract_id: PUBMED:30278970 Failure to rescue and disparities in emergency general surgery. Background: Racial and socioeconomic disparities are well documented in emergency general surgery (EGS) and have been highlighted as a national priority for surgical research. The aim of this study was to identify whether disparities in the EGS setting are more likely to be caused by major adverse events (MAEs) (e.g., venous thromboembolism) or failure to respond appropriately to such events. Methods: A retrospective cohort study was undertaken using administrative data. EGS cases were defined using International Classification of Diseases, Ninth Revision, Clinical Modification diagnostic codes recommended by the American Association for the Surgery of Trauma. The data source was the National Inpatient Sample 2012-2013, which captured a 20%-stratified sample of discharges from all hospitals participating in the Healthcare Cost and Utilization Project. The outcomes were MAEs, in-hospital mortality, and failure to rescue (FTR). Results: There were 1,345,199 individual patient records available within the National Inpatient Sample.
There were 201,574 admissions (15.0%) complicated by an MAE, and 12,006 of these (6.0%) resulted in death. The FTR rate was therefore 6.0%. Uninsured patients had significantly higher odds of MAEs (adjusted odds ratio, 1.16; 95% confidence interval, 1.13-1.19), mortality (1.28, 1.16-1.41), and FTR (1.20, 1.06-1.36) than those with private insurance. Although black patients had significantly higher odds of MAEs (adjusted odds ratio, 1.14; 95% confidence interval, 1.13-1.16), they had lower mortality (0.95, 0.90-0.99) and FTR (0.86, 0.80-0.91) than white patients. Conclusions: Uninsured EGS patients are at increased risk of MAEs but also the failure of health care providers to respond effectively when such events occur. This suggests that MAEs and FTR are both potential targets for mitigating socioeconomic disparities in the setting of EGS. abstract_id: PUBMED:33939501 Cancer Outcomes Among Medicare Beneficiaries And Their Younger Uninsured Counterparts. Proposals for expanding Medicare insurance coverage to uninsured Americans approaching the Medicare eligibility age of sixty-five have been the subject of intense debate. We undertook this study to assess cancer survival differences between uninsured patients younger than age sixty-five and older Medicare beneficiaries by using data from the National Cancer Database from the period 2004-16. The main outcomes were survival at one, two, and five years for sixteen cancer types in 1,206,821 patients. We found that uninsured patients ages 60-64 were nearly twice as likely to present with late-stage disease and were significantly less likely to receive surgery, chemotherapy, or radiotherapy than Medicare beneficiaries ages 66-69, despite lower comorbidity among younger patients. Compared with older Medicare patients, younger uninsured patients had strikingly lower five-year survival across cancer types. For instance, five-year survival in younger uninsured patients with late-stage breast or prostate cancer was 5-17 percent lower than that among older Medicare patients. We conclude that survival after a diagnosis of cancer is considerably lower in younger uninsured patients than in older Medicare patients. Expanding comprehensive health insurance coverage to people approaching Medicare age eligibility may improve cancer outcomes in the US. abstract_id: PUBMED:30463721 The Early Impact of Medicaid Expansion on Uninsured Patients Undergoing Emergency General Surgery. Background: Under the Affordable Care Act, eligibility for Medicaid coverage was expanded to all adults with incomes up to 138% of the federal poverty level in states that participated. We sought to examine the national impact Medicaid expansion has had on insurance coverage for patients undergoing emergency general surgery (EGS) and the cost burden to patients. Materials And Methods: The National Inpatient Sample (NIS) was used to identify adults ≥18 y old who underwent the 10 most burdensome EGS operations (defined as a combination of frequency, cost, and morbidity). Distribution of insurance type before and after Medicaid expansion and charges to uninsured patients was evaluated. Weighted averages were used to produce nationally representative estimates. Results: A total of 6,847,169 patients were included. The percentage of uninsured EGS patients changed from 9.4% the year before Medicaid expansion to 7.0% after (P < 0.01), whereas the percentage of patients on Medicaid increased from 16.4% to 19.4% (P < 0.01).
The cumulative charges to uninsured patients for EGS decreased from $1590 million before expansion to $1211 million after. Conclusions: In the first year of Medicaid expansion, the number of uninsured EGS patients dropped by 2.4%. The cost burden to uninsured EGS patients decreased by over $300 million. Answer: The need for an emergency operation is indeed an important factor in the worse outcomes observed among uninsured general surgery patients. Uninsured patients have been found to have greater odds of undergoing emergent operative management in biliary, colorectal, and hernia operations (PUBMED:24953267). This propensity towards emergent operations is most likely in younger patients, black and Hispanic patients, men, and patients with at least one comorbidity. However, it is noteworthy that while the uninsured more frequently underwent emergent operations, patients with government coverage, such as those on Medicaid, experienced more postoperative complications in most categories investigated (PUBMED:24953267). Additionally, disparities in emergency general surgery (EGS) outcomes are not solely explained by the need for emergency operations. Racial and sex disparities have been observed among interhospital transfer patients who underwent EGS, with Black patients having greater odds of minor complications and Native Hawaiian and Pacific Islander patients having greater odds of readmission (PUBMED:37922643). Moreover, in the military population, where there is universal insurance coverage, major morbidity was higher among black patients, particularly for appendiceal disorders, suggesting that disparities persist even within universally insured populations (PUBMED:26958790). Furthermore, uninsured patients undergoing pancreatectomy following emergency department admission experience worse outcomes compared to those who undergo surgery after elective admission, indicating that factors beyond clinical considerations may drive these disparities (PUBMED:32378096). Despite the Affordable Care Act's expansion of Medicaid coverage, racial and geographic disparities persisted for EGS admissions, with Hispanics and patients from the South region remaining most likely to be uninsured (PUBMED:31607385). Safety net hospitals have been shown to reduce the likelihood of emergency surgeries among uninsured and Medicaid CRC patients, suggesting that access to timely and appropriate care at these hospitals benefits these patient populations (PUBMED:25880490). However, uninsured EGS patients are at increased risk of major adverse events and failure of healthcare providers to respond effectively when such events occur, indicating that both medical events and the healthcare system's response are potential targets for mitigating disparities (PUBMED:30278970). In summary, while the need for emergency operations is a significant factor, it does not fully explain the disparities in outcomes among uninsured general surgery patients. Other contributing factors include racial and sex disparities, access to care, hospital safety net status, and systemic issues within the healthcare system's response to adverse events.
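For readers unfamiliar with the Difference-in-Differences (DID) analyses cited in the ACA study above (PUBMED:31607385), a minimal generic sketch of the estimator is given below; the notation is illustrative and is not taken from the study itself. For an outcome Y (e.g., mortality or cost), the DID estimate contrasts the pre-to-post-ACA change among uninsured EGS patients with the same change among insured patients:

\[ \widehat{\mathrm{DID}} \;=\; \left(\bar{Y}^{\mathrm{post}}_{\mathrm{uninsured}} - \bar{Y}^{\mathrm{pre}}_{\mathrm{uninsured}}\right) \;-\; \left(\bar{Y}^{\mathrm{post}}_{\mathrm{insured}} - \bar{Y}^{\mathrm{pre}}_{\mathrm{insured}}\right) \]

Under the usual parallel-trends assumption, the reported "0.31% higher" DID mortality can be read as the mortality trend among uninsured patients worsening by roughly 0.31 percentage points relative to insured patients after the ACA.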
Instruction: Should we abstain from Gleason score 2-4 in the diagnosis of prostate cancer? Abstracts: abstract_id: PUBMED:26843845 Gleason score stratification according to age at diagnosis in 1028 men. Aim Of The Study: Gleason score stratification according to age at diagnosis has been retrospectively evaluated in 1028 men with biopsy-proven prostate cancer (PCa). Material And Methods: From January 2006 to December 2014, 2435 Caucasian men aged between 37 and 92 years underwent transperineal prostate biopsy for suspicion of PCa. The indications were as follows: abnormal digital rectal examination (DRE), PSA values > 10 ng/ml or between 4.1-10 or 2.6-4 ng/ml, with free/total PSA < 25% and < 20%, respectively. Results: In 1028 (42.2%) patients with median PSA of 9.6 ng/ml a PCa was found (median age 62.3 years; range: 42-92 years); 757 (73.7%) vs. 271 (26.3%) men had a T1c vs. T2 clinical stage, respectively. Median Gleason score was 7 (range: 6-10). The Gleason score progressively increased with the age of the patients at diagnosis, and a significant correlation between Gleason score ≥ 8 and men older than 80 years was demonstrated (p = 0.0001). Conclusions: The detection rate of aggressiveness of PCa progressively increased with the age at diagnosis; Gleason score ≥ 8 was more frequently diagnosed in men older than 80 years with PSA values > 10 ng/ml (about 80% of the cases) and abnormal DRE (about 60% of the cases). abstract_id: PUBMED:32869150 Role of multiparametric magnetic resonance imaging to predict postoperative Gleason score upgrading in prostate cancer with Gleason score 3 + 4. Background: To evaluate the role of multiparametric magnetic resonance imaging (mpMRI) in Gleason score (GS) 3 + 4 prostate cancer (PCa) and evaluate independent factors in mpMRI that can predict GS upgrading, we compared the outcomes of GS upgrading group and GS non-upgrading group. Patients And Methods: We analyzed the data of 539 patients undergoing radical prostatectomy (RP) for biopsy GS 3 + 4 PCa from two tertiary referral centers. Univariate and multivariate analyses were performed to determine significant predictors of GS upgrading. GS upgrading, the study outcome, was defined as GS ≥ 4 + 3 at definitive pathology at RP specimen. Results: GS upgrading rate was 35.3% and biochemical recurrence (BCR) rate was 8.0%. GS upgrading group was significantly older (p = 0.015), had significantly higher prebiopsy serum prostate-specific antigen (PSA) level (p = 0.001) and PSA density (p = 0.003), had a higher number of prostate biopsy (p = 0.026). There were 413 lesions (76.6%) of PI-RADS lesion ≥ 4, 236 (57.1%) for PI-RADS 4 and 177 (42.9%) for PI-RADS 5 lesion. Multivariate logistic regression analysis revealed that age (p = 0.045), initial prebiopsy PSA level (p = 0.002) and presence of PI-RADS lesion ≥ 4 (p = 0.044) are independent predictors of GS upgrading. Conclusion: MpMRI can predict postoperative Gleason score upgrading in prostate cancer with Gleason score 3 + 4. Especially, presence of clinically significant PI-RADS lesion ≥ 4, the significant predictor of GS upgrading, in preoperative mpMRI needs to be paid attention and can be helpful for patient counseling on prostate cancer treatment. abstract_id: PUBMED:36816145 The highest percentage of Gleason Pattern 4 is a predictor in intermediate-risk prostate cancer.
Objectives: This study aims to clarify the clinicopathological significance of several novel pathological markers, including the percentage of Gleason pattern 4 and small/non-small cribriform pattern, in intermediate-risk Gleason score 3 + 4 = 7 prostate cancer. Subjects And Methods: Two-hundred and twenty-eight patients with Gleason score 3 + 4 = 7 intermediate-risk prostate cancer who underwent radical prostatectomy between 2009 and 2019 at our institute were selected. Preoperative clinicopathological characteristics, including serum prostate-specific antigen level, clinical T stage, percentage of cancer-positive cores at biopsy, small/non-small cribriform pattern, the highest percentage of Gleason pattern 4, the total length of Gleason pattern 4 and percentage of Gleason score 7 cores were examined in univariate/multivariate logistic regression analysis to determine their predictive value for postoperative adverse pathological findings, defined as an upgrade to Gleason score 4 + 3 = 7 or higher, pN1 or pT3b disease. Results: Fifty-four cases (23.7%) showed adverse pathological findings. Although a non-small cribriform pattern, highest Gleason pattern 4 percentage and total length of Gleason pattern 4 were predictive of adverse pathological findings in univariate analysis, only the highest Gleason pattern 4 percentage was an independent predictive factor in multivariate analysis (odds ratio: 1.610; 95% confidence interval: 1.260-2.070; P = 0.0002). Conclusion: The highest Gleason pattern 4 percentage was a potent predictive parameter for Gleason score 3 + 4 = 7 intermediate-risk prostate cancer and should be considered in the risk classification scheme for prostate cancer. abstract_id: PUBMED:31380282 Nomograms Predict Survival Advantages of Gleason Score 3+4 Over 4+3 for Prostate Cancer: A SEER-Based Study. Background: Different proportions of Gleason pattern 3 and Gleason pattern 4 lead to various prognosis of prostate cancer with Gleason score 7. The objective of this study was to compare the survival outcomes of Gleason score 3+4 and 4+3 based on data from the Surveillance, Epidemiology, and End Results cancer registry database, and to investigate independent prognosis-associated factors and develop nomograms for predicting survival in Gleason score 7 prostate cancer patients. Methods: A retrospective study was conducted on 69,116 cases diagnosed as prostate adenocarcinoma with Gleason score 7 between 2004 and 2009. Prognosis-associated factors were evaluated using univariate and multivariate Cox regression analysis, and a 1:1 ratio paired cohort by propensity score matching with the statistical software IBM SPSS, to evaluate prognostic differences between Gleason score 3+4 and 4+3. The primary cohort was randomly divided into training set (n = 48,384) and validation set (n = 20,732). Based on the independent factors of prognosis, nomograms for prognosis were established by the training group and validated by the validation group using R version 3.5.0. Results: After propensity score matching, Cox regression analysis showed that Gleason 4+3 had an increased mortality risk both for overall survival (HR: 1.235, 95% CI: 1.179-1.294, P < 0.001) and cancer-specific survival (HR: 1.606, 95% CI: 1.468-1.762, P < 0.001). Nomograms for overall survival and cancer-specific survival were established with C-index 0.786 and 0.842, respectively.
The calibration plot indicated an optimal agreement between the actual observation and nomogram prediction for overall survival and cancer-specific survival probability at 5 or 10 year. Conclusions: Prostate cancer with Gleason score 4+3 had worse overall survival and cancer-specific survival than Gleason score 3+4. Nomograms were formulated to predict 5-year and 10-year OS and CSS in patients with prostate cancer of Gleason score 7. abstract_id: PUBMED:26207642 Gleason score 5 + 3 = 8 prostate cancer: much more like Gleason score 9? Objective: To determine whether patients with Gleason score 5 + 3 = 8 prostate cancer have outcomes more similar to other patients with Gleason score 8 disease or to patients with Gleason score 9 disease. Patients And Methods: The Surveillance, Epidemiology and End Results (SEER) database was used to study 40 533 men diagnosed with N0M0 Gleason score 8 or 9 prostate cancer from 2004 to 2011. Using Gleason score 4 + 4 = 8 as the referent, Fine and Gray competing risks regression analyses modelled the association between Gleason score and prostate cancer-specific mortality (PCSM). Results: The 5-year PCSM rates for patients with Gleason score 4 + 4 = 8, 3 + 5 = 8, 5 + 3 = 8, and 9 disease were 6.3%, 6.6%, 13.5%, and 13.9%, respectively (P < 0.001). Patients with Gleason score 5 + 3 = 8 or 9 disease had up to a two-fold increased risk of PCSM (adjusted hazard ratio [AHR] 1.89, 95% confidence interval [CI] 1.50-2.38, P < 0.001; and AHR 2.17, 95% CI 1.99-2.36, P < 0.001, respectively) compared with the referent group of patients (Gleason score 4 + 4 = 8). There was no difference in PCSM between patients with Gleason score 5 + 3 = 8 vs 9 disease (P = 0.25). Conclusions: Gleason score 8 disease represents a heterogeneous entity with PCSM outcomes distinguishable by the primary Gleason pattern. The PCSM of Gleason score 3 + 5 = 8 and Gleason 4 + 4 = 8 disease are similar, but patients with Gleason score 5 + 3 = 8 have a risk of PCSM that is twice as high as other patients with Gleason score 8 disease and should be considered to have a similar poor prognosis as patients with Gleason score 9 disease. Such patients should be allowed onto trials seeking the highest-risk patients in which to test novel aggressive treatment strategies. abstract_id: PUBMED:32997878 Combination of total length of Gleason pattern 4 and number of Gleason score 3 + 4 = 7 cores detects similar outcome group to Gleason score 6 cancers among cases with ≥5% of Gleason pattern 4. Expanding the inclusion criteria for active prostate cancer surveillance to include cases with a Gleason score (GS) of 3 + 4 = 7 has been discussed. GS 3 + 4 = 7 cases with a percentage of Gleason pattern 4 (%GP4) <5% were shown to be associated with similar outcomes with those of GS 6 cases. We examined the clinicopathological significance of %GP4 ≥5% with a limited amount of GP4. A total of 315 radical prostatectomy cases with GS 6 or 3 + 4 = 7 in a prior biopsy, were reviewed. The cases with the highest %GP4 ≥5% were subcategorized using the total length of GP4 (GP4-TL) and number of GS 3 + 4 = 7 cores. As outcome measures, the frequency of adverse pathology (AP) and the risk of biochemical recurrence (BCR) were compared between the GS 6 and 3 + 4 = 7 subgroups. In the %GP4 ≥5% subgroup, only cases with both GP4-TL <0.5 mm and 1 core of GS 3 + 4 = 7 showed similar outcome measures with those of GS 6 cancers. However, all other subgroups showed a higher frequency of AP and/or risk of BCR than GS 6 cancers.
Our results suggest that cases with %GP4 ≥5% with a limited amount of GP4 should be considered for inclusion in the active surveillance category. abstract_id: PUBMED:15046476 Evidence of the radical prostatectomy Gleason score in the biopsy Gleason score Introduction: The Gleason score (Gs) for prostatic cancer has a good prognosis correlation after radical prostatectomy, for this reason its correlation with the Gs in the biopsy can be useful. Patients And Methods: Two hundred fifteen patients with blind evaluation among three pathologists of their Gs in biopsy and in the corresponding radical prostatectomy specimen are presented. Results: The exact coincidence is present in 49.7% of cases, 38.6% of cases are under graded in the biopsy and 11.6% of them over graded in the biopsy. No cases of Gs 2 in the biopsy are found. No case with Gs 3 and 4 in the biopsy was reproduced in the radical prostatectomy specimen. The exact coincidence for biopsy Gs 5, 6, 7, 8 and 9 are 25%, 45%, 72.7%, 36.6% and 60% respectively (kappa 0.32 +/- 0.047, p<0.0001 in Gs 5 to 8). The Gleason pattern 4 is the least diagnosed in prostate biopsies (in 40% of cases with this pattern in the excision specimen it is missing in the biopsy). Conclusions: The Gs in the needle prostatic biopsy has a good correspondence with the Gs in the radical prostatectomy specimen. For an increase of the reproducibility it is recommendable to avoid the diagnosis of Gs 2, 3 and 4 in biopsy and to perform a scrupulous search for the patterns 4 and 5. abstract_id: PUBMED:34543668 Prognostic value of cribriform size, percentage, and intraductal carcinoma in Gleason score 7 prostate cancer with cribriform Gleason pattern 4. Cribriform Gleason pattern 4 (CGP4) is an indicator of poor prognosis in Gleason Score 7 prostate cancer; however, the significance of the size and percentage of this pattern and the presence of concomitant intraductal carcinoma (IDC) in these patients is unclear. To study the significance of these parameters in radical prostatectomy specimens, 165 cases with CGP4 were identified and reviewed (2017-2019). The size and percentage cribriform pattern and presence of IDC were noted and correlated with adverse pathological features and biochemical recurrence (BCR)-free survival. On review, 156 cases had CGP4 (Grade Group 2: 87 and Grade Group 3: 69). Large cribriform pattern and cribriform percentage of >20% showed significant association with extraprostatic extension, surgical margin positivity, and presence of IDC, whereas the presence of IDC was associated with all the analyzed adverse pathological features. BCR was seen in 22 of 111 (20%) patients after a median follow-up of 11 months, and of these, 21 had large cribriform pattern. On univariate analysis, all parameters had significant predictive values for BCR-free survival except for tertiary Gleason pattern 5. On multivariate analysis, while >20% cribriform pattern was trending to be an independent predictor, only lymphovascular invasion was statistically significant. Large cribriform pattern, >20% cribriform, and presence of IDC are additional pathologic parameters of potential value in identifying patients with high risk for early BCR. abstract_id: PUBMED:20535286 Upgrading of Gleason score on radical prostatectomy specimen compared to the pre-operative needle core biopsy: an Indian experience. Objectives: To assess the accuracy of Gleason grading/scoring on preoperative needle core biopsy (NCB) compared to the radical prostatectomy (RP) specimen.
Materials And Methods: Data of NCB and RP specimens was analyzed in 193 cases. Gleason grade/scoring was done on both NCB and RP specimens. Sixteen cases were excluded for various reasons. The Gleason scores of the two sets of matched specimens were compared and also correlated with the PSA, age, and number of needle biopsy cores. The overall change was also correlated with the initial score on NCB. Results: The mean age and PSA were 63.3+/-2(5.27) years and 18.48+/-2(28.42) ng/ml, respectively. The average Gleason score increased from 5.51 +/- 2(1.52) to 6.2 +/- 2(1.42) (P<0.02). The primary grade increased in 57 (32.2%) cases. Overall, 97 (54.8%) cases had an increase in Gleason score. Five other cases had a change from 3 + 4 = 7 to 4 + 3 = 7. Change in Gleason score was significantly more if the score on NCB was ≤6 or number of needle cores was ≤6. Besides, 28 cases had perineural invasion, 16 had capsular invasion (pT3(a)), and 4 had vascular invasion on RP specimen. Conclusions: There is a significant upgrading of Gleason score on RP specimens when compared with NCB. This trend may be correlated positively with lower initial Gleason score on preoperative biopsy and the lower number of cores taken. abstract_id: PUBMED:29719335 Incidence of metastasis and prostate-specific antigen levels at diagnosis in Gleason 3+4 versus 4+3 prostate cancer. Aims: The aim is to assess for a difference in the incidence of metastasis (IM) and prostate-specific antigen (PSA) levels at diagnosis in patients with Gleason score (GS) 3+4 versus 4+3 prostate cancer using a large veterans affairs database. Subjects And Methods: A retrospective review of 1402 medical records from 5 VA hospitals was conducted. The study period was from 2009 to 2014. Primary endpoints were IM and PSA levels at diagnosis. A secondary endpoint was overall survival. Statistical Analysis Used: Chi-square tests for categorical variables, Student's t-test for continuous, normally distributed variables, and rank sum tests for continuous nonnormally distributed variables. Results: There were 1050 patients with GS3+4 and 352 with GS4+3. There were no differences in sociodemographic and clinical characteristics of the study population. PSA at the time of diagnosis was significantly higher in the GS4+3 patients compared to GS3+4 (18.0 vs. 11.4, respectively; P < 0.001). The IM at diagnosis was higher in the GS4+3 patients (10/352) compared to GS3+4 (9/1041) (2.8% vs. 0.9%; P = 0.005). In an adjusted model, GS4+3 was associated with higher PSA, higher IM at diagnosis. There was no difference in overall survival between the 2 groups though a 23% reduction in overall survival in the GS4+3 was noted (P = 0.53). Conclusions: Our results indicate that patients with GS4+3 prostate cancers have higher PSA levels at diagnosis. GS4+3 is associated with 3-fold increased risk of IM at diagnosis than GS3+4 though the overall incidence is low. Further research is needed to assess whether GS4+3 patients need routine staging imaging investigations at the time of diagnosis similar to patients with higher Gleason scores (GS ≥8). Answer: Yes, it is recommended to avoid the diagnosis of Gleason score 2-4 in prostate cancer biopsies. The evidence suggests that the Gleason score in needle prostate biopsies has a good correspondence with the Gleason score in radical prostatectomy specimens, but for an increase in reproducibility, it is advisable to avoid the diagnosis of Gleason scores 2, 3, and 4 in biopsy.
This is because these lower scores are rarely found and may not accurately represent the cancer's aggressiveness. The Gleason pattern 4 is often underdiagnosed in prostate biopsies, with 40% of cases with this pattern in the excision specimen missing in the biopsy (PUBMED:15046476). Therefore, a more scrupulous search for patterns 4 and 5 is recommended to improve the accuracy of prostate cancer grading and subsequent treatment decisions.
Instruction: Prehospital point-of-care testing for troponin: are the results reliable? Abstracts: abstract_id: PUBMED:26395570 Point-of-care-testing in general practice Point-of-care testing (POCT) is being used increasingly in general practice and other healthcare contexts outside the hospital. Recommendations for correct and safe use of POCT in Dutch general practice have been laid down in the guideline 'Point-of-care testing in general practice'. The recommendations in this guideline are based on existing regulations and guidelines, both national and international, and respect the different roles and responsibilities within the healthcare chain. abstract_id: PUBMED:26567809 Review of point-of-care testing and biomarkers of cardiovascular diseases in emergency and prehospital medicine. Medical reasoning must be constructed on clinical evidence-based biology and follow a process of a priori assumptions. The introduction of a solution of point-of-care testing must result from any work involving clinicians, biologists, and administration. Several solutions of point-of-care testing allow the dosage of cardiac enzymes (CPK, myoglobin, and troponin) or BNP in less than half an hour time. The point-of-care testing saves time in obtaining the results earlier. It seems to allow timesaving on the overall care of the patient and the duration of his stay in the emergency department. By its technique and the relevance of its results, point-of-care testing is suitable for prehospital use. abstract_id: PUBMED:32782792 Prehospital point-of-care ultrasound: A transformative technology. Point-of-care ultrasound at the bedside has evolved into an essential component of emergency patient care. Current evidence supports its use across a wide spectrum of medical and traumatic diseases in a variety of settings. The prehospital use of ultrasound has evolved from a niche technology to impending widespread adoption across emergency medical services systems internationally. Recent technological advances and a growing evidence base support this trend. However, concerns regarding feasibility, education, and quality assurance must be addressed proactively. This topical review describes the history of prehospital ultrasound, initial training needs, ongoing skill maintenance, quality assurance and improvement requirements, available devices, and indications for prehospital ultrasound. abstract_id: PUBMED:29945284 Point-of-care Coagulation Testing in Neurosurgery Disorders of the coagulation system can seriously impact the clinical course and outcome of neurosurgical patients. Due to the anatomical location of the central nervous system within the closed skull, bleeding complications can lead to devastating consequences such as an increase in intracranial pressure or enlargement of intracranial hematoma. Point-of-care (POC) devices for the testing of haemostatic parameters have been implemented in various fields of medicine. Major advantages of these devices are that results are available quickly and that analysis can be performed at the bedside, directly affecting patient management. POC devices allow identification of increased bleeding tendencies and therefore may enable an assessment of hemorrhagic risks in neurosurgical patients. Although data regarding the use of POC testing in neurosurgical patients are limited, they suggest that coagulation testing and hemostatic therapy using POC devices might have beneficial effects in this patient population. 
This article provides an overview of the application of point-of-care coagulation testing in clinical practice in neurosurgical patients. abstract_id: PUBMED:34182857 Haemolysis in prehospital blood samples. The increasing use of Point Of Care Testing (POCT) in the prehospital setting demands a high and consistent quality of blood samples. We have investigated the degree of haemolysis in 779 prehospital blood samples and found a significant increase in haemolysis compared to intrahospital samples. The degree of haemolysis was within acceptable limits for current analyses. However, haemolysis should be taken into account when implementing future analyses in the prehospital field. abstract_id: PUBMED:37325996 Prehospital seizures: Short-term outcomes and risk stratification based in point-of-care testing. Background: Information for treatment or hospital derivation of prehospital seizures is limited, impairing patient condition and hindering patients risk assessment by the emergency medical services (EMS). This study aimed to determine the associated factors to clinical impairment, and secondarily, to determine risk factors associated to cumulative in-hospital mortality at 2, 7 and 30 days, in patients presenting prehospital seizures. Methods: Prospective, multicentre, EMS-delivery study involving adult subjects with prehospital seizures, including five advanced life support units, 27 basic life support units and four emergency departments in Spain. All bedside variables: including demographic, standard vital signs, prehospital laboratory tests and presence of intoxication or traumatic brain injury (TBI), were analysed to construct a risk model using binary logistic regression and internal validation methods. Results: A total of 517 patients were considered. Clinical impairment was present in 14.9%, and cumulative in-hospital mortality at 2, 7 and 30-days was 3.4%, 4.6% and 7.7%, respectively. The model for the clinical impairment indicated that respiratory rate, partial pressure of carbon dioxide, blood urea nitrogen, associated TBI or stroke were risk factors; higher Glasgow Coma Scale (GCS) scores mean a lower risk of impairment. Age, potassium, glucose, prehospital use of mechanical ventilation and concomitant stroke were risk factors associated to mortality; and oxygen saturation, a high score in GCS and haemoglobin were protective factors. Conclusion: Our study shows that prehospital variables could reflect the clinical impairment and mortality of patients suffering from seizures. The incorporation of such variables in the prehospital decision-making process could improve patient outcomes. abstract_id: PUBMED:24618924 Point-of-care testing in preclinical emergency medicine Background: Measurement of biological signals directly at the patient (point-of-care testing, POCT) is an established standard in emergency medicine when test results are needed quickly and within a reliable time frame or if external testing requires a disproportionate effort. Objectives: Currently, the rapid test for β-HCG in urine and POCT measurement of lactate, blood gases, cardiac troponin, haemoglobin, and hematocrit are well established in emergency medicine. POCT of copeptin, fatty acid-binding proteins (FABP), procalcitonin, coagulation values, natriuretic peptides, D-dimer, and toxicological substances are of future interest. In this article, the appropriate use of point-of-care testing in prehospital emergency medicine is discussed.
Results: Application of POCT is dependent on the underlying conditions, the availability of appropriate devices, and of suitable reference methods in a central laboratory. In addition, economic and quality aspects play an important role. Conclusion: In emergency departments, POCT is currently developing into a standard measuring method for a number of markers because hospital laboratories are increasingly being merged and consequently reduce their emergency-analytic services. In countries with a high density of hospitals, however, preclinical POCT should be reduced to the minimum necessary. abstract_id: PUBMED:37169442 Point-of-Care Testing for Sexually Transmitted Infections. Point-of-care testing for sexually transmitted infections is essential for controlling transmission and preventing sequelae in high-risk populations. Since the World Health Organization published the ASSURED criteria, point-of-care testing has improved for use in large population screening and rapid testing that prevents loss of clinical follow-up. Recent advancements have been advantageous for low-resource areas allowing testing at a minimal cost without reliable electricity or refrigeration. Point-of-care nucleic acid detection and amplification techniques are recommended, but are often inaccessible in low-resource areas. Future advancements in point-of-care diagnostic testing should focus on improving antibody-based assays, monitoring viral loads, and detecting antimicrobial resistance. abstract_id: PUBMED:30115777 BET 1: Prehospital cardiac troponin testing to 'rule out' acute coronary syndromes using point of care assays. A shortcut review of the literature was carried out to establish whether prehospital point of care (POC) troponin tests are reliable and accurate enough to detect acute coronary syndrome (ACS) in adult patients. Nine papers were found to be relevant to the clinical question following the below-described search strategies. The author, date and country of publication, patient group studied, study type, relevant outcomes, results and study weaknesses of those best papers are tabulated. It is concluded that based on the currently available evidence, POC troponin assays are insufficiently sensitive to 'rule out' ACS in the prehospital environment. abstract_id: PUBMED:31985326 Point-of-Care Troponin Testing during Ambulance Transport to Detect Acute Myocardial Infarction. Objective: Use of point-of-care (POC) troponin (cTn) testing in the Emergency Department (ED) is well established. However, data examining POC cTn measurement in the prehospital setting, during ambulance transport, are limited. The objective of this study was to prospectively test the performance of POC cTn measurement by paramedics to detect myocardial infarction (MI) among patients transported to the ED for acute chest pain. Methods: A prospective cohort study of adults with non-traumatic chest pain was conducted in three Emergency Medical Services agencies (December 2016 to January 2018). Patients with ST-elevation MI on ECG were excluded. During ambulance transport paramedics initiated intravenous access, collected blood, and used a POC device (i-STAT; Abbott Laboratories) to measure cTn. Following ED arrival, participants received standard evaluations including clinical blood draws for cTn measurement in the hospital central lab (AccuTnI +3 assay; Beckman Coulter, or cTnI-Ultra assay; Siemens). Blood collected during ambulance transport was also analyzed for cTn in the central lab.
Index visit MI was adjudicated by 3 experts using central lab cTn measures from the patient's clinical blood draws. Test characteristics (sensitivity, specificity, and predictive values) for detection of MI were calculated for POC and central lab cTn measurement of prehospital blood and compared with McNemar's test. Results: During the study period prehospital POC cTn results were obtained on 421 patients, of which 5.0% (21/421) had results >99th percentile upper reference limit. MI was adjudicated in 16.2% (68/421) during the index visit. The specificity and positive predictive value of the POC cTn measurement were 99.2% (95% CI 97.5-99.8%) and 85.7% (95% CI 63.7-97.0%) for MI. However, the sensitivity and NPV of prehospital POC cTn were 26.5% (95% CI 16.5-38.6%) and 87.5% (95% CI 83.9-90.6%). Compared to POC cTn, the central lab cTn measurement of prehospital blood resulted in a higher sensitivity of 67.9% (95% CI 53.7-80.1%, p < 0.0001), but lower specificity of 92.4% (95% CI 88.4-95.4%, p = 0.0001). Conclusions: Prehospital POC i-STAT cTn measurement in patients transported with acute chest pain was highly specific for MI but had low sensitivity. This suggests that prehospital i-STAT POC cTn could be useful to rule-in MI, but should not be used to exclude MI. Answer: Prehospital point-of-care testing (POCT) for troponin has been investigated to determine its reliability and accuracy in detecting acute coronary syndromes (ACS) during ambulance transport. The evidence suggests that while POCT for troponin in the prehospital setting is highly specific for myocardial infarction (MI), its sensitivity is low. A study examining the performance of POCT troponin measurement by paramedics found that the specificity and positive predictive value of the POCT troponin measurement were high, at 99.2% and 85.7% respectively. However, the sensitivity and negative predictive value were only 26.5% and 87.5%, indicating that while a positive POCT troponin result could be useful to rule-in MI, it should not be used to exclude MI due to the low sensitivity (PUBMED:31985326). Another review of the literature concluded that based on the currently available evidence, POCT troponin assays are insufficiently sensitive to 'rule out' ACS in the prehospital environment (PUBMED:30115777). This indicates that while POCT for troponin can provide rapid results, it may not be reliable enough to definitively exclude the diagnosis of ACS without further testing. In summary, prehospital POCT for troponin can provide specific results that may aid in the early detection of MI, but due to its low sensitivity, it cannot be solely relied upon to rule out ACS. Further evaluation in the hospital setting is necessary to confirm or exclude the diagnosis.
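The test characteristics quoted in the answer above follow from a standard 2x2 contingency table. The short Python sketch below back-calculates approximate cell counts from the figures reported in PUBMED:31985326 (421 patients, 68 adjudicated MIs, 21 positive point-of-care results, PPV 85.7%); the individual counts are therefore illustrative reconstructions rather than numbers taken directly from the study.

# Illustrative reconstruction of the prehospital POC troponin 2x2 table
# (approximate counts derived from percentages reported in PUBMED:31985326).
tp, fp = 18, 3          # 21 positive POC results; PPV 85.7% -> about 18 true positives
fn = 68 - tp            # 68 adjudicated MIs in total -> about 50 missed by the POC assay
tn = (421 - 68) - fp    # 353 patients without MI -> about 350 true negatives

sensitivity = tp / (tp + fn)   # ~0.265, matching the reported 26.5%
specificity = tn / (tn + fp)   # ~0.992, matching the reported 99.2%
ppv = tp / (tp + fp)           # ~0.857
npv = tn / (tn + fn)           # ~0.875
print(f"sens={sensitivity:.3f}, spec={specificity:.3f}, ppv={ppv:.3f}, npv={npv:.3f}")

The asymmetry is what drives the conclusion above: a positive prehospital result is highly informative because false positives are rare, while a negative result still leaves roughly three out of four MIs undetected.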
Instruction: Ultrasound-Guided Cervical Nerve Root Block: Does Volume Affect the Spreading Pattern? Abstracts: abstract_id: PUBMED:27009293 Ultrasound-Guided Cervical Nerve Root Block: Does Volume Affect the Spreading Pattern? Objective: Ultrasound-guided cervical nerve root block (US-CRB) is considered a safe and effective method for the treatment of radicular pain. However, previous studies on the spreading pattern of injected solution in US-CRB have reported conflicting results. The aim of this study was to investigate the spreading pattern in relation to injection volume. Design: An institutional, prospective case series. Setting: A university hospital. Subjects: Fifty-three patients diagnosed with mono-radiculopathy in C5, 6, or 7. Methods: US-CRB with fluoroscopic confirmation was performed. After the cervical roots were identified in ultrasound imaging, a needle was gently introduced toward the posterior edge of the root using an in-plane approach. The spread of 1 mL and 4 mL contrast medium, each injected in the same needle position, was examined with anteroposterior and lateral fluoroscopic views. After contrast injection, a mixture of local anesthetic and corticosteroid was injected. Clinical outcome was assessed using a numeric rating scale before and 2 weeks after the procedure. Results: Contrast medium did not spread into the epidural space in any patients with 1 mL contrast medium injection, but it did spread into the intraforaminal epidural space in 13 patients (24.5%) with 4 mL. Pain improved in all patients. There was no significant difference in pain relief according to the spreading pattern. Conclusion: The spreading pattern of injected solution in US-CRB could be partially affected by the injectant volume. However, further studies are needed to assess the importance of other factors, such as needle position and physiological effects. abstract_id: PUBMED:36387391 Ultrasound-guided injection technique of the equine cervical nerve roots. Radiculopathy in horses is often a diagnosis of exclusion because of the non-specific clinical signs related to neck pain and possible forelimb lameness. There are no reported treatment options in the equine veterinary literature. The purpose of the study was to describe an ultrasound-guided injection of the cervical nerve root C3 to C8, to evaluate accuracy, time and safety and to anticipate possible complications on clinical cases. Under general anesthesia and with ultrasound guidance, five horses were injected from C3 to C8 with 1.5mL mix of contrast and latex. Immediately after euthanasia, the necks were taken for CT examination and then dissection was performed 3 days later. Data regarding the accuracy of injection, the presence of injectate in the nerve root, vertebral vessel or vertebral canal were recorded from both CT and dissection. The time of injection and ability to visualize the nerve root prior to injection were also recorded. Out of 60 intended injections, 55 (CT images) and 57 (dissection) led to injectate deposited within the target zone with direct contact between contrast/latex and cervical nerve roots noted in 76.4% and 73.7%, respectively. Presence of contrast/latex injectate within nerves (≤11%), vertebral vessels (<4%) and canal (<4%) were rarely encountered. No variation on success rate or safety noted based on the site of injection. The technique described has excellent accuracy, with injectate deposition in direct contact (≈75%) or close vicinity (≈25%) of C3-C8 cervical nerve roots.
Injectate diffusion is likely to further improve success rate. Rare presence of injectate within nerve/sheath, vertebral vessels/canal along with diffusion warrants caution when performing this procedure in clinical cases. abstract_id: PUBMED:34934598 Ultrasound-Guided Procedures in the Cervical Spine. Cervical pain is a common symptom among the general population. When conservative strategies fail to provide pain relief, cervical spine injections may be considered. Compared with cervical surgery, cervical injections have low major complications and, with the right indication, have demonstrated good results. Traditionally, these types of procedures have been performed under fluoroscopy; however, in recent years, ultrasound has become a more common imaging modality to guide spinal injections. Although ultrasound presents an excellent quality image for soft tissue and allows the observation of vascular tissues, nerves, and the contour of bone surfaces, the cervical region has a complicated neurovascular network and a comprehensive understanding of the cervical sonoanatomy should remain as the basis before one can plan cervical ultrasound-guided intervention. This paper aims to show the advantages of ultrasound in facilitating the performance of cervical spine procedures, including facet joint injections, medial branch blocks, and selective nerve root blocks; analyze the sonoanatomy and landmarks of commonly intervened cervical structures; and illustrate how these procedures can be performed safely and precisely under ultrasound guidance. abstract_id: PUBMED:36908928 Evaluating the Extent of Ultrasound-Guided Cervical Selective Nerve Root Block in the Lower Cervical Spine: Evidence Based on Computed Tomography Images. Objective: To verify the injectate dispersal patterns (IDP) and therapeutic outcome of ultrasound-guided cervical selective nerve root block (UG-SCNRB) in treating cervical radiculopathy (CR). Methods: Overall, 18 CR patients were recruited to undergo UG-SCNRB in the CT room. Following placement of the puncture needle tip between the target nerve root and posterior tubercle, 3 mL of the drug was administered per root (0.33% lidocaine 0.5 mL + Compound betamethasone injection 0.5mL + methylcobalamin injection 1mL + iohexol 1mL). Subsequently, the IDP was assessed on postintervention CT scan images. Results: In all, 18 participants were analyzed. We injected 21 target cervical nerve roots, namely, 1 C4 nerve, 9 C5 nerves, and 11 C6 nerves. Among the IDPs on postintervention CT scan images, two IDPs were most prevalent, namely, the contrast spread into the extraforaminal spaces (Zone I, the interscalene) in 100% (21/21) of cases, and the foraminal space spread (Zone II) in 61.90% (13/21) of cases. The injectate spread into the epidural spaces (Zone III) in only 2 out of 21 cases (9.52%). The pain relief was significantly improved two hours after surgery, compared to the preoperative VAS pain scores (2 hours, 1.39±0.50 vs VAS at baseline, P<0.01). The VAS pain scores during follow-up were significantly lower than preoperation (1 weeks, 1.94±0.54 vs VAS at baseline; 2 weeks, 2.61±0.70, P<0.01 vs VAS at baseline; 4 weeks, 2.67±0.59, P<0.01 vs VAS at baseline). Conclusion: We verified, via CT imaging, that the UG-SCNRB drug diffusion was within safe range (the injectate mainly spread to the extraforaminal spaces), and without any serious complications, such as, intravascular drug injection, extensive diffusion of the epidural space, and general spinal anesthesia.
abstract_id: PUBMED:33991730 Coblation Discoplasty Alleviates Cervical Chest Pain After Positive Ultrasound-Guided Nerve Root Block: A Retrospective Study. Objective: Cervical chest pain (CCP), as 1 atypical symptom associated with cervical spondylosis, often overlaps with other chest-related diseases. CCP obviously relieved after ultrasound-guided cervical nerve root block near a herniated disc should be considered as a potential pathologic source. The purpose of this study is to investigate whether coblation discoplasty can alleviate CCP after positive ultrasound-guided nerve root block. Methods: From August 2016 to September 2019, 21 patients with high suspicion of CCP experienced over 50% pain relief after ultrasound-guided diagnostic nerve root block. Through 12 months of follow-up, the primary efficacy was assessed with visual analogue scale (VAS) of CCP, and secondary outcomes included: neck pain VAS, neck disability index (NDI), the proportion of significant CCP relief, the rating of CCP alleviation, the patient satisfaction index (PSI), and analgesic consumption. Adverse events were recorded to evaluate safety. Results: Following postoperative 12 months, a time-course analysis confirmed a robust decline in VAS of CCP (P < 0.0001), and a similar recovery trend was shown in VAS of neck pain and NDI (P < 0.0001). After treatment, the number of patients taking analgesics decreased (P < 0.0001), and around 60% of patients reported notable relief and satisfaction with treatment. No serious complications were observed. Conclusions: After positive ultrasound-guided nerve root block, coblation discoplasty can provide up to 12 months of relief for intractable CCP. abstract_id: PUBMED:27222157 Combined fluoroscopic and ultrasound guided cervical nerve root injections. Purpose: To assess the technical feasibility, safety and initial clinical efficacy of a combined ultrasound and fluoroscopy imaging approach to cervical nerve root blocks. Fluoroscopic guided cervical transforaminal and selective nerve root injections are often used in the investigation or treatment of radicular symptoms, although rare but serious complications including death have been reported. We report a combined technique developed to increase safety of selective nerve root injections, including the safety and early efficacy of this novel technique in our initial patient cohort. Methods: We retrospectively reviewed a consecutive cohort of injections performed in 149 patients by a single consultant radiologist between December 2010 and August 2012. For all patients the outcome was assessed both immediately following the procedure and at six weeks. Primary outcome was reduction in radicular symptom level. Duration of symptoms were also assessed and all complications were recorded. Results: One hundred and forty nine patients underwent injection at either one or two cervical levels. No patients experienced any complications during the follow-up period, and 72 % had an initial positive response to the injection. Of these, 42 % were discharged to the care of their General Practitioner, 23 % went on to have surgery, 18 % were actively monitored in a specialist clinic, 10 % were referred to our pain management service and 4 % had the injection repeated after symptoms recurred. Conclusion: Using this combined image guided technique cervical nerve root blocks appear both safe and effective in the investigation and management of radicular symptoms from the cervical spine.
abstract_id: PUBMED:32039292 Vascular Evaluation around the Cervical Nerve Roots during Ultrasound-Guided Cervical Nerve Root Block. Introduction: To carry out ultrasound-guided cervical nerve root block (CNRB) safely, we investigated the frequency of risky blood vessels around the target nerve root and within the imaginary needle pathway in the actual injecting position. Methods: 30 patients (20 men, 10 women) with cervical radiculopathy who received ultrasound-guided CNRB were included in this study. We defined a risky blood vessel as an artery existing within 4 mm from the center of the target nerve root or located in the range of 2 mm above or below the imaginary needle pathway. Results: Using the color Doppler method, the frequency of a risky blood vessel existing around 4 mm from the center of the C5 nerve root was 3.3% (1/30), whereas it was 3.3% (1/30) for the C6 nerve root and 23.3% (7/30) for the C7 nerve root. Hence, the C7 level had more blood vessels close to the target nerve root compared to the C5 and C6 levels, but there was no significant difference (p = 0.0523). On the other hand, the frequency of a risky blood vessel existing within 2 mm above and below the imaginary needle pathway was 3.3% (1/30) for the C5 nerve root, whereas it was 3.3% (1/30) for the C6 nerve root and 10.0% (3/30) for the C7 nerve root. The C7 level had more blood vessels within the needle pathway compared to the C5 and C6 levels, but there was no significant difference (p = 0.301). Conclusions: To reduce the risk of unintended intravascular injections, more careful checking for the presence or absence of blood vessels at the C7 level using color Doppler is necessary. abstract_id: PUBMED:36164681 An open-label non-inferiority randomized trail comparing the effectiveness and safety of ultrasound-guided selective cervical nerve root block and fluoroscopy-guided cervical transforaminal epidural block for cervical radiculopathy. Object: To compare therapeutic efficacy and safety of ultrasound (US)-guided selective nerve root block (SNRB) and fluoroscopy (FL)-guided transforaminal epidural steroid injection (TFESI) for cervical spine radiculopathy (CSR). Method: 156 patients with CSR randomly received US-guided SNRB verified by FL or FL-guided TFESI. We hypothesised that the accuracy rate of contrast dispersion into epidural or intervertebral foraminal space in the US group was not inferior to that in the FL group with a margin of clinical unimportance of -15%. Pain intensity assessed by Numeric Rating Scales (NRS) and functional disability estimated by neck disability index (NDI) were compared before treatment, at 1, 3 and 6 months after the intervention. Puncture time and complication frequencies were also reported. Results: 88.7% and 90.3% accuracy ratings were respectively achieved in the US and FL groups with a treatment difference of -1.6% (95%CI: -9.7%, 6.6%) revealing that the lower limit was above the non-inferiority margin. Both NRS and NDI scores illustrated improvements at 1, 3 and 6 months after intervention with no statistically significant differences between the two groups (all p > .05). Additionally, shorter administration duration was observed in the US group (p < .001). No severe complications were observed in both groups. Conclusion: Compared with the FL group, the US group provided a non-inferior accuracy rate of epidural/foraminal contrast pattern.
For the treatment of CSR, the US technique provided similar pain relief and functional improvements while facilitating distinguishing critical vessels adjacent to the foramen and requiring a shorter procedure duration without exposure to radiation. Therefore, it was an attractive alternative to the conventional FL method. Key messages: We conducted a prospective, open-label, randomised and non-inferiority clinical trial to estimate a hypothesis that the precisely accurate delivery through ultrasound (US)-guided cervical selective nerve root block (SNRB) was non-inferior to that using FL-guided transforaminal epidural steroid injection. Additionally, US-guided SNRB was as effective as FL-guided TFESI in the treatment effect on pain relief and function improvements. Notably, the US technique might be an alternative to the conventional FL method due to the ability to prevent inadvertent vascular puncture (VP) and intravascular injection (IVI) with a shorter administration time and absence of radiation exposure. abstract_id: PUBMED:26740490 Ultrasound-Guided Lower Cervical Nerve Root Injectate Volumes Associated With Dorsal Root Ganglion and Epidural Spread. Objectives: We aimed to estimate the spread of injections for ultrasound-guided cervical nerve root blocks and to determine the optimal injectate volume required in this procedure. Methods: A total of 32 ultrasound-guided injections (C5-C8) were made in 4 fresh cadavers. The target on each cervical root was the space between the posterior tubercle and the cervical root at the most proximal location possible on the sonogram. After ultrasound-guided needle insertion, 0.5 mL of a contrast medium was injected 4 times. The dye flow patterns were confirmed with fluoroscopy each time, and we recorded whether the contrast medium reached the dorsal root ganglion level or the epidural space. After the injections, the needle tip location was determined by computed tomography and image reconstruction. Results: All injections produced typical neurograms. The contrast medium reached the dorsal root ganglion in 29 of 32 (90.6%) injections (mean ± SD, 0.84 ± 0.42 mL of contrast medium) and the epidural space in 10 of 32 (31.3%) injections (1.30 ± 0.54 mL of contrast medium). The mean distance between the needle tip and neural foramen was 9.64 ± 3.68 mm, and this distance correlated positively with the volume of contrast medium necessary to reach the dorsal root ganglion or the epidural space. Conclusions: Ultrasound-guided cervical nerve root blocks show potential utility for targeting an anesthetic into the cervical root area. This study may be helpful for deciding the most appropriate volume for the procedure. abstract_id: PUBMED:21692976 Ultrasound-guided cervical nerve root block: spread of solution and clinical effect. Objectives: We investigated the clinical effects and accuracy of ultrasound-guided cervical nerve root block. Additionally, spinal level and spread of injected solution were confirmed by anatomic dissection of fresh cadavers. DESIGN SETTING, PATIENTS, AND INTERVENTIONS: Twelve patients diagnosed with mono-radiculopathy between C5-7 underwent ultrasound-guided nerve root block. An insulated needle was advanced with an in-plane approach using nerve stimulation and 2 mL of 0.375% ropivacaine with 4 mg of dexamethasone was injected using nerve stimulation.
Blue dye (2 mL) was injected onto each nerve root and anatomic dissection was performed to confirm the exact spinal level and spread pattern of the dye. Results: Pain score before the procedure (65 [46-80], median [interquartile range]) was decreased to 25 [3-31] at 24 hours (P = 0.003) and 40 [28-66] at 30 days (P = 0.02) after the root block. Obvious side effects were not seen. All target nerve roots in patients and cadavers were correctly identified by ultrasound imaging. The needle tip did not reach the pedicle of the vertebral arch in the anteroposterior view of fluoroscopy, and spread pattern of contrast medium was extraforaminal and extraneural. Conclusions: This study suggests that injected solution by ultrasound-guided cervical nerve root block mainly spreads to the extraforaminal direction compared with conventional fluoroscopic technique. Therefore, present clinical study involves possibility of safer selective nerve root block with sufficient analgesic effects by ultrasound guidance, despite the absence of intraforaminal epidural spread of solution. Answer: The spreading pattern of the injected solution in ultrasound-guided cervical nerve root block (US-CRB) can be affected by the volume of the injectant. A study investigating the spread of 1 mL and 4 mL contrast medium injected in the same needle position during US-CRB found that with 1 mL, the contrast medium did not spread into the epidural space in any patients. However, with 4 mL, the contrast medium did spread into the intraforaminal epidural space in 13 out of 53 patients (24.5%) (PUBMED:27009293). This suggests that a larger volume of injectant may increase the likelihood of the solution spreading into the epidural space. Another study evaluating the extent of ultrasound-guided cervical selective nerve root block (UG-SCNRB) in the lower cervical spine using computed tomography images found that the injectate mainly spread to the extraforaminal spaces, with the contrast spreading into the foraminal space in 61.90% of cases and into the epidural spaces in only 9.52% of cases (PUBMED:36908928). This indicates that the spreading pattern can be assessed and verified using imaging techniques such as CT scans. Furthermore, a study on ultrasound-guided lower cervical nerve root injectate volumes associated with dorsal root ganglion and epidural spread found that the volume of contrast medium necessary to reach the dorsal root ganglion or the epidural space correlated positively with the distance between the needle tip and neural foramen (PUBMED:26740490). This suggests that the volume required for the injectate to spread to specific anatomical locations may vary depending on the needle tip's proximity to the target area. In summary, the volume of the injectant used in US-CRB does affect the spreading pattern, with larger volumes increasing the likelihood of the solution reaching the epidural space. However, the optimal volume for the procedure may also depend on the needle tip's position relative to the target nerve root and the desired spread to specific anatomical locations.
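As a brief aside on the non-inferiority result reported in PUBMED:36164681 above, the conclusion follows the usual confidence-interval rule; in generic notation (not the authors' own symbols), the observed accuracy difference was

\[ \hat{p}_{\mathrm{US}} - \hat{p}_{\mathrm{FL}} = 88.7\% - 90.3\% = -1.6\%, \qquad 95\%\ \mathrm{CI} = (-9.7\%,\ 6.6\%), \]

and non-inferiority at the prespecified margin of 15 percentage points holds because the lower confidence bound satisfies \(-9.7\% > -15\%\).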
Instruction: Partial status epilepticus in benign childhood epilepsy with centrotemporal spikes: are independent right and left seizures a risk factor? Abstracts: abstract_id: PUBMED:12181016 Partial status epilepticus in benign childhood epilepsy with centrotemporal spikes: are independent right and left seizures a risk factor? Purpose: To describe an association between continuous simple partial seizures and independent right and left partial seizures in children with benign childhood epilepsy with centrotemporal spikes (BCECTS). Methods: Three children with BCECTS and episodes of continuous simple partial seizures are described. Results: All three children had a history of typical rolandic seizures occurring on the right and left sides of the body on different occasions. Conclusions: The occurrence of independent right and left rolandic seizures in children with BCECTS may be a predisposing factor for the development of partial status epilepticus. abstract_id: PUBMED:25667840 Centrotemporal spikes during NREM sleep: The promoting action of thalamus revealed by simultaneous EEG and fMRI coregistration. Benign childhood epilepsy with centrotemporal spikes (BECTS) has been investigated through EEG-fMRI with the aim of localizing the generators of the epileptic activity, revealing, in most cases, the activation of the sensory-motor cortex ipsilateral to the centrotemporal spikes (CTS). In this case report, we investigated the brain circuits hemodynamically involved by CTS recorded during wakefulness and sleep in one boy with CTS and a language disorder but without epilepsy. For this purpose, the patient underwent EEG-fMRI coregistration. During the "awake session", fMRI analysis of right-sided CTS showed increments of BOLD signal in the bilateral sensory-motor cortex. During the "sleep session", BOLD increments related to right-sided CTS were observed in a widespread bilateral cortical-subcortical network involving the thalamus, basal ganglia, sensory-motor cortex, perisylvian cortex, and cerebellum. In this patient, who fulfilled neither the diagnostic criteria for BECTS nor that for electrical status epilepticus in sleep (ESES), the transition from wakefulness to sleep was related to the involvement of a widespread cortical-subcortical network related to CTS. In particular, the involvement of a thalamic-perisylvian neural network similar to the one previously observed in patients with ESES suggests a common sleep-related network dysfunction even in cases with milder phenotypes without seizures. This finding, if confirmed in a larger cohort of patients, could have relevant therapeutic implication. abstract_id: PUBMED:37331959 The Prevalence and Risk Factors of Electrical Status Epilepticus During Slow-Wave Sleep in Self-Limited Epilepsy With Centrotemporal Spikes. Objective. To investigate the prevalence and risk factors for electrical status epilepticus during slow-wave sleep (ESES) in patients with self-limited epilepsy with centrotemporal spikes (SeLECTS). Methods. The clinical and follow-up data of children with SeLECTS were collected between 2017 and 2021. Patients were divided into typical ESES, atypical ESES, and non-ESES groups according to spike-wave indices (SWI). Clinical and electroencephalography characteristics were retrospectively analyzed. Logistic regression was used to identify risk factors for ESES. Results. A total of 95 patients with SeLECTS were enrolled. 
Seven patients (7.4%) developed typical ESES, 30 (31.6%) developed atypical ESES, 25 (26.3%) developed ESES at the first visit, and 12 (12.6%) developed ESES during treatment and follow-up. Multivariate logistic regression analysis showed that the risk factors for SeLECTS combined with ESES were Rolandic double or multiple spikes (OR = 8.626, 95% CI: 2.644-28.147, P < .001) and Rolandic slow waves (OR = 53.550, 95% CI: 6.339-452.368, P < .001). There were no significant differences in seizure characteristics, electroencephalogram (EEG) findings, or cognitive impairment between the atypical and typical ESES groups. Conclusion. More than one-third of the SeLECTS patients had comorbid ESES. Both atypical and typical ESES scores can affect cognitive function. On electroencephalography, interictal Rolandic double/multiple spikes and slow-wave abnormalities may indicate SeLECTS with ESES. abstract_id: PUBMED:34025572 Treatment for the Benign Childhood Epilepsy With Centrotemporal Spikes: A Monocentric Study. Background and Purpose: To date, there is no specific treatment guideline for the benign childhood epilepsy with centrotemporal spikes (BECTS). Several countries recommend levetiracetam, carbamazepine, sodium valproate, oxcarbazepine, and lamotrigine as first-line drugs. Nevertheless, some of these drugs are associated with cognitive decline. Available studies that investigated the efficacy of levetiracetam and sodium valproate on BECTS involved small sample sizes. This study aimed to evaluate the efficacy of levetiracetam and sodium valproate on cognition, and to investigate the prognostic factors for BECTS as a whole. Methods: Clinical data and treatment status of all patients with BECTS at Xiangya Hospital, Central South University followed from 2008 to 2013 were analyzed retrospectively. Since electrical status epilepticus in sleep (ESES) has been confirmed to play a role in cognitive deterioration, in order to evaluate the response to drugs and their cognitive effects, we created two groups of patients according to the levels of spike wave index (SWI): group 1: 0-50% SWI and group 2: >50% SWI at the last follow-up. Results: A total of 195 cases were enrolled: 49.7% received monotherapies, 24.1% duotherapies and 27.2% polytherapies. Medications included levetiracetam plus other drug(s) (75.9%), levetiracetam alone (32.8%), sodium valproate plus other drug(s) (31.3%), and sodium valproate alone (5.1%). After 2 years of treatment and follow-up, 71% of the cases had a good seizure outcome, 15.9% had an improvement of SWI, and 91.7% had a normal DQ/IQ. Sodium valproate combined with levetiracetam, and sodium valproate alone, correlated with good improvement of SWI, whereas focal spikes were linked with poor improvement. For both groups (group 1 and group 2): monotherapy, levetiracetam alone, and a normal DQ/IQ at seizure onset correlated with good cognitive outcomes; in contrast, polytherapy, sodium valproate plus other drug(s), levetiracetam plus sodium valproate, an initial SWI of ≥85%, and multifocal spikes were linked to cognitive deterioration. Conclusions: Monotherapy, particularly levetiracetam, seems to be a good first-line therapy which can help in normalizing the electroencephalograph and preventing cognitive decline. Polytherapy, mostly the administration of sodium valproate, seems to relate with poor cognition; therefore, it is recommended to avoid it.
abstract_id: PUBMED:37388546 Self-limited childhood epilepsies are disorders of the perisylvian communication system, carrying the risk of progress to epileptic encephalopathies-Critical review. "Sleep plasticity is a double-edged sword: a powerful machinery of neural build-up, with a risk to epileptic derailment." We aimed to review the types of self-limited focal epilepsies: (1) self-limited focal childhood epilepsy with centrotemporal spikes, (2) atypical Rolandic epilepsy, and (3) electrical status epilepticus in sleep with mental consequences, including Landau-Kleffner-type acquired aphasia, showing their spectral relationship and discussing the debated topics. Our endeavor is to support the system epilepsy concept in this group of epilepsies, using them as models for epileptogenesis in general. The spectral continuity of the involved conditions is evidenced by several features: language impairment, the overarching presence of centrotemporal spikes and ripples (with changing electromorphology across the spectrum), the essential timely and spatial independence of interictal epileptic discharges from seizures, NREM sleep relatedness, and the existence of the intermediate-severity "atypical" forms. These epilepsies might be the consequences of a genetically determined transitory developmental failure, reflected by widespread neuropsychological symptoms originating from the perisylvian network that have distinct time and space relations from secondary epilepsy itself. The involved epilepsies carry the risk of progression to severe, potentially irreversible encephalopathic forms. abstract_id: PUBMED:2507302 Prolonged intermittent drooling and oromotor dyspraxia in benign childhood epilepsy with centrotemporal spikes. Prolonged isolated sialorrhea of epileptic origin was described by Penfield and Jasper (1954) in a patient with a lesional epilepsy. A child with prolonged but intermittent drooling, lingual dyspraxia, and other clinical and electroencephalographic (EEG) features compatible with benign childhood epilepsy with centrotemporal spikes (BCECS) is described. The fluctuant course of the symptomatology and correlation with the intensity of the paroxysmal discharges on EEG are consistent with an epileptic dysfunction located in the lower rolandic fissure. No lesion was demonstrated by magnetic resonance imaging (MRI). Our case bears analogies with the recently reported status epilepticus of BCECS and the "acquired aphasia-epilepsy syndrome." abstract_id: PUBMED:32086099 Focal cortical hypermetabolism in atypical benign rolandic epilepsy. Objective: Atypical benign rolandic epilepsy (BRE) is an underrecognized and poorly understood manifestation of a common epileptic syndrome. Most consider it a focal epileptic encephalopathy in which frequent, interictal, centrotemporal spikes lead to negative motor seizures and interfere with motor and sometimes speech and cognitive abilities. We observed focal cortical hypermetabolism on PET in three children with atypical BRE and investigated the spatial and temporal relationship with their centrotemporal spikes. Methods: EEG, MRI and PET were performed clinically in three children with atypical BRE. The frequency and source localization of centrotemporal spikes was determined and compared with the location of maximal metabolic activity on PET.
Results: Cortical hypermetabolism on thresholded PET t-maps and current density reconstructions of centrotemporal spikes overlapped in each child, in the central sulcus region, the distances between the "centers of maxima" being 2 cm or less. Hypermetabolism was not due to recent seizures or frequent centrotemporal spikes at the time of FDG uptake. Significance: The findings suggest that localized, increased cortical activity, in the region of the EEG focus, underlies the negative clinical manifestations of atypical BRE. Similar findings are reported in the broader group of epileptic encephalopathies associated with electrical status epilepticus in sleep. abstract_id: PUBMED:11071484 Panayiotopoulos-type benign childhood occipital epilepsy: a prospective study. Objective: To characterize the clinical and EEG features of the syndrome of benign childhood partial seizures with ictal vomiting and EEG occipital spikes (Panayiotopoulos syndrome [PS]). Methods: Prospective study of children with normal general and neurologic examinations who had seizures with ictal vomiting and EEG with occipital spikes. Results: From February 1990 to 1997, the authors found 66 patients with PS and 145 children with benign childhood epilepsy with centrotemporal spikes. Peak age at onset of PS was 5 years. Ictal deviation of the eyes and progression to generalized seizures were common. One-third had partial status epilepticus. During sleep, all had seizures. While awake, one-third also had seizures. Five children with PS had concurrent symptoms of rolandic epilepsy and another five developed rolandic seizures after remission of PS. Prognosis was excellent: one-third had a single seizure, one-half had two to five seizures, and only 4.5% had frequent seizures. Conclusions: Panayiotopoulos-type benign childhood occipital epilepsy is less common than benign childhood epilepsy with centrotemporal spikes but is well defined and recognizable by clinical and EEG features. abstract_id: PUBMED:28734769 Influence of epileptic activity during sleep on cognitive performance in benign childhood epilepsy with centrotemporal spikes. Background: Benign childhood epilepsy with centrotemporal spikes is benign childhood epilepsy, presenting between 4 and 10 years of age, characterized by typical clinical and EEG findings. Despite excellent prognosis, there are reports of mild cognitive, language, fine motor and behavioral difficulties. In its atypical form - electrical status epilepticus during slow wave sleep, continuous epileptiform activity during sleep lead to severe neurocognitive deterioration. Our objective was to investigate the influence of abundant sleep epileptiform activity, not fulfilling the criteria for electrical status epilepticus during Slow Wave Sleep, discovered randomly in children without overt intellectual impairment. Methods: We retrospectively reviewed the charts and EEG's of 34 children with benign childhood epilepsy with centrotemporal spikes, who underwent neurocognitive evaluation. The neurocognitive battery included items in the following domains: attention span, memory, language, fine motor and behavior. Patients were divided into two groups according to the spike wave index on sleep EEG, with a cut-off point of 50%. The groups were compared regarding to neurocognitive performance. Outcomes: Children with epileptiform activity of more than 50%, were diagnosed at a significantly younger age (5.13 ± 1.94 years vs. 
7.17 ± 2.45, p = 0.014 T test), had less controlled seizures and received more antiepileptic drugs. However, there was no difference in neurocognitive performance, except in fine motor tasks (Pegboard), where children with more abundant activity were scored lower (-0.79 ± 0.96 vs. 0.20 ± 1.05, p = 0.011, T test). Conclusion: Our study did not show a negative cognitive effect of abundant epileptiform activity discovered randomly in children with benign childhood epilepsy with centrotemporal spikes that would warrant aggressive treatment. abstract_id: PUBMED:33126368 Continuous epileptic negative myoclonus as the first seizure type in atypical benign epilepsy with centrotemporal spikes. To figure out which diagnosis is more suitable and which antiepileptic drugs are more sensitive to epileptic negative myoclonus (ENM) as the first seizure type in atypical benign epilepsy with centrotemporal spikes. We reviewed the electroencephalogram (EEG) database of Linyi People's Hospital Affiliated to Shandong University and medical records of patients with ENM onset. The characteristics of epileptic seizures, onset age, treatment process, growth and development history, past disease history, family history, degree of mental deterioration, cranial imaging, and video-EEG were studied retrospectively and followed up. There were 4 cases with ENM onset and 1 with continuous ENM, 3 males and 1 female. The onset age was from 2 years 3 months to 8 years 7 months. The cranial magnetic resonance imaging (MRI) and developmental quotient, as well as the family, personal, and past disease history, were normal. Frequent falls and drops were the main clinical manifestations. Five months after the onset of ENM, case 1 had focal seizures in sleep. ENM was the first and only manifestation in all the other 3 children. Discharges of interictal EEG were in bilateral rolandic areas, especially in midline areas (Cz, Pz); electrical status epilepticus in sleep was found in 3 cases. One child was sensitive to levetiracetam, the other 3 were sensitive to clonazepam. ENM can affect the upper or lower extremities. ENM as the first or only symptom was a special phenomenon in benign epilepsy with centrotemporal spikes (BECTS) variants. Ignorance of midline spikes mainly in Cz or Pz in BECTS might lead to missed diagnosis of ENM. Whether benzodiazepines are viable as a choice of BECTS variants with electrical status epilepticus in sleep when ENM is the first symptom still needs a large sample evidence-based observation. Answer: Yes, the occurrence of independent right and left rolandic seizures in children with benign childhood epilepsy with centrotemporal spikes (BCECTS) may be a predisposing factor for the development of partial status epilepticus. This association is described in a study where three children with BCECTS who experienced episodes of continuous simple partial seizures had a history of typical rolandic seizures occurring on both the right and left sides of the body on different occasions (PUBMED:12181016).
Instruction: Is there an outcome difference between posterior urethral valves diagnosed prenatally and postnatally at the time of antenatal screening? Abstracts: abstract_id: PUBMED:23352308 Is there an outcome difference between posterior urethral valves diagnosed prenatally and postnatally at the time of antenatal screening? Purpose: Posterior urethral valves (PUV) diagnosed during childhood have classically been associated with a better outcome than antenatally diagnosed PUV. The aim of our study was to compare the long-term outcome of these two patient groups. Material And Methods: We retrospectively reviewed the medical records of boys with PUV managed between 1990 and 2010. Patient demographics, clinical background, radiographic data (including prenatal ultrasonography data when available), renal and bladder functional outcomes, surgical procedures and urinary tract infections (UTI) were abstracted. Impaired renal function (IRF) was defined as glomerular filtration rate less than 90 mL/min/1.73 m(2) at last follow-up. Results: We identified 69 patients with confirmed PUV. Thirty-eight were diagnosed prenatally (group 1) at 30.5 weeks of gestation and 31 had a delayed diagnosis (group 2) at a median age of 6.31 years. At diagnosis, 20 patients in group 1 had renal insufficiency versus two in group 2 (P<0.05). At the end of mean follow-up of 7.2 ± 0.5 years, in group 1, 26.3% developed IRF versus 6.3% in group 2 (mean follow-up 2.3 years). Mean age at last follow-up was 7.3 years in group 1 versus 8.3 in group 2 (P>0.05). In group 1, 27% had voiding dysfunction versus 30% in group 2 (NS). In group 1, 35% had UTI during follow-up versus 10% (P=0.01). Conclusion: During the follow-up, the patients with delayed PUV diagnosis developed fewer complications related to the initial obstruction than the population detected antenatally and managed from the early hours of life. However, the rate of IRF and voiding disorders in our study, together with the data of the literature, highlights the potential persistence and worsening of these conditions. That is why, whatever the age at diagnosis, PUV patients require close monitoring. abstract_id: PUBMED:19237812 Causes and outcome of prenatally diagnosed hydronephrosis. Hydronephrosis is the most common abnormal finding in the urinary tract on prenatal screening with ultrasonography (U/S). Hydronephrosis may be obstructive or non-obstructive; obstructive lesions are more harmful to the developing kidneys. The aim of the study was to evaluate the causes of renal pelvic dilatation and the outcome of postnatal treatment in infants with hydronephrosis diagnosed prenatally with U/S. We prospectively studied 67 (60 males) newborns with hydronephrosis diagnosed prenatally and confirmed postnatally with U/S from Sept. 2005 to Oct. 2007. The patients were allocated to three groups based on the measurement of the anteroposterior renal pelvic diameter (APRPD) in the transverse plane: mild (6-9.9 mm), moderate (10-14.9 mm) and severe (≥15 mm) hydronephrosis. Voiding cystourethrography (VCUG) was obtained in all of the patients to rule out vesicoureteral reflux (VUR). In cases with negative VUR, Diethylenetriamine-pentaacetic acid (DTPA) scan with diuretic renography was performed to detect ureteropelvic joint obstruction (UPJO). Twenty two cases (32.8%) had mild, 20 (29.9%) had moderate, and 25 (37.3%) had severe hydronephrosis.
The causes of hydronephrosis were VUR (40.2%), UPJO (32.8%), posterior urethral valves (PUVs) (13.4%), and transient hydronephrosis (13.4%). The lesion was obstructive in 37 (55.2%) infants. In total, 33 (49.2%) patients with hydronephrosis (9 mild, 9 moderate, and 15 severe) subsequently developed complications such as UTI and renal insufficiency, or required surgery. Associated abnormalities were observed in 15 (22.4%) patients. We conclude that every newborn with any degree of hydronephrosis should be assessed postnatally for specific diagnosis and treatment. abstract_id: PUBMED:17010011 Posterior urethral valve: Outcome of antenatal intervention. Introduction And Aim: Antenatal treatment of obstructive uropathy, although widely performed, remains controversial. This study evaluated the long-term outcome of managing patients with posterior urethral valves (PUV), highlighting the effect of antenatal vesicoamniotic shunt placement for patients who underwent fetal surgery. Methods: The medical records of 58 patients with PUV were retrospectively reviewed from June 1998 to June 2004. On the basis of prenatal assessment of sonographic findings and serial urinary electrolytes and protein measurements, patients were divided into two groups: group 1 comprised patients who had antenatal vesicoamniotic shunt placement whereas group 2 comprised patients who underwent postnatal surgical correction of PUV. Their outcomes and long-term results were evaluated. Results: Patients were followed up from 6 months to 6.5 years (mean 3.9 years). Group 1 included 12 patients who had vesicoamniotic shunt placement and were confirmed postnatally to have PUV. Four patients out of 12 died (33.3%); three out of the eight living patients had perinatal complications. Of the eight living patients, three (37.5%) underwent valve ablation and five (62.5%) underwent urinary diversion (three vesicostomies and two cutaneous ureterostomies). Renal function returned to normal in only four patients (50%). Radiological abnormalities (hydronephrosis and/or reflux) resolved in three (37.5%) patients, were downgraded in one (12.5%) patient and persisted in four patients (50%). Group 2 included 46 patients who were treated postnatally. Thirty-five patients (76%) underwent primary valve ablation, while 11 (24%) underwent urinary diversion (seven vesicostomies, four cutaneous ureterostomies and one pyelostomy). Renal function returned to normal in all patients who underwent valve ablation, except in three, while renal function returned to normal in only three of 11 patients who underwent urinary diversion. Radiological hydronephrosis and/or reflux resolved in 28 patients (60.9%), was downgraded in six patients (13%) and persisted in 12 patients (26.1%). Conclusions: Antenatal vesicoamniotic shunt placement makes no difference to the outcome and long-term results of patients with PUV and debate about its efficacy on renal outcome remains. Primary valve ablation is the keystone of treatment for patients with PUV that might achieve the primary goal of nephron preservation. The lowest creatinine concentration in the first year of life is the most appropriate predictor of future renal function. abstract_id: PUBMED:18006017 Long-term outcome of prenatally detected posterior urethral valves: single center study of 65 cases managed by primary valve ablation. Purpose: Management of posterior urethral valves is significantly modified by the prenatal diagnosis.
Our aim was to assess long-term outcome of children with prenatally detected posterior urethral valves treated at our institution by primary valve ablation without routine urinary drainage or diversion. Materials And Methods: A total of 79 cases of posterior urethral valves were detected prenatally at our hospital between 1987 and 2004. Of these cases 65 were managed postnatally, while pregnancy was terminated in 14. We studied the prenatal parameters of gestational age at diagnosis, renal parenchyma on ultrasound and amniotic fluid volume. Fetal urine was analyzed when indicated. Long-term outcome was assessed. Results: Primary valve ablation was done in all cases except 2. Median followup was 6.8 years (range 1 to 14.3). At the end of followup there were 11 cases of renal failure (17%) with 5 detected before 24 weeks of gestation, 6 cases of oligohydramnios and 9 cases of abnormal parenchyma. Gestational age at diagnosis and oligohydramnios were statistically significant predictors of final renal outcome (p = 0.003 and p = 0.02, respectively), while renal parenchymal changes were not (p = 0.23). When fetal urinalysis detected good prognosis (12 cases) renal failure developed in none, compared to 2 of the 3 cases with a bad prognosis. Continence was achieved in 42 of 55 toilet trained children (76%), 3 had nocturnal enuresis and 10 (18%) were incontinent. Conclusions: Our long-term results of prenatally detected posterior urethral valves confirm that early valve ablation can be considered as the primary treatment in the majority of patients, without the need for preoperative drainage or diversion. Gestational age at diagnosis and volume of amniotic fluid are significant predictors of postnatal renal outcome. abstract_id: PUBMED:27864598 Impact of fetal counseling on outcome of antenatal congenital surgical anomalies. Aim: To analyze the impact of counseling on antenatal congenital surgical anomalies (ACSA). Methods: Cases presenting with ACSA for fetal counseling and those presenting in post-natal period following diagnosis of ACSA (PACSA) for surgical opinion were analyzed for spectrum, presentation and outcome. Results: 117 cases including ACSA(68);PACSA(49) were analyzed. Gestational age at diagnosis of ACSA;PACSA was 17-37;17-39 weeks (median 24;32 weeks). Diagnoses in ACSA;PACSA included urological (26;31), neurological (10;5), congenital diaphragmatic hernia (CDH)(5;1), gastrointestinal (5;5), lung and chest anomalies (5;1), intraabdominal cysts (4;1), abdominal wall defects (4;0), tumors (3;3), limb anomaly (1;1), esophageal atresia (1;1), conjoint twins (1;0), hepatomegaly (1;0), and major cardiac anomalies (2;0). Two antenatal interventions were done for ACSA; vesicoamniotic shunt and amnioinfusion for oligohydramnios. 17;24 ACSA;PACSA required early surgical intervention in post-natal period. Nine ACSA underwent medical termination of pregnancy and 4 had intrauterine demise. Nine ACSA babies died including two CDH, one gastroschisis, one duodenal atresia, one conjoint twins, one megacystitis with motility disorder and three posterior urethral valves. All PACSA babies survived. Conclusion: Fetal counseling for CSA portrays true outcome of ACSA with 32.3% (22/68) mortality versus 0% for PACSA due to selection bias. However, fetal counseling ensures optimal perinatal care. abstract_id: PUBMED:33153550 ACR Appropriateness Criteria® Antenatal Hydronephrosis-Infant. Antenatal hydronephrosis is the most frequent urinary tract anomaly detected on prenatal ultrasonography. 
It occurs approximately twice as often in males as in females. Most antenatal hydronephrosis is transient with little long-term significance, and few children with antenatal hydronephrosis will have significant obstruction, develop symptoms or complications, and require surgery. Some children will be diagnosed with more serious conditions, such as posterior urethral valves. Early detection of obstructive uropathy is necessary to mitigate the potential morbidity from loss of renal function. Imaging is an integral part of screening, diagnosis, and monitoring of children with antenatal hydronephrosis. Optimal timing and appropriate use of imaging can reduce the incidence of late diagnoses and prevent renal scarring and other complications. In general, follow-up neonatal ultrasound is recommended for all cases of antenatal hydronephrosis, while further imaging, including voiding cystourethrography and nuclear scintigraphy, is recommended for moderate or severe cases, or when renal parenchymal or bladder wall abnormalities are suspected. The American College of Radiology Appropriateness Criteria are evidence-based guidelines for specific clinical conditions that are reviewed annually by a multidisciplinary expert panel. The guideline development and revision include an extensive analysis of current medical literature from peer reviewed journals and the application of well-established methodologies (RAND/UCLA Appropriateness Method and Grading of Recommendations Assessment, Development, and Evaluation or GRADE) to rate the appropriateness of imaging and treatment procedures for specific clinical scenarios. In those instances where evidence is lacking or equivocal, expert opinion may supplement the available evidence to recommend imaging or treatment. abstract_id: PUBMED:25829668 Study of prognostic significance of antenatal ultrasonography and renin angiotensin system activation in predicting disease severity in posterior urethral valves. Aims: Study on prognostic significance of antenatal ultrasonography and renin angiotensin system activation in predicting disease severity in posterior urethral valves. Materials And Methods: Antenatally diagnosed hydronephrosis patients were included. Postnatally, they were divided into two groups, posterior urethral valve (PUV) and non-PUV. The studied parameters were: Gestational age at detection, surgical intervention, ultrasound findings, cord blood and follow up plasma renin activity (PRA) values, vesico-ureteric reflux (VUR), renal scars, and glomerular filtration rate (GFR). Results: A total of 25 patients were included, 10 PUV and 15 non-PUV. All infants with PUV underwent primary valve incision. GFR was less than 60 ml/min/1.73 m(2) body surface area in 4 patients at last follow-up. Keyhole sign, oligoamnios, absent bladder cycling, and cortical cysts were not consistent findings on antenatal ultrasound in PUV. Cord blood PRA was significantly higher (P < 0.0001) in PUV compared to non-PUV patients. Gestational age at detection of hydronephrosis, cortical cysts, bladder wall thickness, and amniotic fluid index were not significantly correlated with GFR while PRA could differentiate between poor and better prognosis cases with PUV. Conclusions: Ultrasound was neither uniformly useful in diagnosing PUV antenatally, nor differentiating it from cases with non-PUV hydronephrosis. In congenital hydronephrosis, cord blood PRA was significantly higher in cases with PUV compared to non-PUV cases and fell significantly after valve ablation.
Cord blood PRA could distinguish between poor and better prognosis cases with PUV. abstract_id: PUBMED:10437866 Long-term outcome in children after antenatal intervention for obstructive uropathies. Background: Antenatal intervention has been done for fetal obstructive uropathy for over a decade, yet little is known about long-term outcomes. To assess the long-term implications of fetal intervention, we reviewed the outcomes of children who underwent vesicoamniotic shunt placement. Methods: We reviewed the clinical outcomes of 14 children who underwent vesicoamniotic shunt placement at our institution and who survived beyond 2 years of age. Findings: In 1987-96, 34 patients underwent vesicoamniotic shunt placement. 13 died and 21 survived, of whom 17 are now more than 2 years old. Three survivors were lost to follow-up. Mean age at follow-up was 54.3 months (range 25-114). Final diagnoses included prune belly syndrome (seven cases), posterior urethral valves (four), urethral atresia (one), vesicoureteral reflux (one), and megacystis (one). Height was below the 25th percentile in 12 (86%) with seven (50%) below the 5th percentile. Five (36%) had renal failure and had successful transplantation, three (21%) have renal insufficiency, and six (43%) have normal renal function. Seven (50%) are acceptably continent, five (36%) have not yet begun toilet-training, and two (14%) are incontinent. Three of four children with valves needed bladder augmentation. Interpretation: Antenatal intervention may help those fetuses with the most severe forms of obstructive uropathy, usually associated with a fatal neonatal course. Intervention achieves outcomes similar to less severe cases that are usually diagnosed postnatally. abstract_id: PUBMED:1613851 Prognosis for patients with prenatally diagnosed posterior urethral valves. Children in whom posterior urethral valves are diagnosed shortly after birth are at higher risk for renal failure than children in whom posterior urethral valves are diagnosed later in life. The influence of prenatal diagnosis of posterior urethral valves on clinical outcome has not been established. We collected data on children with posterior urethral valves treated since birth at our hospital between 1975 and 1990. The clinical outcomes for 8 patients diagnosed prenatally and 15 diagnosed neonatally were compared. Of the 8 patients in the prenatal group 5 (64%) had renal failure compared to 5 of 15 (33%) in the neonatal group (p greater than 0.05). Nadir creatinine of more than 1.2 mg./dl. correlated with the development of renal failure in all patients in the neonatal and prenatal groups. There was 1 death in the prenatal group. In our experience prenatal diagnosis of posterior urethral valves has grave implications, including a 64% incidence of progressive renal failure and a 64% incidence of transient pulmonary failure. Oligohydramnios and postnatal pulmonary insufficiency are predictive of progressive renal failure. Earlier diagnosis and treatment of children with posterior urethral valves did not improve the clinical prognosis. abstract_id: PUBMED:26805407 Outcome after prenatal diagnosis of congenital anomalies of the kidney and urinary tract. Unlabelled: Congenital anomalies of the kidney and urinary tract are common findings on fetal ultrasound. The aim of this prospective observational study was to describe outcome and risk factors in 115 patients born 1995-2001. 
All prenatally diagnosed children were stratified into low- and high-risk group and followed postnatally clinically and by imaging at defined endpoints. Risk factors were evaluated using odds ratios. Neonatal diagnosis included pelvi-ureteric junction obstruction (n = 33), vesicoureteral reflux (n = 27), solitary mild pelvic dilatation (postnatal anteroposterior diameter 5-10 mm; n = 25), and further diagnosis as primary obstructive megaureter, unilateral multicystic dysplastic kidney, renal dysplasia and posterior urethral valves. In 38 children with prenatal isolated hydronephrosis, ultrasound normalized at median age of 1.2 years (range 0.1-9). Surgery was performed in 34 children at median age of 0.4 years (0.1-10.8). Persistent renal anomalies without surgery were present in 43 children and followed in 36 for median time of 16 years (12.2-18). Oligohydramnios and postnatal bilateral anomalies were significantly associated with surgery and impaired renal function. Conclusion: The majority of children had a favourable postnatal outcome, in particular children with prenatally low risk, i.e. isolated uni- or bilateral hydronephrosis. Oligohydramnios and postnatal bilateral anomalies were risk factors for non-favourable outcome. What Is Known: • In congenital anomalies of the kidney and urinary tract significantly poorer outcome is known in patients with bilateral renal hypoplasia or solitary kidney associated with posterior urethral valves. • Other factors as proteinuria and vesicoureteral reflux were associated with a higher risk of progression to chronic renal failure in these patients. What is New: • Unlike other studies giving us above-mentioned information, we included all patients with any kind of prenatally diagnosed congenital anomalies of the kidney and urinary tract. Our study shows long-term follow up (median 16 years, range 12.2-18 years), especially in patients not needing surgery, but with persistent anomalies. • During postnatal long-term follow up (median 2.2 years, range 0.1-18 years) one third each showed normalization, need of surgery or persistence of anomalies without need of surgery. Our study revealed a good prognosis in the majority of these children, in particular with prenatally low risk, i.e. isolated uni- or bilateral hydronephrosis, and revealed oligohydramnios and postnatal bilateral anomalies as risk factors for a non-favourable outcome, defined as need of surgery, persistent anomalies with impaired renal function, end stage renal failure or death. Answer: Yes, there is an outcome difference between posterior urethral valves (PUV) diagnosed prenatally and postnatally at the time of antenatal screening. Studies have shown that PUV diagnosed during childhood have traditionally been associated with a better outcome than those diagnosed antenatally. For instance, one study found that at the end of a mean follow-up of 7.2 years, 26.3% of patients diagnosed prenatally developed impaired renal function (IRF) versus 6.3% of those with a delayed diagnosis (PUBMED:23352308). Additionally, patients with delayed diagnosis developed fewer complications related to the initial obstruction than those detected antenatally (PUBMED:23352308). Another study highlighted that antenatal vesicoamniotic shunt placement did not make a significant difference to the outcome and long-term results of patients with PUV, and primary valve ablation was considered the keystone of treatment for preserving nephrons (PUBMED:17010011). 
Furthermore, a single-center study of 65 cases managed by primary valve ablation showed that early valve ablation could be considered as the primary treatment in the majority of patients with prenatally detected PUV, without the need for preoperative drainage or diversion (PUBMED:18006017). However, it is important to note that the prognosis for patients with prenatally diagnosed PUV can be grave, with a high incidence of progressive renal failure and transient pulmonary failure reported in some cases (PUBMED:1613851). Additionally, oligohydramnios and postnatal pulmonary insufficiency have been predictive of progressive renal failure (PUBMED:1613851). Overall, the literature suggests that while prenatal diagnosis of PUV allows for early intervention, it does not necessarily improve the clinical prognosis compared to postnatal diagnosis, and close monitoring and appropriate treatment are crucial regardless of the timing of diagnosis (PUBMED:23352308; PUBMED:17010011; PUBMED:18006017; PUBMED:1613851).
Instruction: Are the changes in postural control associated with low back pain caused by pain interference? Abstracts: abstract_id: PUBMED:15951650 Are the changes in postural control associated with low back pain caused by pain interference? Background: Voluntary limb movements are associated with involuntary and automatic postural adjustments of the trunk muscles. These postural adjustments occur prior to movement and prevent unwanted perturbation of the trunk. In low back pain, postural adjustments of the trunk muscles are altered such that the deep trunk muscles are consistently delayed and the superficial trunk muscles are sometimes augmented. This alteration of postural adjustments may reflect disruption of normal postural control imparted by reduced central nervous system resources available during pain, so-called "pain interference," or reflect adoption of an alternate postural adjustment strategy. Methods: We aimed to clarify this by recording electromyographic activity of the upper (obliquus externus) and lower (transversus abdominis/obliquus internus) abdominal muscles during voluntary arm movements that were coupled with painful cutaneous stimulation at the low back. If the effect of pain on postural adjustments is caused by pain interference, it should be greatest at the onset of the stimulus, should habituate with repeated exposure, and be absent immediately when the threat of pain is removed. Sixteen patients performed 30 forward movements of the right arm in response to a visual cue (control). Seventy trials were then conducted in which arm movement was coupled with pain ("pain trials") and then a further 70 trials were conducted without the pain stimulus ("no pain trials"). Results: There was a gradual and increasing delay of transversus abdominis/obliquus internus electromyograph and augmentation of obliquus externus during the pain trials, both of which gradually returned to control values during the no pain trials. Conclusion: The results suggest that altered postural adjustments of the trunk muscles during pain are not caused by pain interference but are likely to reflect development and adoption of an alternate postural adjustment strategy, which may serve to limit the amplitude and velocity of trunk excursion caused by arm movement. abstract_id: PUBMED:35744075 A Comprehensive Review of Pain Interference on Postural Control: From Experimental to Chronic Pain. Motor control, movement impairment, and postural control recovery targeted in rehabilitation could be affected by pain. The main objective of this comprehensive review is to provide a synthesis of the effect of experimental and chronic pain on postural control throughout the available literature. After presenting the neurophysiological pathways of pain, we demonstrated that pain, preferentially localized in the lower back or in the leg induced postural control alteration. Although proprioceptive and cortical excitability seem modified with pain, spinal modulation assessment might provide a new understanding of the pain phenomenon related to postural control. The literature highlights that the motor control of trunk muscles in patient presenting with lower back pain could be dichotomized in two populations, where the first over-activates the trunk muscles, and the second under-activates the trunk muscles; both generate an increase in tissue loading. Taking all these findings into account will help clinician to provide adapted treatment for managing both pain and postural control. 
abstract_id: PUBMED:16311036 Changes in coordination of postural control during dynamic stance in chronic low back pain patients. The human postural system operates on the basis of integrated information from three independent sources: vestibular, visual and somatosensory. It is conceivable that a derangement of any of these systems will influence the overall output of the postural system. The peripheral proprioceptive system or the central processing of proprioceptive information may be altered in chronic low back pain (CLBP). We therefore investigated whether patients with CLBP exhibited an altered postural control during quiet standing. Dynamic posturography was performed by 12 CLBP patients and 12 age-matched controls. Subject's task was to stand quietly on a computer-controlled movable platform under six sensory conditions that altered the available visual and proprioceptive information. While the control of balance was comparable between the two groups across stabilized support surface conditions (1-3), CLBP patients oscillated much more than controls in the anterior-posterior (AP) direction in platform sway-referenced conditions (4-6). Control experiments ruled out that increased sway was due to pain interference. In CLBP patients, postural stability under challenging conditions is maintained by an increased sway in AP direction. This change in postural strategy may underlie a dysfunction of the peripheral proprioceptive system or the central integration of proprioceptive information. abstract_id: PUBMED:23526750 Effects of acute low back pain on postural control. Objective: To evaluate the changes in static and dynamic postural control after the development of acute low back pain. Methods: Thirty healthy right-handed volunteers were divided into three groups: the right back pain group, the left back pain group, and the control group. 0.5 mL of 5% hypertonic saline was injected into L4-5 paraspinal muscle for 5 seconds to cause muscle pain. The movement of the center of gravity (COG) during their static and dynamic postural control was measured with their eyes open and with their eyes closed before and 2 minutes after the injection. Results: The COGs for the healthy adults shifted to the right quadrant and the posterior quadrant during their static and dynamic postural control test (p<0.05). The static and dynamic instability index while they had their eyes closed was significantly greater than when they had their eyes open, with and without acute back pain. After pain induction, their overall and anterior/posterior instability was increased in both the right back pain group and the left back pain group during the static postural control test (p<0.05). A right deviation and a posterior deviation of the COG still remained, and the posterior deviation was greater in the right back pain group (p<0.05). Conclusion: The static instability, particularly the anterior/posterior instability, was increased in the presence of acute low back pain, regardless of the visual information and the location of pain. abstract_id: PUBMED:23391751 Persons with lower-limb amputation have impaired trunk postural control while maintaining seated balance. Abnormal mechanics of movement resulting from lower-limb amputation (LLA) may increase stability demands on the spinal column and/or alter existing postural control mechanisms and neuromuscular responses.
A seated balance task was used to investigate the effects of LLA on trunk postural control and stability, among eight males with unilateral LLA (4 transtibial, 4 transfemoral), and eight healthy, non-amputation controls (matched by age, stature, and body mass). Traditional measures derived from center of pressure (COP) time series, and measures obtained from non-linear stabilogram diffusion analyses, were used to characterize trunk postural control. All traditional measures of postural control (95% ellipse area, RMS distance, and mean velocity) were significantly larger among participants with LLA. Non-linear stabilogram diffusion analyses also revealed significant differences in postural control among persons with LLA, but only in the antero-posterior direction. Normalized trunk muscle activity was also larger among participants with LLA. Larger COP-based sway measures among participants with LLA during seated balance suggest an association between LLA and reduced trunk postural control. Reductions in postural control and spinal stability may be a result of adaptations in functional tissue properties and/or neuromuscular responses, and may potentially be caused by repetitive exposure to abnormal gait and movement. Such alterations could then lead to an increased risk for spinal instability, intervertebral motions beyond physiological limits, and pain. abstract_id: PUBMED:38476964 Kinematic changes of the trunk and lower limbs during voluntary lateral sway postural control in adults with low back pain. Introduction: Voluntary lateral weight shifting is essential for gait initiation. However, kinematic changes during voluntary lateral weight shifting remain unknown in people with low back pain (LBP). This study aims to explore the differences in kinematics and muscle activation when performing a voluntary lateral weight shifting task between patients with LBP and asymptomatic controls without pain. Methods: Twenty-eight participants volunteered in this study (14 in both the LBP group and the control group). The Sway Discrimination Apparatus (SwayDA) was used to generate a postural sway control task, mimicking lateral weight shifting movements when initiating gait. Kinematic parameters, including range of motion (ROM) and standard deviation of ROM (Std-ROM) of the lumbar spine, pelvis, and lower limb joints, were recorded using a motion capture system during lateral weight shifting. The electroactivity of the trunk and lower limb muscles was measured through surface electromyography using root mean square (RMS). The significant level was 0.05. An independent t-test was employed to compare kinematic parameters, and muscle activation between the LBP group and the control group. A paired-sample t-test, adjusted with Bonferroni correction (significant level of 0.025), was utilized to examine differences between the ipsilateral weight shifting towards side (dominant side) and the contralateral side. Results: The results of kinematic parameters showed significantly decreased ROM and std-ROM of the ipsilateral hip in the transverse plane (tROM = -2.059, p = 0.050; tstd-ROM = -2.670, p = 0.013), as well as decreased ROM of the ipsilateral knee in the coronal plane (t = -2.148, p = 0.042), in the LBP group compared to the control group. 
For the asymptomatic controls, significantly larger ROM and ROM-std were observed in the hip and knee joints on the ipsilateral side in contrast to the contralateral side (3.287 ≤ t ≤ 4.500, 0.001 ≤ p ≤ 0.006), but no significant differences were found between the two sides in the LBP group. In addition, the LBP group showed significantly lower RMS of the biceps femoris than the control group (tRMS = -2.186, p = 0.044). Discussion: Patients with LBP showed a conservative postural control pattern, characterized by reduced ROM of ipsilateral joints and diminished activation of the biceps femoris. These findings suggested the importance of voluntary postural control assessment and intervention to maximize recovery. abstract_id: PUBMED:37925241 The association between pain-related psychological variables and postural control in low back pain: A systematic review and meta-analysis. Background: Alterations in postural control have been found in individuals with low back pain (LBP), particularly during challenging postural tasks. Moreover, higher levels of negative pain-related psychological variables are associated with increased trunk muscle activity, reduced spinal movement, and worse maximal physical performance in individuals with LBP. Research Question: Are pain-related psychological variables associated with postural control during static bipedal standing tasks in individuals with LBP? Methods: A systematic review and meta-analysis were conducted. Pubmed, Web of Science, and PsycINFO were searched until March 2023. Studies were included if they evaluated postural control during static bipedal standing in individuals with LBP by measuring center of pressure (CoP) variables, and reported at least one pain-related psychological variable. Correlation coefficients between pain-related psychological variables and CoP variables were extracted. Study quality was assessed with the "Quality In Prognosis Studies" tool (QUIPS). Random-effect models were used to calculate pooled correlation coefficients for different postural tasks. Sub-analyses were performed for positional or dynamic CoP variables. Certainty of evidence was assessed with an adjusted "Grading of Recommendations, Assessment, Development, and Evaluations" tool (GRADE). The protocol was registered on PROSPERO (CRD42021241739). Results: Sixteen studies (n = 723 participants) were included. Pain-related fear (16 studies) and pain catastrophizing (three studies) were the only reported pain-related psychological variables. Both pain-related fear (-0.04 < pooled r < 0.14) and pain catastrophizing (0.28 < pooled r < 0.29) were weakly associated with CoP variables during different postural tasks. For all associations, the certainty of evidence was very low. Significance: Pain-related fear and pain catastrophizing are only weakly associated with postural control during static bipedal standing in individuals with LBP, regardless of postural task difficulty. Certainty of evidence is very low; thus, it is conceivable that future studies accounting for current study limitations might reveal different findings. abstract_id: PUBMED:22436337 Pain relief is associated with decreasing postural sway in patients with non-specific low back pain. Background: Increased postural sway is well documented in patients suffering from non-specific low back pain, whereby a linear relationship between higher pain intensities and increasing postural sway has been described.
No investigation has been conducted to evaluate whether this relationship is maintained if pain levels change in adults with non-specific low back pain. Methods: Thirty-eight patients with non-specific low back pain and a matching number of healthy controls were enrolled. Postural sway was measured by three identical static bipedal standing tasks of 90 sec duration with eyes closed in narrow stance on a firm surface. The perceived pain intensity was assessed by a numeric rating scale (NRS-11). The patients received three manual interventions (e.g. manipulation, mobilization or soft tissue techniques) at 3-4 day intervals; postural sway measures were obtained on each occasion. Results: A clinically relevant decrease of four NRS scores in association with manual interventions correlated with a significant decrease in postural sway. In contrast, if no clinically relevant change in intensity occurred (≤ 1 level), postural sway remained similar compared to baseline. The postural sway measures obtained at follow-up sessions 2 and 3 associated with specific NRS level showed no significant differences compared to reference values for the same pain score. Conclusions: Alterations in self-reported pain intensities are closely related to changes in postural sway. The previously reported linear relationship between the two variables is maintained as pain levels change. Pain interference appears responsible for the altered sway in pain sufferers. This underlines the clinical use of sway measures as an objective monitoring tool during treatment or rehabilitation. abstract_id: PUBMED:29909228 Gender differences in postural control in people with nonspecific chronic low back pain. Background: Many studies have reported that there are several differences between genders which may result in altered neuromuscular control. Although the existing evidence suggests that low back pain (LBP) affects the ability to control posture, there is little evidence on gender differences in postural control in people with nonspecific chronic LBP. Research Question: Are there any gender differences in postural control and correlations between postural control, pain, disability, and fear of movement in people with nonspecific chronic LBP? Methods: Static and dynamic postural control were evaluated using a computerized postural control assessment tool including assessments for limits of stability (LOS), unilateral stance, and modified clinical test of sensory interaction on balance. Pain intensity and fear of movement were assessed using a visual analogue scale and the Tampa Scale of Kinesiophobia, respectively. Results: This cross-sectional study included 51 people (25 females and 26 males) with nonspecific chronic LBP. Mean reaction time in the LOS test was significantly less in male participants compared with females when adjusted for pain intensity and disability level, F(1,45) = 4.596, p = .037, ηp² = 0.093. There was no significant difference in the remaining LOS variables as well as unilateral stance, and modified clinical test of sensory interaction on balance variables between the genders (p > .05). Many correlations were observed between the LOS variables, pain intensity, and Tampa Scale of Kinesiophobia score in female participants (p < .05). The Tampa Scale of Kinesiophobia score was also correlated with the movement velocity and endpoint excursion in the LOS test in the male participants (p < .05).
Significance: This study suggests that there is no difference in most of the static and dynamic postural control variables between females and males; however, higher fear of movement, and pain intensity during activity are more associated with impaired dynamic balance in females with nonspecific chronic LBP. abstract_id: PUBMED:19478663 Effect of dual-tasking on postural control in subjects with nonspecific low back pain. Study Design: Three factors mixed-design with 1 between-subject and 2 within-subject factors. Objective: To compare the main effects and interactions of postural and cognitive difficulty on quiet stance between subjects with and without nonspecific low back pain (LBP). Summary Of Background Data: The interference between postural control and cognitive tasks depends on factors such as sensorimotor/cognitive integrity. Changes in peripheral sensory and muscular systems as well as cognitive processes have been observed in LBP patients. It was hypothesized that the effect of cognitive task on postural performance might be different in subjects with nonspecific LBP as compared with healthy individuals. To the authors' knowledge this has not been investigated before. Methods: Postural stability was measured by center of pressure parameters while nonspecific LBP (n = 22) and healthy (n = 22) subjects randomly performed quiet standing task with 3 levels of difficulty (rigid-surface eyes open, rigid-surface eyes-closed, and foam-surface eyes-closed) in isolation or concurrently with an easy or difficult digits backward cognitive task. Results: Subjects with nonspecific LBP had less postural sway than healthy subjects, while postural sway decreased with increase in the level of cognitive difficulty. Nonspecific LBP and healthy subjects had larger postural sway at more difficult sensory conditions such as rigid-surface eyes-closed and foam-surface eyes-closed. The response to dual-tasking was not significantly different between the 2 groups. Conclusion: The dual-tasking did not change the postural performance of nonspecific LBP subjects with low level of pain and disability differently compared to healthy subjects. Answer: The changes in postural control associated with low back pain are not solely caused by pain interference. The study by Hodges et al. (PUBMED:15951650) suggests that altered postural adjustments of the trunk muscles during pain are likely to reflect the development and adoption of an alternate postural adjustment strategy, rather than being caused by pain interference. This strategy may serve to limit the amplitude and velocity of trunk excursion caused by arm movement. The gradual and increasing delay of deep trunk muscles and augmentation of superficial trunk muscles during pain trials, which returned to control values during no pain trials, supports this conclusion. Additionally, a comprehensive review (PUBMED:35744075) indicates that pain, particularly localized in the lower back or leg, induces postural control alteration. However, it also highlights that motor control of trunk muscles in patients with lower back pain could be dichotomized into two populations: those who over-activate and those who under-activate the trunk muscles, both leading to increased tissue loading. This suggests that pain may influence postural control but does not necessarily imply that pain interference is the primary cause. 
Furthermore, research on chronic low back pain patients (PUBMED:16311036) showed that increased sway in the anterior-posterior direction under challenging conditions may reflect a dysfunction of the peripheral proprioceptive system or of the central integration of proprioceptive information, rather than pain interference. In acute low back pain (PUBMED:23526750), static instability, particularly in the anterior/posterior direction, was increased. This indicates that pain can affect postural control but does not clarify whether this is due to pain interference or other mechanisms. For persons with lower-limb amputation (PUBMED:23391751), larger sway measures during seated balance suggest an association between limb loss and reduced trunk postural control, which may be due to adaptations in functional tissue properties and/or neuromuscular responses, potentially caused by abnormal gait and movement. Adults with low back pain (PUBMED:38476964) showed a conservative postural control pattern with reduced range of motion of ipsilateral joints and diminished activation of the biceps femoris, suggesting that pain may lead to a conservative strategy in postural control.
Instruction: Should physicians screen for depression in elderly medical inpatients? Abstracts: abstract_id: PUBMED:8270355 Should physicians screen for depression in elderly medical inpatients?: Results of a decision analysis. Objective: We wish to determine whether or not elderly medical inpatients should be screened for depressive disorder using either 1) a self-rated depression scale (Geriatric Depression Scale), 2) "usual clinical assessment," or 3) neither, assuming that treatment with tricyclic antidepressants (TCAs) is the primary mode of intervention. Method: Based on recent data from epidemiological studies on the prevalence and course of depression, the test characteristics of available screening tests, and the efficacy and side-effects of traditional antidepressants, decision analysis is used to help decide whether or not clinicians should screen for depression in this setting. Results: These calculations indicate that if screening is done solely to identify depressed patients for treatment with TCAs, then the highest utility lies in not screening; however, the difference in utilities between that decision and the decisions to either screen with GDS or screen by usual clinical assessment was only .04 units on a 0 to 100 scale, making the decision virtually a toss-up. Furthermore, even a small variation in one of several clinical factors or test characteristics could give screening a higher utility. In particular, if psychotherapy is considered as the primary intervention, then the utility of screening exceeds that of not screening. Conclusion: Characteristics of the screening test, clinical setting, types and safety of available treatments, each impact on the usefulness of screening and must be kept in mind when diagnosing and treating depressed medically ill elders hospitalized in acute care settings. abstract_id: PUBMED:7759664 The prevalence of depression in elderly medical inpatients. To estimate the point-prevalence of major depression in elderly medical inpatients according to a computerized diagnostic system, a two-phase design was carried out. A consecutive series of 198 elderly medical inpatients completed two self-rating scales for depression (Beck Depression Inventory, Geriatric Depression Scale) and the Mini-Mental State Examination. According to these screening instruments, 69 'probable cases' were identified and were referred for psychiatric evaluation using the Geriatric Mental State Schedule. Only 10 patients were identified as diagnostic cases of depression according to the GMS-AGECAT package. The estimated prevalence rate for depression according to AGECAT in this population was 5.9% (95% confidence limits 2.3-9.3%). This is lower than has been found in previous studies in elderly medical inpatients. Possible reasons for this finding are discussed. abstract_id: PUBMED:9347776 Depression in elderly medical inpatients: a meta-analysis of outcomes. Objective: To determine the prognosis of elderly medical inpatients with depression. Data Sources: A MEDLINE search for relevant articles published from January 1980 to September 1996 and a search of the PSYCH INFO database for articles published from January 1984 to September 1996. The bibliographies of identified articles were searched for additional references. 
Study Selection: Eight reports (involving 265 patients with depression) met the following 5 inclusion criteria: original research, published in English or French, population of general medical inpatients, mean age of depressed patients 60 years and over, and affective state reported as an outcome. The validity of the studies was assessed according to the criteria for prognostic studies described by the Evidence-Based Medicine Working Group. Data Extraction: Information about the patient population, the proportion of cases detected and treated by attending physicians, the length of follow-up, the affective outcome and the prognostic factors was abstracted from each report. Data Synthesis: All of the studies had some methodologic limitations. A meta-analysis of outcomes at 3 months or less indicated that 18% of patients were well, 43% were depressed and 22% were dead. At 12 months or more, 19% were well, 29% were depressed and 53% were dead. Factors associated with worse outcomes included more severe depression, more serious physical illness and symptoms of depression before admission. Conclusions: Elderly medical inpatients who are depressed appear to have a very poor prognosis: the recovery rate among these patients is low and the mortality rate high. abstract_id: PUBMED:18378558 RRS-4: short version of the Retardation Rating Scale to screen for depression in elderly inpatients. Objectives: To develop a short version of the Retardation Rating Scale (RRS), an observer scale recently validated in geriatric inpatients. Methods: A neuropsychologist used a structured interview to assess 165 geriatric medical inpatients with the observer-rated Hamilton Depression Rating Scale, Montgomery and Asberg Rating Scale and RRS, and completed the 30-item Geriatric Depression Scale; 107 met Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition clinical criteria for depression according to a gerontopsychiatrist's independent evaluation. A statistical model was applied to ascertain the most relevant RRS items; the psychometric properties of the four retained (RRS-4) were compared with standard scales. Results: RRS-4 psychometric properties were good: internal consistency (Cronbach alpha-coefficient = 0.81), positive concurrent validity with each of the standard depression scales (Spearman's r = 0.68-0.82) and the total RRS score (Pearson's r = 0.93). Considering clinical evaluation the "gold standard" for depression, a threshold of three yielded: 88% positive-predictive value, 68% negative-predictive value, with 88% accuracy for predicting depression exceeding the standard observer depression scales by 23%. Conclusion: RRS-4 is a brief and easy-to-use observer scale that improves depression identification in elderly medical inpatients. abstract_id: PUBMED:17432028 Liaison psychiatry and depression in medical inpatients. Objective: To assess the frequency of depression among hospitalized patients, the socio-demographic variables associated with depression and the number of cases referred by physicians to Psychiatry. Methods: A cross-sectional study was carried out at the Aga Khan University Hospital Karachi. An anonymous Urdu version of the WHO-developed self-reporting questionnaire (SRQ) was administered to inpatients meeting the inclusion criteria. Data was analyzed by SPSS version 13.0. Result: Of the 225 patients approached, 178 completed the questionnaire (men= 45.2%, women = 54.8%). The mean age of the sample was 45.2 years. 
Out of the total, 30.5% of patients were identified as having probable depression, among whom housewives were more likely to be depressed compared to others (p=0.031). In the comparison of variables, those with secondary school education or below and those with psychiatric co-morbidities showed a significantly greater prevalence of depression (p=0.003 and p=0.005, respectively). Attending physicians correctly diagnosed 7 (13%) patients and referred only 3 patients to Psychiatry over the previous month. Conclusion: The prevalence of depression among inpatients is comparable to that in the general population. Being a housewife, level at or below secondary school education and having a past psychiatric history are significant factors associated with depression in medical inpatients. A very small number of depressed cases were referred to a psychiatrist. abstract_id: PUBMED:36350482 Loneliness in Elderly Inpatients. Purpose: Loneliness among the elderly is a widespread phenomenon and is connected to various negative health outcomes. Nevertheless, loneliness among elderly inpatients, especially those with a psychiatric diagnosis, has hardly been examined. Our study assessed loneliness in elderly inpatients, identified predictors, and compared levels of loneliness between inpatients on psychiatric and somatic wards. Methods: N = 100 elderly inpatients of a somatic and psychiatric ward were included. Levels of loneliness were assessed, as were potential predictors such as depression, psychological resilience, severity of mental illness, well-being, daily functioning, and psychiatric diagnosis. Analyses of group differences and hierarchical multiple regression analysis were conducted. Results: 37% of all inpatients reported elevated levels of loneliness. Significant predictor variables were self-reported depressive symptoms, well-being, severity of mental illness, being single and living with a caregiver. Hierarchical multiple regression analysis revealed that the full model explained 58% of variance in loneliness. Psychiatric inpatients' loneliness was significantly higher than loneliness in somatic inpatients. When analyzing group differences between inpatients with different main psychiatric diagnoses, the highest levels were found in patients with an affective disorder, followed by those treated for organic mental disorder. Since the study took place during the COVID-19 pandemic, the potential influence of different measurement points (lockdown vs. no lockdown) was analyzed: differences in loneliness depending on the phase of the pandemic were non-significant. Conclusion: Elderly inpatients experience high levels of loneliness, especially those with a mental disorder. Interventions to reduce loneliness in this population should address predictors of loneliness, preferably through multiprofessional interventions. abstract_id: PUBMED:35130234 Knowledge of primary health care physicians regarding depression in elderly. Objective: To assess the knowledge of primary healthcare physicians regarding elderly depression, and to study the association of different variables with the knowledge score. Methods: The cross-sectional study was conducted from March 1 to June 30, 2019, in 30 primary health care centres under the Al-Karkh Health Directorate, Baghdad, Iraq, and comprised all physicians present at the time. Data were collected using a self-administered questionnaire about the knowledge regarding depression in elderly patients aged >65 years along with risk factors, diagnosis, treatment and prevention.
Data were analysed using SPSS 24. Results: Among the 149 participants, 69 (46.3%) were aged >40 years, 116 (77.9%) were females, 97 (65.1%) were family physicians, and 96 (65%) had a good score regarding overall knowledge. Family physicians, specialists and those aged >40 years had a significantly better knowledge level (p<0.05). Conclusions: The physicians had good knowledge about elderly depression. abstract_id: PUBMED:34219655 Limitations of Screening for Depression as a Proxy for Suicide Risk in Adult Medical Inpatients. Background: Medically ill hospitalized patients are at elevated risk for suicide. Hospitals that already screen for depression often use depression screening as a proxy for suicide risk screening. Extant research has indicated that screening for depression may not be sufficient to identify all patients at risk for suicide. Objective: The present study aims to determine the effectiveness of a depression screening tool, the Patient Health Questionnaire-9, in detecting suicide risk among adult medical inpatients. Methods: Participants were recruited from inpatient medical/surgical units in 4 hospitals as part of a larger validation study. Participants completed the Patient Health Questionnaire-9 and 2 suicide risk measures: the Ask Suicide-Screening Questions and the Adult Suicidal Ideation Questionnaire. Results: The sample consisted of 727 adult medical inpatients (53.4% men; 61.8% white; mean age 50.1 ± 16.3 years). A total of 116 participants (116 of 727 [16.0%]) screened positive for suicide risk and 175 (175 of 727 [24.1%]) screened positive for depression. Of the 116 patients who screened positive for suicide risk, 36 (31.0%) screened negative for depression on the Patient Health Questionnaire-9. Of 116, 73 (62.9%) individuals who were at risk for suicide did not endorse item 9 (thoughts of harming oneself or of being better off dead) on the Patient Health Questionnaire-9. Conclusion: Using depression screening tools as a proxy for suicide risk may be insufficient to detect adult medical inpatients at risk for suicide. Asking directly about suicide risk and using validated tools is necessary to effectively and efficiently screen for suicide risk in this population. abstract_id: PUBMED:12242204 Elderly medical inpatients screening positive for depression may show spontaneous improvement after discharge. Objective: To see whether elderly medical in-patients screening positive for depression on the Geriatric Depression Score show any change on discharge home and whether their scores predict this. Design: A prospective study. Setting: A large outer London district general hospital with acute wards for all medical admissions of people aged over 75 years. Participants: We studied 179 consecutive patients admitted to the acute wards with an abbreviated mental test score of ≥ 7 who were resident within the London borough of Waltham Forest. Main Outcome Measure: Geriatric Depression Scores in hospital and at home after discharge. Account was taken of subsequent psychiatric treatment. Results: 55 inpatients screened positive for depression (15-point Geriatric Depression Score of 5 or more). Repeat screening of these patients, after discharge, resulted in 24 scoring < 5 and only 19 scoring ≥ 5. Three patients out of the 55 were admitted into psychiatric care with depression prior to follow-up screening.
The 15 question Geriatric Depression Score was highly sensitive and the 4 question Geriatric Depression Score highly specific in predicting for depression after discharge. Conclusion: Many patients screen positive for depression on acute elderly medical admission wards and nearly half improve spontaneously upon returning home. abstract_id: PUBMED:20440101 Major depression in elderly medical inpatients in Greece, prevalence and identification. Background: The aim of this study was to report the prevalence of episodes of DSM-IV major depression, as well as their identification rates, in elderly inpatients in a general hospital in Greece. Methods: 200 selected patients, 65 years old and over, hospitalized in Surgery and Internal Medicine Departments, were assessed for major depression over a period of 12 months (October 2006-November 2007) by means of SCID-I/P, HADS, BDI and GDS-15. During the same period, liaison calls from the same departments were evaluated and findings were compared. Results: When psychiatric screening was performed, 28 patients (14%) were diagnosed as suffering from a major depressive episode. During the same period, there were only 20 liaison calls from the same departments for patients over 65 years old, from which 4 patients were found to be suffering from major depression. Comparison between the two periods showed significant underestimation of depression. All psychometric scales detected depression sufficiently. Conclusions: In general hospital elderly inpatients, depression still remains underestimated. Depression symptom scales could be used as routine tests for screening major depression. Answer: Based on the provided abstracts, there is evidence to suggest that physicians should screen for depression in elderly medical inpatients. Depression in this population is associated with poor prognosis, including low recovery rates and high mortality (PUBMED:9347776). The prevalence of depression among elderly medical inpatients has been reported, with one study finding a point-prevalence of 5.9% (PUBMED:7759664), while another study found that 30.5% of inpatients were identified as having probable depression (PUBMED:17432028). Despite this, depression often remains underestimated in general hospital settings (PUBMED:20440101). Screening tools such as the Geriatric Depression Scale (GDS) and the Retardation Rating Scale (RRS-4) have been shown to be effective in identifying depression in elderly inpatients (PUBMED:8270355, PUBMED:18378558). The RRS-4, in particular, was found to have good psychometric properties and improved identification of depression compared to standard observer depression scales (PUBMED:18378558). However, the decision to screen can be influenced by various factors, including the characteristics of the screening test, the clinical setting, and the types and safety of available treatments (PUBMED:8270355). For instance, if psychotherapy is considered as the primary intervention, the utility of screening exceeds that of not screening (PUBMED:8270355). Additionally, knowledge of primary healthcare physicians regarding elderly depression plays a role in the effectiveness of screening and subsequent management (PUBMED:35130234). It is also important to note that depression screening tools may not be sufficient proxies for suicide risk, as some patients at risk for suicide may not screen positive for depression (PUBMED:34219655). Therefore, direct questioning about suicide risk using validated tools is necessary. 
In conclusion, the evidence suggests that screening for depression in elderly medical inpatients is beneficial and can lead to better identification and management of depression. However, the approach to screening should be tailored to the specific clinical context and available interventions, and it should be complemented by direct assessment of suicide risk when appropriate.
Instruction: Do cervical cancer screening patient information leaflets meet the HPV information needs of women? Abstracts: abstract_id: PUBMED:18372144 Do cervical cancer screening patient information leaflets meet the HPV information needs of women? Objective: New human papillomavirus (HPV) DNA technologies for the detection and prevention of cervical cancer have led to exciting changes in cervical cancer screening worldwide. Their introduction, however, has left many women with unanswered medical and psychosocial HPV questions. This study considered the degree to which women's own HPV questions were addressed in Australian cervical cancer screening patient information leaflets. Methods: Based on previous qualitative research that asked women to identify their own HPV information needs, categories of interest were identified and a coding framework was developed. Manifest content analysis was conducted by counting the number of times a category of interest was stated in the text of the patient information leaflets (n=75). Latent content analysis methodology was employed to assess the underlying and embedded meaning within the leaflets. Results: Women's medical questions were addressed more frequently than psychosocial ones. Leaflets were designed for specific target audiences (Aboriginal, lesbian, older women, women with disabilities, HPV-specific, cervical cancer-specific and general Pap screening) and the type and amount of HPV information varied by group. Merging the manifest and latent results, we identified three broad themes for discussion: the medicalisation of women's cervical screening experience, the purpose and target audience of cervical screening leaflets and HPV as a community versus women's health issue. Conclusions: Women's questions on HPV were inconsistently and often inadequately answered. Practice Implications: In order that women's information needs are met, more accurate and balanced representations of medical and psychosocial HPV information should be provided in patient information leaflets. abstract_id: PUBMED:17362569 Information and cervical screening: a qualitative study of women's awareness, understanding and information needs about HPV. Objectives: To explore women's attitudes towards the information about human papilloma virus (HPV) provided during cervical screening and to describe women's HPV information needs. Setting: Women with a range of screening results (normal, inadequate, borderline and abnormal) were identified by three screening centres in England. Two consecutive samples of women attending for colposcopy for the first time following screening were also approached. Methods: Seven focus groups were conducted between May 2005 and April 2006 with 38 women who had recently been for cervical screening or had attended a colposcopy appointment. Results: Most women had no prior awareness of HPV. Many women queried the importance of being informed about HPV as no preventive advice or treatment is available. The HPV information included in the UK national screening programme abnormal result leaflet left women with more questions than answers (a list of unanswered questions is included with the results). Further information was requested about HPV detection, infection and transmission as well as the natural history and progression of cervical cancer. No consensus was reached regarding the best time to provide HPV information. 
Conclusions: Clear communication of the complicated issues surrounding HPV infection and the natural history of cervical cancer is a considerable educational challenge for screening providers. As awareness of HPV becomes more widespread and HPV testing is explored as a triage during cervical screening, women are likely to require more information about the virus and the implications of infection. Consideration should be given to the production of a separate national screening programme HPV leaflet. abstract_id: PUBMED:34779742 "I'm neither here, which would be bad, nor there, which would be good": the information needs of HPV+ women. A qualitative study based on in-depth interviews and counselling sessions in Jujuy, Argentina. The objective of this qualitative study was to explore the information needs of HPV+ women. We conducted 38 in-depth interviews with HPV+ women in the province of Jujuy, Argentina. The interviews included a counselling session to respond to women's concerns and questions. Women perceived the information provided as good, despite having several doubts and misconceptions after receiving results of an HPV+ test. They expressed difficulties in formulating questions during the consultation due to shame, excess of information provided or lack of familiarity with technical language. They valued emotional support and being treated kindly by professionals. The perceived information needs that emerged as most important were: (1) the meaning of an HPV+ result and its relationship with cervical cancer evolution and severity; (2) continuity and timing of the care process; (3) information on the sexual transmission of the virus; (4) explanation of the presence or absence of symptoms. Women's primary unperceived information needs were: (1) detailed information about colposcopy, biopsy and treatments and their effects (including fertility consequences); and (2) deconstructing the association of sexual transmission with infidelity. Sources of information included: (1) the health care system; (2) the internet; and (3) social encounters (close friends and relatives). It is crucial to strengthen the processes for delivering results, with more thorough information, improved emotional support and active listening focused on the patient, as well as to conceive new formats to provide information in stages and/or gradually, in order to facilitate women's access to the health care system and the information they need. abstract_id: PUBMED:33172415 Young women's autonomy and information needs in the schools-based HPV vaccination programme: a qualitative study. Background: Until 2019, the English schools-based human papillomavirus (HPV) vaccination programme was offered to young women (but not young men) aged 12 to 13 years to reduce HPV-related morbidity and mortality. The aim of this study is to explore the extent to which young women were able to exercise autonomy within the HPV vaccination programme. We consider the perspectives of young women, parents and professionals and how this was influenced by the content and form of information provided. Methods: Recruitment was facilitated through a healthcare organisation, schools and community organisations in a local authority in the south-west of England. Researcher observations of HPV vaccination sessions were carried out in three schools. Semi-structured interviews took place with 53 participants (young women, parents of adolescent children, school staff and immunisation nurses) during the 2017/18 and 2018/19 programme years. 
Interviews were recorded digitally and transcribed verbatim. Thematic analysis was undertaken, assisted by NVivo software. Results: Young women's active participation and independence within the HPV vaccination programme was constrained by the setting of vaccination and the primacy of parental consent procedures. The authoritarian school structure influenced the degree to which young women were able to actively participate in decisions about the HPV vaccination programme. Young women exercised some power, either to avoid or receive the vaccine, by intercepting parental consent forms and procedures. Reliance on leaflets to communicate information led to unmet information needs for young women and their families. Communication may be improved by healthcare professional advocacy, accessible formats of information, and delivery of educational sessions. Conclusions: Strategies to improve communication about the HPV vaccine may increase young people's autonomy in consent procedures, clarify young people's rights and responsibilities in relation to their health care services, and result in higher uptake of the HPV vaccination programme. Trial Registration: ISRCTN 49086105 ; Date of registration: 12 January 2018; Prospectively registered. abstract_id: PUBMED:25248873 Influences on human papillomavirus (HPV)-related information needs among women having HPV tests for follow-up of abnormal cervical cytology. Objectives: Testing for human papillomavirus (HPV) infection has recently been introduced into cervical screening programmes. We investigated (1) barriers to accessing and absorbing information and (2) factors that influence information needs among women undergoing HPV tests. Methods: In-depth interviews were conducted with 27 women who had HPV tests performed in a colposcopy clinic as part of follow-up of low-grade abnormal cytology or post-treatment for cervical intraepithelial neoplasia (CIN). Interviews were transcribed verbatim, coded and analysed using Framework Analysis, to identify main themes and sub-themes. Results: Among these women, barriers to accessing and absorbing HPV information were: being overwhelmed with information; context of the HPV test; colposcopy clinic experience(s); women's perceptions of medical professionals' behaviours and attitudes, and information available on the Internet. Factors influencing women's HPV information needs were: concerns surrounding abnormal cytology or diagnosis of CIN; amount of information provided about HPV; awareness HPV is sexually transmitted; previous negative health care experience(s); and the HPV test in relation to other life events. The timing of delivery of HPV information was key to women absorbing or remembering the information given; it was important that information was given in stages rather than altogether. Conclusions: In women undergoing HPV testing during follow-up, the amount and timing of delivery of HPV information requires careful consideration. Significant barriers exist to accessing and absorbing HPV information which, unless addressed, could have serious implications in terms of women's comprehension of HPV tests. Given the expanding use of HPV testing within cervical screening, further research on HPV-related information issues is needed. abstract_id: PUBMED:21726363 What Australian women want and when they want it: cervical screening testing preferences, decision-making styles and information needs. 
Background: New testing technologies and human papillomavirus (HPV) vaccines have recently brought changes to cervical cancer screening. In 2006, the Australian government also changed the protocol for managing abnormal Pap smears. Australian women's attitudes and preferences to these changes are largely unknown. Quantitative data on information needs and community attitudes to informed decision making in screening in Australia are also limited. Objective: This national study measures women's preferences for testing and management of abnormal screening results, preferred decision-making styles and information needs for cervical cancer screening. Design: A randomly selected sample of Australian women aged 18-70 participated in a structured telephone questionnaire, exploring testing preferences, information and decision-making needs. Results: A total of 1279, of 1571 eligible women, participated in the study with an overall response rate of 81.4%. Half of the women (n = 637) preferred having their Pap smears at least annually, and 85% wanted concurrent HPV testing. A large proportion of women preferred to be involved in decision making for both routine Pap smears (87%) and follow-up for abnormal results (89%). The majority of women wanted information on screening risks (70%) and benefits (77%); of these 81 (85%) wanted this information before screening. However, 63% of women only wanted information about follow-up examinations if they had an abnormal Pap test result. Conclusion: Australian women want to be involved in decision making for cervical cancer screening and require information on the risks and benefits of Pap testing prior to undergoing any screening. abstract_id: PUBMED:33784992 Understanding HPV-positive women's needs and experiences in relation to patient-provider communication issues: a qualitative study. Background: HPV testing has been integrated in cervical cancer screening program. Patient-providers relationship is extremely important to improve cervical cancer screening outcomes. This qualitative study aims to understand HPV-positive women's needs and preferences about HCPs and patient-provider communication based on their experiences of accessing primary and specialized care. Methods: We conducted 40 semi-structured interviews with HPV-positive women. Recorded interviews transcribed and analyzed using conventional content analysis approach. Results: The analysis of the data led to the extraction of three main categories, including: provider's communication and counseling skills, commitment to professional principles, and knowledgeable and competent provider. Women needed understandable discussion about HPV, emotional support and acceptance, receiving HPV-related guidance and advice, and some considerations during clinical appointments. Women needed HCPs to treat them respectfully, gently and with non-judgmental attitude. "Precancerous" and "high-risk" words and watching colposcopy monitor during procedure had made women anxious. Weak referral system and limited interactions among gynecologists and other HCPs highlighted by participants. Conclusion: The results of this study, based on the experiences and perceptions of HPV women receiving health care, contain messages and practical tips to healthcare providers at the primary and specialized levels of care to facilitate patient-provider communication around HPV. 
Providers need to approach the discussion of HPV with sensitivity and take individual needs and preferences into account to improve the HPV-positive women's healthcare experience. abstract_id: PUBMED:16156944 Australian women's needs and preferences for information about human papillomavirus in cervical screening. Objective: The role of human papillomavirus (HPV) in cervical cancer and developments in medical technology to prevent cervical cancer has changed information needs for women participating in cervical screening. Design: Qualitative face-to-face interviews were conducted with 19 women diagnosed with HPV infection on their Pap smear following routine cervical screening. Setting: Family planning clinics, general practice and specialist gynaecologist practices in Sydney and the surrounding area, Australia. Main Outcome Measures: Women's information needs, preferences and experiences of HPV diagnosis. Results: Women wanted further information on different HPV viral types, transmission, implications for sexual partners, prevalence, latency and regression of HPV, their management options and the implications of infection for cancer risk and fertility. Uncertainty about the key aspects of HPV, the style in which the clinician communicated the result and the mode of delivering the result (letter, telephone or consultation) influenced women's psychological response to the diagnosis of HPV. The delivery of results by letter alone was linked to considerable anxiety among the women interviewed. Women's experience of searching the Internet for further information about HPV was reported as difficult, anxiety provoking and contributing to the stigma of the infection because information was often located in the context of other sexually transmitted infections, with multiple sexual partners highlighted as a risk factor for infection. Conclusion: Women participating in cervical screening need high-quality information about HPV and its role in cervical cancer prior to screening rather than afterwards, when they face an abnormal result. The clinician potentially plays an important role in moderating the effects of diagnosis through the manner and mode in which an HPV diagnosis is delivered. Revision of cervical screening policy and practice in light of the changes in the understanding of HPV is recommended. abstract_id: PUBMED:27845597 Effects of numerical information on intention to participate in cervical screening among women offered HPV vaccination: a randomised study. Objectives: To investigate the effects of different types of information about benefits and harms of cervical screening on intention to participate in screening among women in the first cohorts offered human papilloma virus (HPV) vaccination. Design: Randomised survey study. Setting: Denmark. Subjects: A random sample of women from the birth cohorts 1993, 1994 and 1995 drawn from the general population. Interventions: A web-based questionnaire and information intervention. We randomised potential respondents to one of the following four different information modules about benefits and harms of cervical screening: no information; non-numerical information; and two numerical information modules. Moreover, we provided HPV-vaccinated women in one of the arms with numerical information about benefits and harms in two steps: firstly, information without consideration of HPV vaccination and subsequently information conditional on HPV vaccination. Main Outcome Measure: Self-reported intention to participate in cervical screening. 
Results: A significantly lower proportion intended to participate in screening in the two groups of women receiving numerical information compared to controls with absolute differences of 10.5 (95% CI: 3.3-17.6) and 7.7 (95% CI: 0.4-14.9) percentage points, respectively. Among HPV-vaccinated women, we found a significantly lower intention to participate in screening after numerical information specific to vaccinated women (OR of 0.38). Conclusions: Women are sensitive to numerical information about the benefits and harms of cervical screening. Specifically, our results suggest that HPV-vaccinated women are sensitive to information about the expected changes in benefits and harms of cervical screening after implementation of HPV vaccination. KEY POINTS Women were less likely to participate in cervical screening when they received numerical information about benefits and harms compared to non-numerical or no information. Specifically, numerical information about the potential impact of the reduced risk of cervical cancer among HPV-vaccinated women reduced the intention to participate among vaccinated women. abstract_id: PUBMED:34961491 A study on women's health information needs in menopausal age. Menopause is a natural event experienced by women in middle age. To help women manage this event, it is important to identify their health information needs. A study specific questionnaire was used to identify menopausal women's health information needs and the resources and challenges related to finding information about menopause. A total of 301 women aged 48-55 years completed the questionnaire. Data were analysed using negative binomial regression and chi-square tests. The most frequently sought information was that related to breast cancer (n = 209, 69.5%), hot flushes (n = 200, 66.5%), cervical cancer (n = 194, 64.5%), non-hormonal therapies for menopausal symptoms (n = 192, 64%), laboratory tests (n = 189, 63%) and joint and muscle pain (n = 188, 62.5%). The main sources of information were audiovisual media (n = 171, 57%), obstetricians (n = 165, 55%), friends (n = 157, 52%), family (n = 157, 52%) and the internet (n = 153, 51%). The two main challenges were not knowing how to correctly access information (n = 115, 38%) and not being aware of reliable sources of information (n = 108, 36%). Therefore, it is essential for policymakers and decision-makers to provide reliable and accurate information to increase awareness and reduce anxiety of women experiencing menopause. Answer: The available abstracts suggest that cervical cancer screening patient information leaflets do not fully meet the HPV information needs of women. Women's medical questions are addressed more frequently than psychosocial ones, and the type and amount of HPV information vary by target audience, leading to inconsistent and often inadequate answers to women's questions on HPV (PUBMED:18372144). Most women have no prior awareness of HPV, and the information provided during cervical screening leaves them with more questions than answers, indicating a need for clear communication and possibly a separate national screening programme HPV leaflet (PUBMED:17362569). Women perceive the information provided as good, but they still have doubts and misconceptions after receiving HPV+ test results. They express difficulties in formulating questions during consultations due to shame, excess information, or lack of familiarity with technical language. 
They value emotional support and being treated kindly by professionals (PUBMED:34779742). Young women's autonomy and information needs are constrained by the setting of vaccination and the primacy of parental consent procedures, with reliance on leaflets leading to unmet information needs (PUBMED:33172415). Barriers to accessing and absorbing HPV information include being overwhelmed with information, the context of the HPV test, and women's perceptions of medical professionals' behaviors and attitudes. Factors influencing women's HPV information needs include concerns surrounding abnormal cytology or diagnosis of CIN, awareness that HPV is sexually transmitted, and previous negative healthcare experiences (PUBMED:25248873). Australian women want to be involved in decision-making for cervical cancer screening and require information on the risks and benefits of Pap testing prior to undergoing any screening (PUBMED:21726363). HPV-positive women's needs and preferences about healthcare providers and patient-provider communication are influenced by their experiences of accessing primary and specialized care, highlighting the need for understandable discussion about HPV, emotional support, and non-judgmental treatment (PUBMED:33784992). In summary, while some information is provided, it is clear that current patient information leaflets and communication strategies do not fully meet the complex and varied HPV information needs of women. There is a need for more accurate, balanced, and accessible information that addresses both medical and psychosocial aspects of HPV, delivered in a manner that supports women's autonomy and decision-making (PUBMED:18372144; PUBMED:17362569; PUBMED:34779742; PUBMED:33172415; PUBMED:25248873; PUBMED:21726363; PUBMED:33784992).
Instruction: Breast cancer regional radiation fields for supraclavicular and axillary lymph node treatment: is a posterior axillary boost field technique optimal? Abstracts: abstract_id: PUBMED:34434580 A historical literature review on the role of posterior axillary boost field in the axillary lymph node coverage and development of lymphedema following regional nodal irradiation in breast cancer. To elucidate whether (1) a posterior axillary boost (PAB) field is an optimal method to target axillary lymph nodes (LNs); and (2) the addition of a PAB increases the incidence of lymphedema, a systematic review was undertaken. A literature search was performed in the PubMed database. A total of 16 studies were evaluated. There were no randomized studies. Seven articles have investigated dosimetric aspects of a PAB. The remaining 9 articles have determined the effect of a PAB field on the risk of lymphedema. Only 2 of 9 articles have prospectively reported the impact of a PAB on the risk of lymphedema development. There are conflicting reports on the necessity of a PAB. The PAB field provides a good coverage of level I/II axillary LNs because these nodes are usually at a greater depth. The main concern regarding a PAB is that it produces a hot spot in the anterior region of the axilla. Planning studies optimized a traditional PAB field. Prospective studies and the vast majority of retrospective studies have reported the use of a PAB field does not result in increasing the risk of lymphedema development over supraclavicular-only field. The controversies in the incidence of lymphedema suggest that field design may be more important than field arrangement. A key factor regarding the use of a PAB is the depth of axillary LNs. The PAB field should not be used unless there is an absolute indication for its application. Clinicians should weigh lymphedema risk in individual patients against the limited benefit of a PAB, in particular after axillary dissection. The testing of the inclusion of upper arm lymphatics in the regional LN irradiation target volume, and universal methodology measuring lymphedema are all areas for possible future studies. abstract_id: PUBMED:23910694 An optimized posterior axillary boost technique in radiation therapy to supraclavicular and axillary lymph nodes: a comparative study. To assess the advantages of an optimized posterior axillary (AX) boost technique for the irradiation of supraclavicular (SC) and AX lymph nodes. Five techniques for the treatment of SC and levels I, II, and III AX lymph nodes were evaluated for 10 patients selected at random: a direct anterior field (AP); an anterior to posterior parallel pair (AP-PA); an anterior field with a posterior axillary boost (PAB); an anterior field with an anterior axillary boost (AAB); and an optimized PAB technique (OptPAB). The target coverage, hot spots, irradiated volume, and dose to organs at risk were evaluated and a statistical analysis comparison was performed. The AP technique delivered insufficient dose to the deeper AX nodes. The AP-PA technique produced larger irradiated volumes and higher mean lung doses than the other techniques. The PAB and AAB techniques originated excessive hot spots in most of the cases. The OptPAB technique produced moderate hot spots while maintaining a similar planning target volume (PTV) coverage, irradiated volume, and dose to organs at risk. 
This optimized technique combines the advantages of the PAB and AP-PA techniques, with moderate hot spots, sufficient target coverage, and adequate sparing of normal tissues. The presented technique is simple, fast, and easy to implement in routine clinical practice and is superior to the techniques historically used for the treatment of SC and AX lymph nodes. abstract_id: PUBMED:18805650 Breast cancer regional radiation fields for supraclavicular and axillary lymph node treatment: is a posterior axillary boost field technique optimal? Purpose: To assess whether using an anterior oblique supraclavicular (SCV) field with a posterior axillary boost (PAB) field is an optimal technique for targeting axillary (AX) lymph nodes compared with two computed tomography (CT)-based techniques: (1) an SCV field with an anterior boost field and (2) intensity-modulated radiotherapy (IMRT). Methods And Materials: Ten patients with CT simulation data treated with postmastectomy radiation that included an SCV field were selected for the study. Supraclavicular nodes and AX Level I-III nodes within the SCV field were contoured and defined as the treatment target. Plans using the three techniques were generated and evaluated for each patient. Results: The anterior axillary boost field and IMRT resulted in superior dose coverage compared with PAB. Namely, treatment volumes that received 105%, 80%, and 30% of prescribed dose for IMRT plans were significantly less than those for the anterior axillary boost plans, which were significantly less than PAB. For PAB and anterior axillary boost plans, there was a linear correlation between treatment volume receiving 105% of prescribed dose and maximum target depth. Furthermore, the IMRT technique resulted in better lung sparing and dose conformity to the target than anterior axillary boost, which again was significantly better than PAB. The maximum cord dose for IMRT was small, but higher than for the other two techniques. More monitor units were required to deliver the IMRT plan than the PAB plan, which was more than the anterior axillary boost plan. Conclusions: The PAB technique is not optimal for treatment of AX lymph nodes in an SCV field. We conclude that CT treatment planning with dose optimization around delineated target volumes should become standard for radiation treatments of supraclavicular and AX lymph nodes. abstract_id: PUBMED:29234253 Evaluation of Sentinel Lymph Node Biopsy and Axillary Lymph Node Dissection for Breast Cancer Treatment Concepts - a Retrospective Study of 1,214 Breast Cancer Patients. Background: Most breast cancer patients require lumpectomy with axillary sentinel lymph node biopsy (SLNB) or axillary lymph node dissection (ALND). The ACOSOG Z0011-trial failed to detect significant effects of ALND on disease-free and overall survival among patients with limited sentinel lymph node (SLN) metastases. Intense dose-dense chemotherapy and supraclavicular fossa radiation (SFR) are indicated for patients with extensive axillary metastases. In this multicentered study, we investigated the relevance of ALND after positive SLNB to determine adequate adjuvant therapy. Methods: We retrospectively analyzed data from 1,214 patients with clinically nodal negative T1-T2 invasive breast cancer undergoing surgery at Hanau City Hospital Breast cancer center. Results: 681 patients underwent ALND after SLNB. 20 patients (8.5%) from the group with 1 or 2 SLN metastases (n = 236) showed more than 3 lymph node metastases after ALND. 
13 patients (31.7%) from the group with more than 2 SLN metastases (n = 41) were diagnosed with a minimum of 4 axillary lymph node metastases after ALND. Conclusions: In 8.5% of the patients with 1 or 2 SLN metastases, ALND detected more than 3 macrometastases, setting the indication for intense dose-dense chemotherapy and SFR. More than 2 SLN metastases, T stage and grading predict lymph node metastases. abstract_id: PUBMED:26120411 Regional lymph node radiotherapy in breast cancer: single anterior supraclavicular field vs. two anterior and posterior opposed supraclavicular fields. Background: The treatment of lymph nodes involved in breast cancer with radiotherapy leads to improved locoregional control and enhanced survival rates in patients after surgery. The aim of this study was to compare two treatment techniques, namely a single anterior posterior (AP) supraclavicular field with plan depth and two anterior and posterior opposed (AP/PA) supraclavicular fields. In the study, we also examined the relationships between the depth of supraclavicular lymph nodes (SCLNs) and the diameter of the wall of the chest and body mass index (BMI). Methods: Forty patients with breast cancer were analyzed using computed tomography (CT) scans. In the planning target volume (PTV), the SCLNs and axillary lymph nodes (AXLNs) were contoured, and, with attention to the PTV, supraclavicular (SC) depth was measured. The dosage that reached the aforementioned lymph nodes and the level of hot spots were investigated using two treatment methods, i.e., 1) AP/PA and 2) AP with three-dimensional (3D) planning. Each of these methods was analyzed using the program Isogray for the 6 MV compact accelerator, and the diameter of the wall of the chest was measured using the CT scan at the center of the SC field. Results: Coverage of 95% of the target volume with 95% or greater of the prescribed dose of 50 Gy (V95) showed ≥95% concordance between the two treatment techniques. According to the PTV, the depth of SCLNs and the diameter of the wall of the chest were 3-7 and 12-21 cm, respectively. Regression analysis showed that the mean SC depth (the mean plan depth) and the mean diameter of the wall of the chest were related directly to BMI (p<0.0001, adjusted R²=0.67) and (p<0.0001, adjusted R²=0.71), respectively. Conclusion: The AP/PA treatment technique was a more suitable choice of treatment than the AP field, especially for overweight and obese breast cancer patients. However, in the AP/PA technique, the use of a single low-energy (6 MV) photon beam caused more hot spots than usual. abstract_id: PUBMED:26732519 Limited Supraclavicular Radiation Field in Breast Cancer With ≥ 10 Positive Axillary Lymph Nodes. Purpose: The present study was conducted to evaluate the patterns of recurrence and factors related to axillary or supraclavicular recurrence (ASR) and to suggest the probable indications of supraclavicular radiotherapy (SCRT) field modification for breast cancer patients with ≥ 10 axillary lymph node (LN) metastases who had received the current standard systemic management and limited-field SCRT. Materials And Methods: We performed a retrospective study of patients with breast cancer with ≥ 10 axillary LN metastases who had received standard surgery with postoperative RT, including limited SCRT (level III and supraclavicular area) and taxane-based adjuvant chemotherapy (except for neoadjuvant chemotherapy), from January 2000 to June 2012.
ASR was defined as recurrence to levels I to III of the axillary or supraclavicular area. Results: The present study included 301 patients with breast cancer with ≥ 10 axillary LN metastases. The median follow-up period was 59.1 months (range, 7.4-167.9 months). Overall, 32 cases (10.6%) of locoregional recurrence were observed, and 27 patients (9.0%) exhibited ASR. Additionally, 16 patients (5.3%) developed recurrence in levels I or II of the axillary area, which are not included in the SCRT field. ASR-free survival was significantly related to the LN ratio (LNR) in both univariate and multivariate analysis. Conclusion: ASR was the most prevalent locoregional recurrence pattern in patients with breast cancer with ≥ 10 axillary LN metastases, and LNR was a significant prognostic factor for the development of ASR. Modification of the SCRT field, including the full axilla, should be considered in patients with a greater LNR. abstract_id: PUBMED:27553957 Trends in axillary treatment for breast cancer patients undergoing sentinel lymph node biopsy as determined by a questionnaire from the Japanese Breast Cancer Society. Background: Sentinel lymph node biopsy (SLNB) alone has been compared with SLNB followed by axillary lymph node dissection (ALND) in sentinel lymph node (SLN)-positive breast cancer patients in randomized phase III trials: the addition of ALND did not further improve the patient's outcome. However, there is still some controversy, regarding the clinical application of SLNB alone. To identify the optimal axillary treatment in the era of SLNB, the Japanese Breast Cancer Society conducted a group study of SLNB in 2014. Methods: A questionnaire on axillary surgery and radiation therapy was sent to 432 Japanese institutes in December 2014, and 309 (72 %) completed the questionnaire. Results: SLNB was performed at 98 % of the institutes, and 77 % offered irradiation for cancer treatment. Regarding breast-conserving surgery (BCS), SLNB alone was indicated at 41 % of the institutes in the cases of SLN with micrometastases. However, in the cases of SLN with macrometastases, ALND was performed at 64 %. The proportion of ALND seemed to be higher in total mastectomy than in BCS regardless of the SLN-positive status. In the cases of SLN with micrometastases, the radiation field was localized in the conserved breast at about half of the institutes. On the other hand, in the cases of SLN with macrometastases, it was extended to axillary and/or supraclavicular lesions beyond the conserved breast at about 70 % of the institutes. Conclusions: Japanese breast physicians were conservative with respect to the omission of ALND in SLN-positive breast cancer, especially in the cases of SLN with macrometastases. abstract_id: PUBMED:38186556 Technetium-99-Guided Axillary Lymph Node Identification: A Case Report of a Novel Technique for Targeted Lymph Node Excision Biopsy for Node Positive Breast Cancer After Neoadjuvant Chemotherapy. Targeted axillary lymph node identification for breast cancer involves localization and removal of previously marked metastatic lymph nodes after the completion of neoadjuvant chemotherapy (NACT), when clinical and radiological complete responses of the axillary nodes are achieved. 
Traditionally, axillary lymph node dissection is performed for patients with node positive disease, but the high rates of pathological complete responses now seen after NACT have ushered in lower morbidity techniques such as sentinel lymph node excision biopsies, targeted axillary lymph node dissection and targeted axillary lymph node identification (clip node identification) in node positive disease which has converted to clinical/radiologically node negative. The latter two techniques often require the use of expensive seeds and advanced localization techniques. Here we describe the case of a 59-year-old woman who was diagnosed with node positive invasive breast cancer who was sequenced with NACT. We developed a novel technique, where technetium-99m was injected directly into a previously clipped metastatic axillary lymph node which was then localized with the Neoprobe gamma detection system intra-operatively and removed. This is a relatively low-cost technique that can be easily introduced in limited resourced health systems where radio-guided sentinel lymph node biopsies are already being performed. abstract_id: PUBMED:26884657 Role of Combined Sentinel Lymph Node Biopsy and Axillary Node Sampling in Clinically Node-Negative Breast Cancer. Axillary lymph node status is a prognostic marker in breast cancer management, and axillary surgery plays an important role in staging and local control. This study aims to assess whether a combination of sentinel lymph node biopsy (SLNB) using patent blue dye and axillary node sampling (ANS) offers equivalent identification rate to dual tracer technique. Furthermore, we aim to investigate whether there are any potential benefits to this combined technique. Retrospective study of 230 clinically node-negative patients undergoing breast-conserving surgery for single T1-T3 tumours between 2006 and 2011. Axillae were staged using a combined blue dye SLNB/ANS technique. SLNs were localized in 226/230 (identification rate 98.3 %). Three of one hundred ninety-two patients with a negative SLN were found to have positive ANS nodes and 1/4 failed SLNB patients had positive ANS nodes. Thirty-four of two hundred twenty-six patients had SLN metastases and 11/34 (32.4 %) also had a positive non-sentinel lymph node on ANS. Twenty-one of twenty-four (87.5 %) node-positive T1 tumours had single node involvement. Nine of thirty-eight node-positive patients progressed to completion axillary clearance (cALND), and the rest were treated with axillary radiotherapy. Axillary recurrence was nil at median 5 year follow-up. Complementing SLNB with axillary node sampling (ANS) decreases the unavoidable false-negative rate associated with SLNB. Appropriate operator experience and technique can result in an SLN localization rate of 98 %, rivalling a dual tracer technique. The additional insight offered by ANS into the status of non-sentinel nodes has potential applications in an era of less frequent cALND. abstract_id: PUBMED:33673071 Dedicated Axillary MRI-Based Radiomics Analysis for the Prediction of Axillary Lymph Node Metastasis in Breast Cancer. Radiomics features may contribute to increased diagnostic performance of MRI in the prediction of axillary lymph node metastasis. The objective of the study was to predict preoperative axillary lymph node metastasis in breast cancer using clinical models and radiomics models based on T2-weighted (T2W) dedicated axillary MRI features with node-by-node analysis. 
From August 2012 until October 2014, all women who had undergone dedicated axillary 3.0T T2W MRI, followed by axillary surgery, were retrospectively identified, and available clinical data were collected. All axillary lymph nodes were manually delineated on the T2W MR images, and quantitative radiomics features were extracted from the delineated regions. Data were partitioned patient-wise to train 100 models using different splits for the training and validation cohorts to account for multiple lymph nodes per patient and class imbalance. Features were selected in the training cohorts using recursive feature elimination with repeated 5-fold cross-validation, followed by the development of random forest models. The performance of the models was assessed using the area under the curve (AUC). A total of 75 women (median age, 61 years; interquartile range, 51-68 years) with 511 axillary lymph nodes were included. On final pathology, 36 (7%) of the lymph nodes had metastasis. A total of 105 original radiomics features were extracted from the T2W MR images. Each cohort split resulted in a different number of lymph nodes in the training cohorts and a different set of selected features. Performance of the 100 clinical and radiomics models showed a wide range of AUC values between 0.41-0.74 and 0.48-0.89 in the training cohorts, respectively, and between 0.30-0.98 and 0.37-0.99 in the validation cohorts, respectively. With these results, it was not possible to obtain a final prediction model. Clinical characteristics and dedicated axillary MRI-based radiomics with node-by-node analysis did not contribute to the prediction of axillary lymph node metastasis in breast cancer based on data where variations in acquisition and reconstruction parameters were not addressed. Answer: The posterior axillary boost (PAB) field technique for treating supraclavicular and axillary lymph nodes in breast cancer has been evaluated in several studies, but its optimality remains a subject of debate. One study found that the PAB technique is not optimal for the treatment of axillary lymph nodes in a supraclavicular field. The study compared the PAB technique with an anterior axillary boost field and intensity-modulated radiotherapy (IMRT), concluding that the latter two resulted in superior dose coverage and better sparing of the lung, with IMRT providing the best dose conformity to the target. The PAB technique also required fewer monitor units than IMRT but more than the anterior axillary boost plan (PUBMED:18805650). Another study presented an optimized PAB technique (OptPAB) that combined the advantages of the PAB and anterior to posterior parallel pair (AP-PA) techniques, with moderate hot spots, sufficient target coverage, and adequate sparing of normal tissues. This optimized technique was found to be superior to the historically used techniques for the treatment of supraclavicular and axillary lymph nodes (PUBMED:23910694). However, a historical literature review indicated that there are conflicting reports on the necessity of a PAB. While the PAB field provides good coverage of level I/II axillary lymph nodes, the main concern is the production of a hot spot in the anterior region of the axilla. The review suggested that the use of a PAB field does not result in an increased risk of lymphedema development over a supraclavicular-only field, according to prospective studies and the majority of retrospective studies. 
It was recommended that the PAB field should not be used unless there is an absolute indication for its application, and clinicians should weigh the risk of lymphedema in individual patients against the limited benefit of a PAB, especially after axillary dissection (PUBMED:34434580). In summary, while the PAB technique can provide good coverage of axillary lymph nodes, it is not considered the optimal technique due to concerns about hot spots and the availability of more advanced techniques like IMRT that offer better target coverage and sparing of normal tissues. The decision to use a PAB should be made on a case-by-case basis, considering the individual patient's risk of lymphedema and the depth of axillary lymph nodes.
Instruction: Grading of age-related maculopathy for epidemiological studies: is digital imaging as good as 35-mm film? Abstracts: abstract_id: PUBMED:12917169 Grading of age-related maculopathy for epidemiological studies: is digital imaging as good as 35-mm film? Purpose: To compare stereo digital images with stereo 35-mm color transparencies as to the quality and reliability of grading age-related maculopathy (ARM) in the context of a multicenter European epidemiologic study (the EUREYE Study). Design: Instrument validation study. Participants: Ninety-one subjects (137 eyes) with varying degrees of ARM, including no ARM. Methods: From both eyes of the participants, 35-mm film and digital stereoscopic fundus images were obtained with two identical Topcon fundus cameras. Two experienced graders classified all signs of ARM according to the International Classification System. Agreement between imaging techniques and between graders was calculated using the weighted kappa statistic. Main Outcome Measures: Signs of ARM (number, size, and morphologic characteristics of drusen; pigmentary changes; geographic atrophy; and neovascular macular degeneration) as well as an overall staging system of increasing ARM severity. Results: The weighted kappa value for between-technique agreement ranged from 0.41 for number of drusen <63 µm to 0.79 for drusen type and total area occupied by drusen. The kappa values for atrophic and neovascular end-stage ARM were 0.87 and 0.94, respectively. The between-technique agreement on stages of ARM was approximately 0.76. The agreement between graders was largely the same for both techniques of imaging. Conclusions: In the described setting, digital images were as good as 35-mm film for the grading of ARM. Considering the practical advantages of digital imaging, this technique may serve well in epidemiologic studies of ARM. abstract_id: PUBMED:32370299 Age-Related Macular Degeneration Staging by Color Fundus Photography vs. Multimodal Imaging-Epidemiological Implications (The Coimbra Eye Study-Report 6). Epidemiology of age-related macular degeneration (AMD) is based on staging systems relying on color fundus photography (CFP). We aim to compare AMD staging using CFP to multimodal imaging with optical coherence tomography (OCT), infra-red (IR), and fundus autofluorescence (FAF), in a large cohort from the Epidemiologic AMD Coimbra Eye Study. All imaging exams from the participants of this population-based study were classified by a central reading center. CFP images were graded according to the International Classification and Grading System for AMD and staged with the Rotterdam classification. Afterward, CFP images were reviewed with OCT, IR, and FAF, and stage update was performed if necessary. Early and late AMD prevalence was compared in a total of 1616 included subjects. In CFP-based grading, the prevalence was 14.11% for early AMD (n = 228) and 1.05% (n = 17) for late AMD; nine cases (0.56%) had neovascular AMD (nAMD) and eight (0.50%) geographic atrophy (GA). Using multimodal grading, the prevalence increased to 14.60% for early AMD (n = 236) and 1.61% (n = 26) for late AMD, with 14 cases (0.87%) of nAMD and 12 (0.74%) of GA. AMD staging was more accurate with the multimodal approach, and this was especially relevant for late AMD. We propose that multimodal imaging should be adopted in the future to better estimate and compare epidemiological data in different populations.
abstract_id: PUBMED:23620429 Methods and reproducibility of grading optimized digital color fundus photographs in the Age-Related Eye Disease Study 2 (AREDS2 Report Number 2). Purpose: To establish continuity with the grading procedures and outcomes from the historical data of the Age-Related Eye Disease Study (AREDS), color photographic imaging and evaluation procedures for the assessment of age-related macular degeneration (AMD) were modified for digital imaging in the AREDS2. The reproducibility of the grading of index AMD lesion components and of the AREDS severity scale was tested at the AREDS2 reading center. Methods: Digital color stereoscopic fundus photographs from 4203 AREDS2 subjects collected at baseline and annual follow-up visits were optimized for tonal balance and graded according to a standard protocol slightly modified from AREDS. The reproducibility of digital grading of AREDS2 images was assessed by reproducibility exercises, temporal drift (regrading a subset of baseline annually, n = 88), and contemporaneous masked regrading (ongoing, monthly regrade on 5% of submissions, n = 1335 eyes). Results: In AREDS2, 91% and 96% of images received replicate grades within two steps of the baseline value on the AREDS severity scale for temporal drift and contemporaneous assessment, respectively (weighted Kappa of 0.73 and 0.76). Historical data for temporal drift in replicate gradings on the AREDS film-based images were 88% within two steps (weighted Kappa = 0.88). There was no difference in AREDS2-AREDS concordance for temporal drift (exact P = 0.57). Conclusions: Digital color grading has nearly the same reproducibility as historical film grading. There is substantial agreement for testing the predictive utility of the AREDS severity scale in AREDS2 as a clinical trial outcome. (ClinicalTrials.gov number, NCT00345176.) abstract_id: PUBMED:27296491 Comparison of Short-Wavelength Reduced-Illuminance and Conventional Autofluorescence Imaging in Stargardt Macular Dystrophy. Purpose: To compare grading results between short-wavelength reduced-illuminance and conventional autofluorescence imaging in Stargardt macular dystrophy. Design: Reliability study. Methods: Setting: Moorfields Eye Hospital, London (United Kingdom). Patients: Eighteen patients (18 eyes) with Stargardt macular dystrophy. Observation Procedures: A series of 3 fundus autofluorescence images using 3 different acquisition parameters on a custom-patched device was obtained: (1) 25% laser power and total sensitivity 87; (2) 25% laser power and freely adjusted sensitivity; and (3) 100% laser power and freely adjusted total sensitivity (conventional). The total area of 2 hypoautofluorescent lesion types (definitely decreased autofluorescence and poorly demarcated questionably decreased autofluorescence) was measured. Main Outcome Measures: Agreement in grading between the 3 imaging methods was assessed by kappa coefficients (κ) and intraclass correlation coefficients. Results: The mean ± standard deviation area for images acquired with 25% laser power and freely adjusted total sensitivity was 2.04 ± 1.87 mm² for definitely decreased autofluorescence (n = 15) and 1.86 ± 2.14 mm² for poorly demarcated questionably decreased autofluorescence (n = 12). The intraclass correlation coefficient (95% confidence interval) was 0.964 (0.929, 0.999) for definitely decreased autofluorescence and 0.268 (0.000, 0.730) for poorly demarcated questionably decreased autofluorescence.
Conclusions: Short-wavelength reduced-illuminance and conventional fundus autofluorescence imaging showed good concordance in assessing areas of definitely decreased autofluorescence. However, there was significantly higher variability between imaging modalities for assessing areas of poorly demarcated questionably decreased autofluorescence. abstract_id: PUBMED:15534124 Detection of age-related macular degeneration using a nonmydriatic digital camera and a standard film fundus camera. Objective: To compare gradings of lesions associated with age-related macular degeneration (AMD) from digital and stereoscopic film images. Design: Instrument validation study. Participants: Sixty-two subjects (124 eyes) with varying degrees of AMD, including no AMD. Methods: Images of the optic disc and macula were taken using a 45-degree digital camera (6.3 megapixels) through dark-adapted pupils and pharmacologically dilated pupils. In addition, 30-degree stereoscopic retinal film images were taken through pharmacologically dilated pupils of the same eyes. All images were graded for drusen size, type, and area; pigmentary abnormalities; geographic atrophy; and neovascular lesions using the modified Wisconsin Age-Related Maculopathy Grading System. Exact agreement and unweighted kappa scores were calculated for paired gradings resulting from digital and film images. Main Outcome Measure: Agreement between gradings obtained from stereoscopic slide transparencies and digital nonstereoscopic images. Results: Exact agreement between gradings of digital and stereoscopic film images taken through pharmacologically dilated pupils was 91% (kappa = 0.85) for the categories of none, early AMD, and late AMD. Exact agreement for gradings of digital images taken through dark-adapted pupils compared with gradings of film images was 80% (kappa = 0.69). Exact agreement for gradings of digital images captured through dark-adapted and pharmacologically dilated pupils was 86% (kappa = 0.78). In addition, kappa scores for agreement between different approaches for individual lesions were moderate to almost perfect. Conclusions: Gradings resulting from high-resolution digital images, especially when the pupil is pharmacologically dilated, are comparable with those resulting from film-based images. We conclude that digital imaging of the retina is useful for epidemiological studies of AMD. abstract_id: PUBMED:17429674 Combined grading for choroidal neovascularisation: colour, fluorescein angiography and autofluorescence images. Background: Patients with age-related macular degeneration (ARMD) have several imaging techniques carried out regularly. In this study we introduce a new grading model of autofluorescence images (AF), compare it with fluorescein angiography (FFA) and digital colour fundus photos (COL) and test for inter- and intraobserver reliability. Methods: A total of 71 eyes of 54 patients with bilateral or unilateral CNV had COL, FFA and AF, fulfilling the inclusion criterion of having all 3 types of imaging carried out on the same day or within 14 days. The grading of COL was performed by a trained grader based on the International ARM classification; FFA and AF images were independently graded by two trained retinal specialists in order to assess inter-observer reliability. Overall, 30% of all images were regraded after an interval of at least 14 days to assess intra-observer variability.
Results: The intergrader agreement was exact for classification of CNV (k = 1.00); almost perfect for FFA features (k = 0.83) and correspondence of decreased AF to COL (k = 0.94); substantial for patterns of decreased and increased AF (k = 0.80, k = 0.78), correspondence of patterns of increased AF to FFA and to COL (k = 0.78, k = 0.74) and background AF (k = 0.72); moderate for CNV diameter in FFA (k = 0.45), FFA pattern (k = 0.43), and dimension of increased and decreased AF (k = 0.5, k = 0.56); and fair for quality of FFA and AF images (k = 0.21, k = 0.26), respectively. The intragrader agreement varied from exact to substantial for all categories. Diffuse and reticular patterns of decreased AF and reticular pattern of increased AF correlated well with visual acuity worse than 6/24. Conclusion: The combined grading system was reliable for evaluating the three imaging techniques, and might be suitable for epidemiological studies and therapeutic trials where such grading is warranted. Certain AF patterns seem to predict VA outcome better than one might have predicted based on FFA. Further studies are needed to evaluate its usefulness in clinical settings for predicting outcomes for patients receiving therapy for end-stage disease. abstract_id: PUBMED:27541733 Epidemiology of age-related macular degeneration. Age-related macular degeneration (AMD) is the main cause of blindness in industrialized societies. Population-based epidemiological investigations generate important data on prevalence, incidence, risk factors, and future trends. This review summarizes the most important epidemiological studies on AMD with a focus on their transferability to Germany, including existing evidence for the main risk factors for AMD development and progression. Future tasks, such as the standardization of grading systems and the use of recent retinal imaging technology in epidemiological studies, are discussed. In Germany, epidemiological data on AMD are scarce. However, the need for epidemiological research in ophthalmology is currently being addressed by several recently started population-based studies. abstract_id: PUBMED:34300228 Dysregulated Tear Film Proteins in Macular Edema Due to the Neovascular Age-Related Macular Degeneration Are Involved in the Regulation of Protein Clearance, Inflammation, and Neovascularization. Macular edema and its further complications due to leakage from choroidal neovascularization in the course of age-related macular degeneration (AMD) are a leading cause of blindness among elderly individuals in developed countries. Changes in tear film proteomic composition have been reported to occur in various ophthalmic and systemic diseases. There is evidence that the acute form of neovascular AMD may be reflected in the tear film composition. Tear film was collected with Schirmer strips from patients with neovascular AMD and sex- and age-matched control patients. Two-dimensional electrophoresis was performed followed by MALDI-TOF mass spectrometry for identification of differentially expressed proteins. Quantitative analysis of the differential electrophoretic spots was performed with Delta2D software. Altogether, 11 significantly differentially expressed proteins were identified; of those, 8 were downregulated, and 3 were upregulated in the tear film of neovascular AMD patients. The differentially expressed proteins identified in tear film were involved in signaling pathways associated with impaired protein clearance, persistent inflammation, and neovascularization.
Tear film protein analysis is a novel way to screen AMD-related biomarkers. abstract_id: PUBMED:20523357 Agreement between image grading of conventional (45°) and ultra wide-angle (200°) digital images in the macula in the Reykjavik eye study. Purpose: To establish the agreement between image grading of conventional (45°) and ultra wide-angle (200°) digital images in the macula. Methods: In 2008, the 12-year follow-up was conducted on 573 participants of the Reykjavik Eye Study. This study included the use of the Optos P200C AF ultra wide-angle laser scanning ophthalmoscope alongside a Zeiss FF 450 conventional digital fundus camera on 121 eyes with or without age-related macular degeneration using the International Classification System. Of these eyes, detailed grading was carried out on five cases each with hard drusen, geographic atrophy and chorioretinal neovascularisation, and six cases of soft drusen. Exact agreement and κ-statistics were calculated. Results: Comparison of the conventional and ultra wide-angle images in the macula showed an overall 96.43% agreement (κ=0.93) with no disagreement at end-stage disease, although in one eye chorioretinal neovascularisation was graded as drusenoid pigment epithelial detachment. Of patients with drusen only, the exact agreement was 96.1%. The detailed grading showed no clinically significant disagreement between the conventional 45° and 200° images. Conclusions: On the basis of our results, there is good agreement between grading conventional and ultra wide-angle images in the macula. abstract_id: PUBMED:18997640 Grading of age-related maculopathy: slit-lamp biomicroscopy versus an accredited grading center. Purpose: To compare clinical age-related maculopathy (ARM) grading using slit-lamp biomicroscopy (SLB) versus photographic grading of stereoscopically captured fundus photographs (FP) using a high-resolution fundus camera. Methods: A subset (129) of participants in the Carotenoids in ARM study were clinically graded for ARM signs and the corresponding FPs were graded in an accredited reading center. Results: Drusen were said to be present in 192 (94.5%) eyes graded by FP, and in 165 (82.5%) eyes examined by SLB (agreement = 84%, k = 0.23). Good and modest degrees of agreement were observed between SLB and FP for quantification of total drusen number (agreement = 81%, k = 0.33) and for classification of drusen subtypes (agreement = 60%, k = 0.33), respectively. Localization of drusen to either the inner zone or the outer zone was comparable for both techniques of ARM grading (inner zone: agreement = 89%; outer zone: agreement = 88%, k = 0.39). Agreement between SLB and FP was 76% (k = 0.46) for identification of any pigment abnormality; however, agreement was less for hypo-pigmentation abnormalities (agreement = 64%, k = 0.21). Conclusions: From the statistical perspective, SLB grading of ARM is generally comparable with findings from standardized grading of FP. However, the lower levels of agreement for classification of drusen subtypes and detection of hypo-pigmentation suggest these features may go unrecorded in a study which relies on SLB for grading ARM. Answer: The comparison between digital imaging and 35-mm film for grading age-related maculopathy (ARM) in epidemiological studies has been explored in several studies. According to the research, digital imaging has been found to be as reliable as 35-mm film for grading ARM.
In a study comparing stereo digital images with stereo 35-mm color transparencies, digital images were found to be as good as 35-mm film for grading ARM, with weighted kappa values for between-technique agreement ranging from 0.41 to 0.94 for various ARM signs and an overall staging system agreement of approximately 0.76 (PUBMED:12917169). This suggests that digital imaging can serve well in epidemiological studies of ARM, considering its practical advantages. Another study that aimed to compare AMD staging using color fundus photography (CFP) to multimodal imaging with optical coherence tomography (OCT), infra-red (IR), and fundus autofluorescence (FAF) found that AMD staging was more accurate with the multimodal approach, especially for late AMD (PUBMED:32370299). This indicates that while digital imaging is effective, incorporating additional imaging modalities can enhance the accuracy of AMD staging. Furthermore, when the reproducibility of grading optimized digital color fundus photographs was tested in the Age-Related Eye Disease Study 2 (AREDS2), digital grading showed nearly the same reproducibility as historical film grading, with substantial agreement supporting the predictive utility of the AREDS severity scale as a clinical trial outcome (PUBMED:23620429). In conclusion, digital imaging is considered to be as good as 35-mm film for the grading of ARM in epidemiological studies, and its use is supported by substantial agreement in grading outcomes and the practical benefits it offers over traditional film-based methods.
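For reference, the agreement figures cited above are weighted kappa coefficients. The display below is the standard textbook definition of the statistic, included only as background and not reproduced from the cited papers:

\[
\kappa_w = 1 - \frac{\sum_{i,j} w_{ij}\, o_{ij}}{\sum_{i,j} w_{ij}\, e_{ij}},
\qquad
w_{ij} = \frac{|i-j|}{k-1} \quad \text{(linear disagreement weights)},
\]

where o_{ij} and e_{ij} are the observed and chance-expected proportions of image pairs graded into category i by one technique and category j by the other, and k is the number of ordered grading categories. A value of 1 indicates perfect agreement and 0 indicates agreement no better than chance, which is why values in the 0.76-0.94 range are conventionally read as substantial to almost perfect agreement.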
Instruction: Pancreatic cancer with paraaortic lymph node metastasis: a contraindication for radical surgery? Abstracts: abstract_id: PUBMED:36187398 Efficacy and safety of laparoscopic radical resection following neoadjuvant therapy for pancreatic ductal adenocarcinoma: A retrospective study. Background: Multiple studies have demonstrated that neoadjuvant chemotherapy (NACT) can prolong the overall survival of pancreatic ductal adenocarcinoma (PDAC) patients. However, most studies have focused on open surgery following NACT. Aim: To investigate the efficacy and safety of laparoscopic radical resection following NACT for PDAC. Methods: We retrospectively analyzed the clinical data of 15 patients with pathologically confirmed PDAC who received NACT followed by laparoscopic radical surgery in our hospital from December 2019 to April 2022. All patients underwent abdominal contrast-enhanced computed tomography (CT) and positron emission tomography-CT before surgery to accurately assess tumor stage and exclude distant metastasis. Results: All 15 patients with pancreatic cancer were successfully converted to surgical resection after NACT, including 8 patients with pancreatic head cancer and 7 patients with pancreatic body and tail cancer. Among them, 13 patients received the nab-paclitaxel plus gemcitabine regimen (gemcitabine 1000 mg/m2 plus nab-paclitaxel 125 mg/m2 on days 1, 8, and 15 every 4 wk) and 2 patients received the modified FOLFIRINOX regimen (intravenous oxaliplatin 68 mg/m2, irinotecan 135 mg/m2, and leucovorin 400 mg/m2 on day 1 and fluorouracil 400 mg/m2 on day 1, followed by 46-h continuous infusion of fluorouracil 2400 mg/m2). After each treatment cycle, abdominal CT, tumor markers, and circulating tumor cell counts were reviewed to evaluate the treatment efficacy. All 15 patients achieved partial remission. The surgical procedures included laparoscopic pancreaticoduodenectomy (LPD, n = 8) and laparoscopic radical antegrade modular pancreatosplenectomy (L-RAMPS, n = 7). None of them were converted to a laparotomy. One patient with pancreatic head carcinoma was found to have portal vein involvement during the operation, and LPD combined with vascular resection and reconstruction was performed. The amount of blood loss and operation times of L-RAMPS vs LPD were 435.71 ± 32.37 mL vs 343.75 ± 145.01 mL and 272.52 ± 49.14 min vs 444.38 ± 68.63 min, respectively. The number of dissected lymph nodes was 16.87 ± 4.10, and 3 patients had positive lymph nodes. One patient developed grade B postoperative pancreatic fistula (POPF) after L-RAMPS, and one patient experienced jaundice after LPD. None of the patients died after surgery. As of April 2022, progressive disease was noted in 4 patients, 2 patients had liver metastasis, and one had both liver metastasis and lymph node metastasis and died during the follow-up period. Conclusion: Laparoscopic radical resection of PDAC after NACT is safe and effective if it is performed by a surgeon with rich experience in LPD and in a large center of pancreatic surgery. abstract_id: PUBMED:35093863 A Systematic Review of Minimally Invasive Versus Open Radical Antegrade Modular Pancreatosplenectomy for Pancreatic Cancer. Background/aim: The aim of this study was to investigate surgical and oncological outcomes of minimally invasive (MI) and open radical antegrade modular pancreatosplenectomy (RAMPS) for the treatment of left-sided pancreatic cancer. 
Materials And Methods: A systematic literature search and meta-analyses were performed focusing on short-term surgical oncology of MI- and open-RAMPS. Results: A total of seven studies with 423 patients were included in this review. The equivalent short-term and long-term outcomes of the groups were confirmed. The results of meta-analyses found no significant difference in R0 resection rates (OR=1.78, 95%CI=0.76-4.15, p=0.18), although MI-RAMPS was associated with a smaller number of dissected lymph nodes (MD=-3.14, 95%CI=-4.75 to -1.53, p<0.001) and lymph node metastases (OR=0.55, 95%CI=0.31-0.97, p=0.04). Conclusion: MI-RAMPS could provide surgically and oncologically feasible outcomes for well-selected left-sided pancreatic cancer as compared to open-RAMPS. However, further high-level evidence is needed to confirm survival benefits following MI-RAMPS. abstract_id: PUBMED:22353530 Radical distal gastrectomy in laparoscopic and open surgery: is it necessary for pancreatic capsule resection? Background/aims: To explore the involvement of the pancreatic capsule during radical gastrectomy in gastric cancer. Methodology: Pancreatic capsule samples were collected from the 83 cases (56 men and 27 women) during open radical gastrectomy and laparoscopic resection between January 2007 and July 2008. RT-PCR and immunohistochemistry were applied for tumor detection. There was a 2-year follow-up; the relationships among pancreatic capsule involvement, tumor stage and survival rate were evaluated. Results from radical distal gastrectomy were combined with those of gastric cancer pancreatic capsule cleaning; clinical data, pathology, immunohistochemistry and RT-PCR were used to confirm the necessity of pancreatic capsule resection in laparoscopic radical gastrectomy. Results: H&E staining of the pancreatic capsule showed no tumor existence in any of the 83 patients, but immunohistochemistry showed CK20 positive cells in 20 patients (33.7%), while RT-PCR detected CK20 mRNA positive cells in 42 patients (50.6%). Cases with stage T1 and T2 were negative for CK20 in both RT-PCR and immunohistochemistry and the few cases with T3 and T4 were also negative in both RT-PCR and immunohistochemistry. The metastasis in the pancreatic capsule correlated mainly with the invasive serous membrane, lymph node metastasis and tumor stage (p<0.05) but not with gender and age (p>0.05). Conclusions: For T1 and T2 stage, there was no evidence of pancreatic capsule metastasis, which may facilitate decision making regarding pancreatic capsule resection during radical distal gastrectomy. abstract_id: PUBMED:19621722 Pancreatic cancer with distant metastases: a contraindication for radical surgery? Background/aims: The purpose of this study was to analyze cases of resected pancreatic cancer with distant metastasis (M1) and to review the surgical indication for these patients. Methodology: Between July 1981 and December 2007, 542 patients with pancreatic cancer underwent surgery at the Department of Surgery II, Nagoya University. These patients included 48 cases of paraaortic lymph node metastases, 11 cases of hepatic metastases and 6 cases of peritoneal metastases. The overall survival rates were evaluated using the Kaplan-Meier method. Results: Overall survival in patients stratified by M0 and M1 showed significant differences between M0 and M1 cases. As for hepatic metastases and peritoneal metastases, no significant difference in survival was observed between resected and unresected cases.
However, survival in cases of paraaortic lymph node metastases was better than that in unresected cases, although this observation was not statistically significant. Conclusions: Hepatic or peritoneal metastases are contraindications for radical surgery for pancreatic cancer. On the other hand, patients with paraaortic lymph node metastases are relatively promising targets for radical surgery, and radical resection with extended lymphadenectomy remains an option for these patients. abstract_id: PUBMED:19615291 Postoperative adjuvant radiotherapy for pancreatic carcinoma patients after radical resection. Objective: To retrospectively investigate the difference in survival of pancreatic adenocarcinoma patients treated by radical surgery with or without adjuvant radiation therapy. Methods: Forty-four patients with pancreatic cancer underwent surgical resection with a curative intent, and were divided into two groups: surgery alone (n = 24) or surgery combined with postoperative external beam radiotherapy (EBRT) (n = 20). Survival as an endpoint was analyzed between the two groups. Results: All 44 patients completed their scheduled treatment. The median survival time of the patients treated with radical resection alone was 379 days versus 665 days for those treated with combined therapy. The 1-, 3-, and 5-year survival rates of the patients treated with radical resection alone were 46.3%, 8.3%, 4.2% versus 65.2%, 20.2%, 14.1% for the patients treated with combined therapy, respectively, with a significant difference between the two groups (P = 0.017). Local-regional relapse was significantly less frequent in the postoperative EBRT group than in the surgery-alone group (P < 0.05), while the additional postoperative radiation therapy did not increase the complication rate (P > 0.05). Conclusion: Postoperative external beam radiation therapy can improve survival in patients with pancreatic adenocarcinoma. abstract_id: PUBMED:30348713 CD44 Predicts Early Recurrence in Pancreatic Cancer Patients Undergoing Radical Surgery. Background/aim: Pancreatic ductal adenocarcinoma (PDAC) is one of the most aggressive types of digestive cancer. Recurrence within one year after surgery is inevitable in most PDAC patients. Recently, cluster of differentiation 44 (CD44) has been shown to be associated with tumor initiation, metastasis and prognosis. This study aimed to explore the correlation of CD44 expression with clinicopathological factors and the role of CD44 in predicting early recurrence (ER) in PDAC patients after radical surgery. Materials And Methods: PDAC patients who underwent radical resection between January 1999 and March 2015 were enrolled in this study. Tumor recurrence within 6 months after surgery was defined as ER. Immunohistochemical staining was performed with anti-CD44 antibodies. The association between clinicopathological parameters and CD44 expression was analyzed. Predictors for ER were also assessed with univariate and multivariate analyses. Results: Overall, 155 patients were included in this study. Univariate analysis revealed CA19-9 levels (p=0.014), CD44 histoscores (H-scores; p=0.002), differentiation (p=0.010), nodal status (p=0.005), stage (p=0.003), vascular invasion (p=0.007), lymphatic invasion (p<0.001) and perineural invasion (p=0.042) as risk factors for ER. In multivariate analysis, high CA19-9 levels, high CD44 H-scores and poor differentiation independently predicted ER.
Conclusion: High CA19-9 levels, high CD44 H-scores and poor differentiation are independent predictors for ER in PDAC patients undergoing radical resection. Therefore, the determination of CD44 expression might help in identifying patients at a high risk of ER for more aggressive treatment after radical surgery. abstract_id: PUBMED:1857030 The results and problems of extensive radical surgery for carcinoma of the head of the pancreas. Since 1973, 152 patients with pancreatic carcinoma have undergone surgery in our clinic, including 110 with carcinoma of the head of the pancreas. Of these 110 patients, resections were performed on 43 (39.1 per cent), 33 (30 per cent) of whom underwent a curative resection based on macroscopic evidence. Six of the patients who underwent macroscopic curative resection survived for five years, giving a five-year survival rate of 36.5 per cent by the Kaplan-Meier method after excluding 6 operative deaths. We compared the extent of pancreatic cancer by constructing survival curves according to the General Rules published by the Japan Pancreas Society. There was no statistically significant difference in survival based on tumor size or stage; however, there were significant differences in the survival curves for so versus se (the absence or presence of invasion of the anterior capsule of the pancreas), rpo versus rpe (the absence or presence of invasion of the retroperitoneal tissue), ew(-) versus ew(+) (the absence or presence of invasion at the surgical margin of resection), and n0 versus n1 (the extent of lymph node metastasis). The results of this comparison suggest that extended radical pancreatectomy may be indicated for the treatment of pancreatic cancer, as the standard radical operation for pancreatic cancer may miss tumors which have spread to the retroperitoneum and extrapancreatic nerve plexus. abstract_id: PUBMED:2316288 Correlation of tumor site and tumor volume in radical surgery of cancer of the peripapillary area. In a group of 52 partial duodenopancreatectomies (34 cancers of the head of the pancreas, 14 cancers of the papilla and 4 cancers of the common bile duct), the dependence of tumour volume and tumour localisation on carcinomatous lymph node involvement, infiltration of surrounding tissues, infiltration of great visceral vessels and non-radical resection was determined. The analysis demonstrates that there is a significant difference in volume between radically treated carcinomas of the common bile duct, the papilla of Vater and the head of the pancreas, with mean volumes of 463:1,851:11,835 mm³ or 1:4:26. Furthermore, the investigation shows that for every analysed parameter the median volume is larger for positive observations than for negative ones: lymph node involvement 10,606:6,459 mm³, infiltration of surrounding tissues 10,444:6,703 mm³, infiltration of great visceral vessels 14,923:7,144 mm³, non-radical resection 18,130:5,343 mm³. From these results it is concluded that early stages of periampullary carcinomas with a good chance for cure are only present in radically treated cancers of the papilla and the common bile duct. In cancers of the head of the pancreas, the significantly larger primary tumours and the more advanced staging result in a dismal prognosis. abstract_id: PUBMED:34228945 Short- and Long-Term Outcomes of Robotic Surgery for Rectal Neuroendocrine Tumor. Background. The optimal radical surgical approach for rectal neuroendocrine tumor (NET) is unknown. Methods.
This study evaluated the short- and long-term outcomes of 27 patients who underwent robotic radical surgery for rectal NET between 2011 and 2019. Results. The median distance from the lower border of the tumor to the anal verge was 5.0 cm. The median tumor size was 9.5 mm. Six patients (22%) had lymph node metastasis. The incidences of postoperative complications of grade II and grade III or more according to the Clavien-Dindo classification were 11% and 0%, respectively. All patients underwent sphincter-preserving surgery, and no patients required conversion to open surgery. The median follow-up time was 48.9 months, and both the 3-year overall survival and relapse-free survival rates were 100%. Conclusions. Short- and long-term outcomes of robotic surgery for rectal NET were favorable. Robotic surgery may be a useful surgical approach for rectal NET. abstract_id: PUBMED:1463338 Extensive radical surgery for carcinomas of the head of the pancreas. Since 1973, we have applied extensive radical surgery for the head of the pancreas in order to improve not only the resection rate but also the surgical therapeutic results. The main points of our surgical procedure are as follows: (1) complete resection of the surrounding connective tissues in the retroperitoneum, including the extrapancreatic neural plexus; (2) extensive lymph node dissection; and (3) combined resection of the portal vein. As a result, our 3-year survival rate has increased to 31.4% and the 5-year survival rate to 27.9% in patients who underwent macroscopically complete resection. These surgical therapeutic results are not satisfactory, but we have had 10 patients who survived more than 3 years. It is also true, however, that this procedure is extremely invasive and requires careful, long-term nutritional management after surgery. Therefore, the cooperation of not only the patients themselves but also their families is indispensable for the success of the treatment. Answer: Pancreatic cancer with paraaortic lymph node metastasis is not necessarily a contraindication for radical surgery. According to a study that analyzed cases of resected pancreatic cancer with distant metastasis, including paraaortic lymph node metastases, it was found that survival in cases of paraaortic lymph node metastases was better than that in unresected cases, although the difference was not statistically significant. This suggests that patients with paraaortic lymph node metastases may still be considered for radical surgery, and radical resection with extended lymphadenectomy remains an option for these patients (PUBMED:19621722).
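As background for the survival comparison summarized above, the Kaplan-Meier method named in PUBMED:19621722 estimates the survival function from censored follow-up data. The display below is the textbook definition of the estimator and is not taken from the cited study:

\[
\hat{S}(t) = \prod_{t_i \le t} \left(1 - \frac{d_i}{n_i}\right),
\]

where t_i are the observed event times, d_i is the number of deaths at time t_i, and n_i is the number of patients still at risk just before t_i. Survival curves estimated in this way for the resected and unresected M1 subgroups are the basis of the reported, non-significant survival advantage in paraaortic lymph node metastases.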
Instruction: Vancouver and the 2010 Olympic Games: physical activity for all? Abstracts: abstract_id: PUBMED:24385473 Vancouver and the 2010 Olympic Games: physical activity for all? Background: The City of Vancouver, British Columbia, strategically designed and implemented a municipal health promotion policy--the Vancouver Active Communities policy--to leverage the 2010 Olympic Games. The goal of the policy was to increase physical activity participation among Vancouver residents by 2010. Methods: In this paper, we conduct a critical policy analysis of health promotion policy documents that were available on the City of Vancouver's website. Results: We elaborate on the background to the policy and, more specifically, we examine its content: the problem definition, policy goals, and policy instruments. Discussion: Our analysis showed inconsistency within the policy, particularly because the implemented policy instruments were not designed to address the needs of the identified target populations in need of health promotion efforts, which were used to legitimize the approval of funding for the policy. Inconsistency across municipal policies, especially in terms of promoting physical activity among low-income residents, was also problematic. Conclusions: If other municipalities seek to leverage health promotion funding related to hosting sport mega-events, the programs and services should be designed to benefit the target populations used to justify the funding. Furthermore, municipalities should clearly indicate how funding will be maintained beyond the life expectancy of the mega-event. abstract_id: PUBMED:19717101 Traveling to Canada for the Vancouver 2010 Winter Olympic and Paralympic Games. The 21st Winter Olympic Games will be held in Vancouver, British Columbia, Canada from February 12 to 28, 2010. Following the Winter Olympic Games, the Winter Paralympic Games will be held from March 12 to 21, 2010. There will be 86 winter sporting events hosted in Vancouver with 5500 athletes staying in two Olympic Villages. Another 2800 members of the media, 25,000 volunteers, and 1 million spectators are expected in attendance. This paper reviews health and safety issues for all travelers to Canada for the 2010 Vancouver Winter Olympic Games with a specific focus on pre-travel planning, road and transportation safety in British Columbia, natural and environmental hazards, Olympic medical facilities, safety and security, and infectious disease. abstract_id: PUBMED:21393259 Fit for the fight? Illnesses in the Norwegian team in the Vancouver Olympic Games. Background: The development of strategies to prevent illnesses before and during Olympic Games provides a basis for improved health and Olympic results. Objective: (1) To document the efficacy of a prevention programme on illness in a national Olympic team before and during the 2010 Vancouver Olympic Winter Games (OWG), (2) to compare the illness incidence in the Norwegian team with Norwegian incidence data during the Turin 2006 OWG and (3) to compare the illness incidence in the Norwegian team with illness rates of other nations in the Vancouver OWG. Methods: Information on prevention measures of illnesses in the Norwegian Olympic team was based on interviews with the Chief Medical Officer (CMO) and the Chief Nutrition and Sport Psychology Officers, and on a review of CMO reports before and after the 2010 OWG. The prevalence data on illness were obtained from the daily reports on injuries and illness to the International Olympic Committee.
Results: The illness rate was 5.1% (five of 99 athletes) compared with 17.3% (13 out of 75 athletes) in Turin (p=0.008). A total of four athletes missed one competition during the Vancouver Games owing to illness, compared with eight in Turin. The average illness rate for all nations in the Vancouver OWG was 7.2%. Conclusions: Although no definite cause-and-effect link between the implementation of preventive measures and the prevalence of illness in the 2010 OWG could be established, the reduced illness rate compared with the 2006 OWG and the low prevalence of illnesses compared with other nations in the Vancouver OWG suggest that the preparations were effective. abstract_id: PUBMED:35706535 Olympic rankings based on objective weighting schemes. In this paper, we propose an objective principal components weighting scheme for all-time Winter Olympic gold, silver and bronze medals based solely on the number of medals won. Our results suggest that approximately equal weights be assigned (or the total medal counts be used regardless of color) if all of the three medal types are retained for ranking purposes. When the proposed methodology is tested against five alternative weighting schemes that have been suggested in the literature using the results for the 2010 Vancouver Winter Olympics, we find a significant agreement in the country rankings. Furthermore, our implementation of a principal components variable reduction strategy results in the identification of silver as the best representative medal count for parsimonious Winter Olympics rankings. abstract_id: PUBMED:24302776 A profile of the Youth Olympic Taekwondo Athlete. Our study aims to identify trends in anthropometric attributes and competitive strategies of successful (medalists) versus non-medalist young Olympic Taekwondo competitors by gender in terms of body mass, body-mass index (BMI) and fighting technique at the Youth Olympic Games 2010. Results were then compared to those of adult Taekwondo Olympic athletes in 2000, 2004 and 2008. Data on 96 Taekwondo athletes were obtained from the official Youth Olympic website. A LOGIT analysis was performed on the following six independent variables: height, body mass, body mass index, gender, techniques used to score, and warnings obtained during a match. The study did find some differences between winners and non-winners for males and females, although none of the differences were statistically significant. Consequently, training personnel may enhance the winning potential of Taekwondo competitors by focusing on offensive versus defensive techniques and improving the quality of punching. abstract_id: PUBMED:25182041 The impact of the Vancouver Winter Olympics on population level physical activity and sport participation among Canadian children and adolescents: population based study. Background: There has been much debate about the potential impact of the Olympics. The purpose of this study was to determine if hosting the 2010 Vancouver Olympic Games (OG) encouraged Canadian children to be physically active. Methods: Children 5-19 years (n = 19862) were assessed as part of the representative Canadian Physical Activity Levels Among Youth surveillance study between August 2007 and July 2011. Parents were asked if the child participated in organized physical activity or sport. In addition, children wore pedometers for 7 days to objectively provide an estimate of overall physical activity.
Mean steps/day and percent participating in organized physical activity or sport were calculated by time period within year for Canada and British Columbia. The odds of participation by time period were estimated by logistic regression, controlling for age and sex. Results: Mean steps were lower during the Olympic period compared with Pre- (607 fewer steps/day; 95% CI 263-950 steps/day) and Post-Olympic (1246 fewer steps; 95% CI 858-1634 steps) periods for Canada. There was no difference by time period in British Columbia. A similar pattern in mean steps by time period was observed across years, but there were no significant differences in activity within each of these periods between years. The likelihood of participating in organized physical activity or sport by time period within or across years did not differ from baseline (August-November 2007). Conclusion: The 2010 Olympic Games had no measurable impact on objectively measured physical activity or the prevalence of overall sports participation among Canadian children. Much greater cross-government and long-term efforts are needed to create the conditions for an Olympic legacy effect on physical activity. abstract_id: PUBMED:31016215 Data on sentiments and emotions of Olympic-themed tweets. Two code files and one dataset related to Olympic Twitter activity are the foundation for this article. Through Twitter's Spritzer streaming API (Application Programming Interface), we collected over 430 million tweets from May 12th, 2016 to September 12th, 2016, spanning a window around the Rio de Janeiro Olympics and Paralympics. We cleaned and filtered these tweets to contain Olympic-related content. We then analyzed the raw data of 21,218,652 tweets, including location data, language, and tweet content, to distill the sentiment and emotions of Twitter users pertaining to the Olympic Games (Kassens-Noor et al., 2019). We generalized the original data set to comply with Twitter's Terms of Service and Developer Agreement (2018). We present the modified dataset and accompanying code files in this article to suggest using both for further analysis on sentiment and emotions related to the Rio de Janeiro Olympics and for comparative research on imagery and perceptions of other Olympic Games. abstract_id: PUBMED:20820057 Sports injuries and illnesses during the Winter Olympic Games 2010. Background: Identification of high-risk sports, including their most common and severe injuries and illnesses, will facilitate the identification of sports and athletes at risk at an early stage. Aim: To analyse the frequencies and characteristics of injuries and illnesses during the XXI Winter Olympic Games in Vancouver 2010. Methods: All National Olympic Committees' (NOC) head physicians were asked to report daily the occurrence (or non-occurrence) of newly sustained injuries and illnesses on a standardised reporting form. In addition, the medical centres at the Vancouver and Whistler Olympic clinics reported daily on all athletes treated for injuries and illnesses. Results: Physicians covering 2567 athletes (1045 females, 1522 males) from 82 NOCs participated in the study. The reported 287 injuries and 185 illnesses resulted in an incidence of 111.8 injuries and 72.1 illnesses per 1000 registered athletes. In relation to the number of registered athletes, the risk of sustaining an injury was highest for bobsleigh, ice hockey, short track, alpine freestyle and snowboard cross (15-35% of registered athletes were affected in each sport).
The injury risk was lowest for the Nordic skiing events (biathlon, cross country skiing, ski jumping, Nordic combined), luge, curling, speed skating and freestyle moguls (less than 5% of registered athletes). Head/cervical spine and knee were the most common injury locations. Injuries were evenly distributed between training (54.0%) and competition (46.0%; p=0.18), and 22.6% of the injuries resulted in an absence from training or competition. In skeleton, figure and speed skating, curling, snowboard cross and biathlon, every 10th athlete suffered from at least one illness. In 113 illnesses (62.8%), the respiratory system was affected. Conclusion: At least 11% of the athletes incurred an injury during the games, and 7% of the athletes an illness. The incidence of injuries and illnesses varied substantially between sports. Analyses of injury mechanisms in high-risk Olympic winter sports are essential to better direct injury-prevention strategies. abstract_id: PUBMED:34746779 Olympic Sports Science-Bibliometric Analysis of All Summer and Winter Olympic Sports Research. Introduction: The body of scientific literature on sports and exercise continues to expand. The summer and winter Olympic games will be held over a 7-month period in 2021-2022. Objectives: We took this rare opportunity to quantify and analyze the main bibliometric parameters (i.e., the number of articles and citations) across all Olympic sports to weigh and compare their importance and to assess the structure of the "sport sciences" field. The present review aims to perform a bibliometric analysis of Olympic sports research. We quantified the following topics: (1) the most investigated sports; (2) the main journals in which the studies are published; (3) the main factors explaining sport-specific scientific attractiveness; (4) the influence of being in the Olympic programme, economic weight, and local influences on research output; and (5) which research topic is the most investigated across sports. Methods: We searched 116 sport/exercise journals on PubMed for the 40 summer and 10 winter Olympic sports. A total of 34,038 articles were filtered for a final selection of 25,003 articles (23,334 articles on summer sports and 1,669 on winter sports) and a total of 599,820 citations. Results and Discussion: Nine sports [football (soccer), cycling, athletics, swimming, distance & marathon running, basketball, baseball, tennis, and rowing] were involved in 69% of the articles and 75% of the citations. Football was the most cited sport, with 19.7 and 26.3% of the total number of articles and citations, respectively. All sports yielded some scientific output, but 11 sports (biathlon, mountain biking, archery, diving, trampoline, skateboarding, skeleton, modern pentathlon, luge, bobsleigh, and curling) accumulated a total of fewer than 50 publications. While ice hockey is the most prominently represented winter sport in the scientific literature, winter sports overall have produced minor scientific output. Further analyses show a large scientific literature on team sports, particularly American professional sports (i.e., baseball, basketball, and ice hockey) and the importance of inclusion in the Olympic programme to increasing scientific interest in "recent" sports (i.e., triathlon and rugby sevens). We also found local/cultural influence on the occurrence of a sport in a particular "sport sciences" journal.
Finally, the relative distribution of six main research topics (i.e., physiology, performance, training and testing, injuries and medicine, biomechanics, and psychology) varied widely across sports and reflected the specific performance factors of each sport. abstract_id: PUBMED:26577884 Exploring an Olympic "Legacy": Sport Participation in Canada before and after the 2010 Vancouver Winter Olympics. Guided by the notion of a trickle-down effect, the present study examines whether sport participation in Canada increased following the 2010 Winter Olympics in Vancouver. Comparing rates of sport participation prior to and following the Games using nationally representative data, the results suggest that the Olympics had almost no impact on sport participation in Canada, although there does appear to be a modest "bounce" in sport participation in the Vancouver area immediately following the Vancouver Games. As such, if the trickle-down effect did occur, the analysis suggests that the effect was locally situated, short-lived, and small. Answer: The Vancouver 2010 Olympic Games aimed to increase physical activity participation among Vancouver residents through the implementation of the Vancouver Active Communities policy. However, a critical policy analysis of health promotion policy documents revealed inconsistencies within the policy. The policy instruments implemented were not designed to address the needs of the identified target populations in need of health promotion efforts, which were used to justify the approval of funding for the policy. There was also inconsistency across municipal policies, particularly in promoting physical activity among low-income residents (PUBMED:24385473). Despite the intentions, the impact of the Vancouver Winter Olympics on population-level physical activity and sport participation among Canadian children and adolescents was negligible. A study found that there was no measurable impact on objectively measured physical activity or the prevalence of overall sports participation among Canadian children. The study concluded that much greater cross-government and long-term efforts are needed to create the conditions for an Olympic legacy effect on physical activity (PUBMED:25182041). In a separate study exploring the Olympic "legacy," it was found that the 2010 Winter Olympics in Vancouver had almost no impact on sport participation in Canada. There was a modest "bounce" in sport participation in the Vancouver area immediately following the Games, suggesting that if the trickle-down effect did occur, it was locally situated, short-lived, and small (PUBMED:26577884). In summary, while the Vancouver 2010 Olympic Games had aspirations to promote physical activity for all, the evidence suggests that the policy implementation was inconsistent and the overall impact on increasing physical activity and sport participation was limited.
Instruction: Does Prior Surgery for Femoroacetabular Impingement Compromise Hip Arthroplasty Outcomes? Abstracts: abstract_id: PUBMED:27026643 Does Prior Surgery for Femoroacetabular Impingement Compromise Hip Arthroplasty Outcomes? Background: Open and arthroscopic approaches have been described to address femoroacetabular impingement (FAI). Despite good outcomes, there is a subset of patients who subsequently require total hip arthroplasty (THA). However, there is a paucity of data on the outcomes of THA after surgery for FAI. The purpose of this study was to determine whether clinical outcomes of THA are affected by prior open or arthroscopic treatment of FAI. Methods: This case-matched retrospective review included 23 patients (24 hips) who underwent THA after previous surgery for FAI (14 arthroscopic and 10 open) and compared them to 24 matched controls with no history of prior surgery on the operative hip. The controls were matched for age, sex, surgical approach, and implants used, and preoperative modified Harris hip score (mHHS) did not differ between groups. The primary outcome measure was the mHHS. Operative time, blood loss, and the presence of heterotopic ossification after THA were also compared between groups. Results: There was no significant difference in mean mHHS between the FAI treatment group (92.9 ± 12.7) and controls (95.2 ± 6.6; P = .43) at a mean follow-up after THA of 33 (24-70) months. Increased operative times were noted for THA after surgical hip dislocation (SHD; mean 109.3 ± 29.8) compared to controls (mean 88.0 ± 24.2; P < .05). There was no significant difference in blood loss between groups. The occurrence of heterotopic ossification was significantly higher after SHD compared to controls (P < .05). Conclusions: Clinical outcomes after THA are not affected by prior open or arthroscopic procedures for FAI. However, increased operative times and an increased risk of heterotopic ossification were noted after SHD. abstract_id: PUBMED:33221128 Prior Femoroacetabular Osteoplasty Does Not Compromise the Clinical Outcome of Subsequent Total Hip Arthroplasty. Background: Total hip arthroplasty (THA) is the most effective treatment option for patients with symptomatic osteoarthritis after a prior femoroacetabular osteoplasty (FAO). This study evaluated clinical outcomes of THA after a prior FAO and compared the results with a matched group of patients who underwent THA with no prior surgical procedures in the affected hip. Methods: By reviewing our prospectively maintained database, we identified 74 hips (69 patients) that underwent THA after previous FAO between 2004 and 2017. They were matched 1:3 to a control group of primary THA patients with no history of any procedures on the same hip, based on age, sex, body mass index, date of surgery, Charlson comorbidity index, surgical approach, and acetabular and femoral component type. At minimum 2-year follow-up, modified Harris Hip Score, 90-day readmission, and revision THA for any reason were compared between the groups. Results: The median time interval between FAO and subsequent THA was 1.64 years. There was no significant difference in preoperative Harris Hip Score between patients in the case and control cohorts. At the latest follow-up, the median modified Harris Hip Score was 77.6 in the case group and 96.2 in the control, and the difference was not statistically significant. None of the patients in the case group developed infection.
7 patients in the case group required additional procedures at any point, compared with 15 in the control. Conclusion: THA after prior FAO has similar outcomes to primary THA in patients with no prior procedures in the affected hip. THA can be performed safely with excellent outcome in patients with a history of FAO. abstract_id: PUBMED:31005437 Prior Hip Arthroscopy Increases Risk for Perioperative Total Hip Arthroplasty Complications: A Matched-Controlled Study. Background: Arthroscopic hip surgery is becoming increasingly popular for the treatment of femoroacetabular impingement and labral tears. Reports of outcomes of hip arthroscopy converted to total hip arthroplasty (THA) have been limited by small sample sizes. The purpose of this study was to investigate the impact of prior hip arthroscopy on THA complications. Methods: We queried our institutional database from January 2005 and December 2017 and identified 95 hip arthroscopy conversion THAs. A control cohort of 95 primary THA patients was matched by age, gender, and American Society of Anesthesiologists score. Patients were excluded if they had undergone open surgery on the ipsilateral hip. Intraoperative complications, estimated blood loss, operative time, postoperative complications, and need for revision were analyzed. Two separate analyses were performed. The first being intraoperative and immediate postoperative complications through 90-day follow-up and a second separate subanalysis of long-term outcomes on patients with minimum 2-year follow-up. Results: Average time from hip arthroscopy to THA was 29 months (range 2-153). Compared with primary THA controls, conversion patients had longer OR times (122 vs 103 minutes, P = .003). Conversion patients had a higher risk of any intraoperative complication (P = .043) and any postoperative complication (P = .007), with a higher rate of wound complications seen in conversion patients. There was not an increased risk of transfusion (P = .360), infection (P = 1.000), or periprosthetic fracture between groups (P = .150). When comparing THA approaches independent of primary or conversion surgery, there was no difference in intraoperative or postoperative complications (P = .500 and P = .790, respectively). Conclusion: Conversion of prior hip arthroscopy to THA, compared with primary THA, resulted in increased surgical times and increased intraoperative and postoperative complications. Patients should be counseled about the potential increased risks associated with conversion THA after prior hip arthroscopy. abstract_id: PUBMED:31001666 Prior arthroscopic treatment for femoro-acetabular impingement does not compromise hip arthroplasty outcomes: a matched-controlled study with minimum two-year follow-up. Background: Femoro-acetabular impingement (FAI) is known as a predisposing factor in the development of osteoarthritis of the hip. In order to treat this condition, hip arthroscopy is considered as the gold standard in recent years. The number of performed hip arthroscopies has risen immensely. However, a number of patients with poor outcome after hip arthroscopy will require further surgical intervention, sometimes even conversion into THR (total hip replacement). The purpose of this study was to analyze whether outcomes of THR are affected by prior hip arthroscopy in these patients. Methods: Patients who underwent a THR following an ipsilateral hip arthroscopy were matched to a control group of THR patients with no history of prior ipsilateral hip surgery. 
Matching criteria were age, sex, body mass index, implants used, and surgical approach. Modified Harris Hip Score, surgical time, presence of heterotopic ossification, and post-operative complication were prospectively compared at a minimum two year follow-up. Results: Thirty-three THR after hip arthroscopy patients were successfully matched to control patients. There was no significant difference in mHHS between both groups (FAI treatment group 92.8 vs. control group 93.8, p = 0.07). However, FAI treatment group showed a lower mHHS score pre-operatively (48 vs. 60, p = 0.002). There was no significant difference in surgical time and post-operative complication rate. No heterotopic ossification could be found. Conclusion: A prior hip arthroscopy has no affect to clinical outcomes of subsequent THR. abstract_id: PUBMED:34226003 Editorial Commentary: Spine Pathology May Compromise the Results of Hip Arthroscopy: Will Hip Arthroscopy Improve Low Back Pain? Pathology of the lumbar spine and hip commonly occur concurrently. The hip-spine connection has been well documented in the hip arthroplasty literature but until recently has been largely ignored in the setting of hip arthroscopy. Physical examination and diagnostic workup of the lumbosacral junction are warranted to further our understanding of the effects of lumbosacral motion and pathology in patients with concomitant femoroacetabular impingement syndrome. An understanding of this relationship will better allow surgeons to counsel and preoperatively optimize patients undergoing evaluation and treatment of femoroacetabular impingement syndrome. Several studies have reported that patients with a previous lumbar arthrodesis undergoing hip arthroplasty have lower patient-reported outcomes and greater revision rates compared with patients without previous lumbar surgery, and similar to its effect on outcomes after hip arthroplasty, lumbar spine disease can compromise outcomes after hip arthroscopy. On the other side of the coin, hip arthroplasty has been shown to improve low back pain in patients with concomitant hip osteoarthritis. Can the arthroscopic treatment of nonarthritic hip pathology offer a similar result? We won't know unless we look. abstract_id: PUBMED:23636955 Clinical experience with computer navigation in revision total hip arthroplasty. The biomechanically and anatomically correct placement of hip prostheses components is the main challenge in revision hip arthroplasty. The orientation of the cup and stem with the restoration of leg length, offset and hip centre is hampered by the defect situations frequently present. In primary hip arthroplasty, it has been demonstrated that the components can be accurately positioned using computer-navigated procedures. However, such procedures could also be of considerable benefit in revision hip arthroplasty. Systems that not only detect anatomical landmarks using pointers but also use image data for referencing may provide a possible solution for the defect situation. Literature about navigation in revision arthroplasty is very rare. This article comprises general considerations on this subject and presents our experience and possible clinical applications. abstract_id: PUBMED:25842248 Mild to Moderate Hip OA: Joint Preservation or Total Hip Arthroplasty? Treatment of structural hip disease such as FAI and acetabular dysplasia has increased dramatically over the past decade with the goal of preservation of the native hip joint. 
A number of patient and disease specific parameters including the amount of underlying hip osteoarthrosis can help predict success with joint preservation surgery. Total hip arthroplasty remains a very good option in young patients who are not ideal candidates for joint preservation surgery. Future developments will help to better identify ideal surgical candidates and improve understanding of the disease processes. abstract_id: PUBMED:36661250 Outcomes for Treatment of Capsulolabral Adhesions With a Capsular Spacer During Revision Hip Arthroscopy. Background: The presence of adhesions is a common source of pain and dysfunction after hip arthroscopic surgery and an indication for revision surgery. The placement of a capsular spacer in the capsulolabral recess after lysis of adhesions has been developed to treat and prevent the recurrence of adhesions. Purpose: To evaluate patient-reported outcomes (PROs) and survivorship at a minimum of 2 years after revision hip arthroscopic surgery with capsular spacer placement for capsular adhesions. Study Design: Case series; Level of evidence, 4. Methods: Between January 2013 and June 2018, a total of 95 patients (99 hips) aged ≥18 years underwent revision hip arthroscopic surgery for the treatment of capsular adhesions with the placement of a capsular spacer. Overall, 53 patients (56 hips) met the inclusion criteria and had a minimum 2-year follow-up, forming the cohort of this study. Exclusion criteria included confounding metabolic bone diseases (eg, Legg-Calve-Perthes disease, Marfan syndrome), labral deficiency, or advanced osteoarthritis (Tönnis grade 2 or 3). Preoperative and postoperative outcome scores (modified Harris Hip Score [mHHS], Hip Outcome Score-Activities of Daily Living [HOS-ADL], Hip Outcome Score-Sport-Specific Subscale [HOS-SSS], 12-Item Short Form Health Survey [SF-12], and Western Ontario and McMaster Universities Osteoarthritis Index [WOMAC]) were collected and compared in addition to the revision rate, conversion to total hip arthroplasty, and patient satisfaction. Results: The mean age of the cohort was 32 ± 11 years, with 32 female hips (57%) and a median number of previous hip arthroscopic procedures of 1 (range, 1-5). The arthroplasty- and revision-free survivorship rate at 2 years was 91%. Overall, 5 patients (6 hips; 11%) underwent revision surgery at a mean of 2.4 ± 1.4 years after capsular spacer placement, with symptomatic capsular defects being the most common finding. There were 4 patients (7%) who converted to total hip arthroplasty. For hips not requiring subsequent surgery (n = 46), there was a significant improvement in outcome scores except for the SF-12 Mental Component Summary, with rates of achieving the minimal clinically important difference of 70%, 70%, and 65% for the mHHS, HOS-ADL, and HOS-SSS, respectively. Conclusion: Capsular spacers, as part of a systematic approach including lysis of adhesions with early and consistent postoperative physical therapy including circumduction exercises, resulted in improved PROs as well as high arthroplasty- and revision-free survivorship (91%) at a minimum 2-year follow-up. Capsular spacers should be considered in revision hip arthroscopic procedures when an adequate labral volume remains but adhesions continue to be a concern. abstract_id: PUBMED:33032461 Satisfaction, functional outcomes and predictors in hip arthroscopy: a cohort study. 
Introduction: Hip arthroscopy is not always successful, leading to high rates of total hip arthroplasty (THA) after arthroscopy. The aim of this study was to identify risk factors for THA, revision arthroscopy and low patient satisfaction and to compare outcomes of the different procedures of primary hip arthroscopy. Methods: A total of 91 primary hip arthroscopy procedures in 90 patients (66% female) were analysed. Data were gathered from patient files and a questionnaire was sent to patients including the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC), modified Harris Hip Score (mHHS), the EuroQol 5-dimension and questions about return to sports, satisfaction and pain before and after surgery. Using regression analyses, predictive factors for the outcomes were identified. Results: After a mean of 1.6 years, 4 patients (4%) underwent revision arthroscopy and 10 (11%) a THA. Of the responders (62%), 66% of the patients was satisfied to very satisfied about the surgery. Mean mHHS score was 75.3 (SE 1.9) and the mean WOMAC score was 81.0 (SE 2.8). Return to sports rate was 58%. A higher age was a significant predictor for lower satisfaction (p = 0.008) and a longer duration of symptoms was a significant predictor for worse mHHS outcome scores (p = 0.005). Conclusion: A higher age is a predictor for a lower satisfaction and a longer duration of symptoms before surgery has a negative influence on functional outcome. No risk factors for THA or revision arthroscopy were found and there were no significant differences in outcome measurements between the performed surgeries. abstract_id: PUBMED:36543423 Editorial Commentary: Revision Hip Arthroscopy in Patients Older Than 40 Has a Higher Rate of Conversion to Total Hip Arthroplasty-Get It Right the First Time, and the Second. Getting hip arthroscopy right the first time is critical to the overall patient outcome. This involves proper patient selection, with avoidance of arthritis, understanding the pathology of each hip, and properly executing the surgery. Care must be taken to restore labral function and preserve capsule function while accurately resecting pincer or cam impingement. While good results can be achieved in patients older than 40 years of age, an opportunity exists for improved optimization of clinical outcomes. Moreover, revision hip arthroscopy in patients older than 40 years of age has a higher rate of conversion to total hip arthroplasty. Again, get it right the first time, and carefully consider indications for revision hip arthroscopy in patients older than 40 years of age if there is a second time. Answer: The evidence from the provided abstracts suggests that prior surgery for femoroacetabular impingement (FAI) does not significantly compromise the clinical outcomes of subsequent total hip arthroplasty (THA). Several studies have reported that patients who underwent THA after previous surgery for FAI, including both open and arthroscopic procedures, had similar clinical outcomes compared to patients who underwent primary THA with no history of prior surgery on the operative hip (PUBMED:27026643; PUBMED:33221128; PUBMED:31001666). One study specifically noted that there was no significant difference in the modified Harris Hip Score (mHHS) between patients who had THA after FAI treatment and matched controls (PUBMED:27026643). Another study echoed these findings, stating that THA after prior femoroacetabular osteoplasty (FAO) has similar outcomes to primary THA (PUBMED:33221128). 
Additionally, a matched-controlled study found that prior hip arthroscopy did not affect the clinical outcomes of subsequent total hip replacement (THR) (PUBMED:31001666). However, some studies have indicated that there may be an increased risk of certain complications or operative challenges following prior FAI surgery. For example, one study found that THA after surgical hip dislocation (SHD) for FAI treatment was associated with increased operative times and an increased risk of heterotopic ossification compared to controls (PUBMED:27026643). Another study reported that conversion of prior hip arthroscopy to THA resulted in increased surgical times and increased intraoperative and postoperative complications compared with primary THA (PUBMED:31005437). In summary, while prior surgery for FAI may lead to some increased risks or operative challenges, the overall clinical outcomes of THA do not appear to be significantly compromised by previous FAI surgery. Patients should be counseled about the potential increased risks associated with conversion THA after prior hip arthroscopy, but they can also be reassured that the clinical outcomes are generally comparable to those of primary THA (PUBMED:31005437; PUBMED:31001666).
Instruction: Is potential malnutrition associated with septic failure and acute infection after revision total joint arthroplasty? Abstracts: abstract_id: PUBMED:26718779 Is Hypoalbuminemia Associated With Septic Failure and Acute Infection After Revision Total Joint Arthroplasty? A Study of 4517 Patients From the National Surgical Quality Improvement Program. Introduction: Several studies have suggested that malnutrition may be associated with periprosthetic joint infection (PJI) after total joint arthroplasty (TJA). However, strong evidence for this association is lacking. The purpose of the present study is to ask, Is the proportion of patients with hypoalbuminemia (a proxy for malnutrition) higher among patients with a septic indication for revision TJA than patients with an aseptic indication for revision TJA? Secondly, among patients undergoing revision TJA for an aseptic indication, is hypoalbuminemia predictive of subsequent early postoperative PJI? Methods: Patients undergoing revision total hip or knee arthroplasty were identified in the American College of Surgeons National Surgical Quality Improvement Program. Hypoalbuminemia was defined as serum albumin &lt;3.5 g/dL. All analyses were adjusted for differences in demographic, comorbidity, and procedural characteristics. Results: A total of 4517 patients met inclusion criteria, of which 715 (15.8%) underwent revision for a septic indication. Patients undergoing revision for a septic indication had a higher rate of hypoalbuminemia than patients undergoing revision for an aseptic indication (42.8% vs 11.8%; relative risk = 3.6, 95% confidence interval = 3.2-4.1, P &lt; .001). Of the 3802 patients who underwent revision TJA for an aseptic indication, patients with hypoalbuminemia had a higher rate of early PJI after the revision than patients with normal serum albumin levels (4.5% vs 2.1%; relative risk = 2.1, 95% CI = 1.2-3.5, P = .005). Conclusions: These findings add to the growing body of evidence that malnutrition increases the risk of PJI after TJA. Future prospective studies should consider whether correcting malnutrition preoperatively reduces the risk of PJI after TJA. abstract_id: PUBMED:24867449 Is potential malnutrition associated with septic failure and acute infection after revision total joint arthroplasty? Background: Although malnutrition has been hypothesized to increase the risk of periprosthetic joint infection (PJI), strong evidence linking the two is lacking. Questions/purposes: The purposes of this study were to determine (1) if one or more laboratory values suggestive of malnutrition is independently associated with being revised for an infected joint arthroplasty as opposed to for an aseptic failure; (2) the relationship between laboratory parameters suggestive of malnutrition and obesity; and (3) if one or more laboratory parameters suggestive of malnutrition is independently associated with acute PJI complicating an aseptic revision procedure. Methods: Between 2002 and 2010, one surgeon performed 600 revision total joint arthroplasties in 547 patients; during that time, nutritional parameters (including serum albumin, total lymphocyte count, and transferrin) were routinely obtained preoperatively; complete data sets were available on 454 patients (501 procedures [84%]). We compared the frequency of having one or more laboratory parameters suggestive of malnutrition between patients undergoing a revision for septic reasons and aseptic reasons as well as between obese and nonobese patients. 
The 375 aseptic revisions were then assessed for the incidence of acute postoperative infection (within 90 days, diagnosed with Musculoskeletal Infection Society criteria). Multivariate logistic regression modeling was used to evaluate factors independently associated with (1) a septic as opposed to an aseptic mode of failure; and (2) acute postoperative infection after an aseptic revision. Results: Patients in 67 of 126 (53%) revisions for PJI had one or more laboratory parameters suggestive of malnutrition compared with 123 of 375 (33%) undergoing revision for a noninfectious etiology (odds ratio [OR], 2.3 [95% confidence interval, 1.5-3.5]; p&lt;0.001). Patients who were of normal weight at the time of revision had the highest frequency of laboratory parameters suggestive of malnutrition (42 of 82 [51%]), although this was common in obese patients as well (76 of 238 [32%]) (p=0.002). Among the 375 aseptic revisions, 12 developed an acute postoperative infection (3%). The frequency of infection was nine of 123 in the group having one or more laboratory parameters suggestive of malnutrition and three of 252 in the group not having such laboratory parameters (7% versus 1%; p=0.003). Multivariate regression revealed that having laboratory parameters suggestive of malnutrition is independently associated with both chronic PJI (p=0.003; OR, 2.1) and an acute postoperative infection complicating an aseptic revision arthroplasty (p=0.02; OR, 5.9). Conclusions: Having one or more laboratory parameters suggestive of malnutrition is common among patients undergoing revision arthroplasty and is independently associated with both chronic septic failure and acute postoperative infection complicating a revision performed for a noninfectious etiology. Future studies should assess the impact of a standardized screening protocol with subsequent correction of abnormal laboratory parameters suggestive of malnutrition on the risk of PJI to determine a potential causal relationship between the two. Level Of Evidence: Level III, prognostic study. See Guidelines for Authors for a complete description of levels of evidence. abstract_id: PUBMED:33992479 A Novel Biomarker to Screen for Malnutrition: Albumin/Fibrinogen Ratio Predicts Septic Failure and Acute Infection in Patients Who Underwent Revision Total Joint Arthroplasty. Background: This study aimed to investigate the efficacy of the albumin/fibrinogen ratio (AFR) in the assessment of malnutrition and to compare its ability to predict early postoperative periprosthetic joint infection (PJI) in patients with aseptic revisions. Methods: Four hundred sixty-six patients undergoing revision total hip or knee arthroplasty between February 2017 and December 2019 were recruited in this retrospective study. We compared the differences in nutritional parameters between patients undergoing revision for septic and aseptic reasons. We used multivariate logistic regression and assessed the association between nutritional parameters and risk of PJI. 207 patients with aseptic revision were then evaluated for the incidence of acute postoperative infection within 90 days. The predictive ability of nutritional markers was assessed by receiver operating characteristic curves. 
Results: In the multivariate logistic regression analysis, low albumin level (adjusted OR 1.56, 95% CI 1.16-2.08, P = .003), low prognostic nutritional index (PNI) (adjusted OR 1.57, 95% CI 1.01-2.43, P < .043), and low AFR (adjusted OR 2.54, 95% CI 1.92-3.36, P < .001) were independently associated with revision surgery for septic reasons. In accordance with the receiver operating characteristic analysis, the AFR exhibited a greater area under the curve value (0.721) than did the prognostic nutritional index and albumin. An elevated AFR (≥11.7) was significantly associated with old age, joint type, high Charlson comorbidity index, high American Society of Anesthesiologists classification, and diabetes (P < .05). Conclusion: Our findings demonstrated that AFR may be an effective biomarker to assess nutrition status and predict acute PJIs after revision TJA. abstract_id: PUBMED:37071192 Hypoalbuminemia increases the risk of failure following one-stage septic revision for periprosthetic joint infection. Purpose: Malnutrition is a potentially modifiable risk factor of periprosthetic joint infection (PJI). The purpose of this study was to analyze the role of nutritional status as a risk factor for failure after one-stage revision hip or knee arthroplasty for PJI. Methods: Retrospective, single-center, case-control study. Patients with PJI according to the 2018 International Consensus Meeting criteria were evaluated. Minimum follow-up was 4 years. Total lymphocyte count (TLC), albumin values, hemoglobin, C-reactive protein, white blood cell (WBC) count and glucose levels were analyzed. An analysis was also made of the index of malnutrition. Malnutrition was defined as serum albumin < 3.5 g/dL and TLC < 1500/mm3. Septic failure was defined as the presence of local or systemic symptoms of infection and the need of further surgery as a result of persistent PJI. Results: No significant differences were found between increased failure rates after a one-stage revision hip or knee arthroplasty for PJI and TLC, hemoglobin level, WBC count, glucose levels, or malnutrition. Albumin and C-reactive protein values were found to have a positive and significant relationship with failure (p < 0.05). Multivariate logistic regression identified only hypoalbuminemia (serum albumin < 3.5 g/dL) (OR 5.64, 95% CI 1.26-25.18, p = 0.023) as a significant independent risk factor for failure. The receiver operating characteristic (ROC) curve for the model yielded an area under the curve of 0.67. Conclusion: TLC, hemoglobin, WBC count, glucose levels, and malnutrition, understood as the combination of albumin and TLC, were not found to be statistically significant risk factors for failure after single-stage revision for PJI. However, albumin < 3.5 g/dL alone was a statistically significant risk factor for failure after single-stage revision for PJI. As hypoalbuminemia seems to influence the failure rate, it is advisable to measure albumin levels in preoperative workups. abstract_id: PUBMED:37524637 Outcomes in revision knee arthroplasty: Preventing reoperation for infection. Keynote lecture - BASK annual congress 2023. Revision total knee arthroplasty (TKA) patients have a lower survival rate and lower post-surgical outcomes compared to primary TKA patients. Infection and aseptic loosening are the most common reasons for revision and re-revision TKAs, with infection accounting for nearly half of re-revision cases.
To prevent infection, patient optimization addressing obesity, diabetes, malnutrition, and smoking cessation is crucial. Advancements in irrigation solutions, antibiotic-impregnated bone fillers, bacteriophage therapy, and electrochemical therapy hold promise for preventing infection. Technical strategies such as obtaining sufficient component fixation, joint line restoration, and using robot assistance may improve revision TKA outcomes. As the burden of revision TKA continues to rise, substantial efforts remain for mitigating future revision TKAs and their associated complications. abstract_id: PUBMED:36693515 Hypoalbuminemia Predicts Failure of Two-Stage Exchange for Chronic Periprosthetic Joint Infection of the Hip and Knee. Background: Nutritionally compromised patients, with preoperative serum albumin (SAB) < 3.5 g/dL, are at higher risk for periprosthetic joint infection (PJI) in total joint arthroplasty. The relationship between nutritional status and PJI treatment success is unknown. The purpose of this study was to examine the relationship between preresection nutrition and success after first-stage resection in planned two-stage exchange for PJI. Methods: A retrospective review was performed on 418 patients who had first-stage resection of a planned two-stage exchange for chronic hip or knee PJI between 2014 and 2018. A total of 157 patients (58 hips and 99 knees) were included who completed the first stage, had an available preop SAB, and had a 2-year follow-up. Failure was defined as persistent infection or repeat surgery for infection after resection. Demographic and surgical data were abstracted and analyzed. Results: Among knee patients with preop SAB >3.5 g/dL, the failure rate was 32% (15 of 47) versus a 48% (25 of 40) failure rate when SAB <3.5 g/dL (P = .10). Similarly, the failure rate among hip patients with preop SAB >3.5 g/dL was 12.5% (3 of 24) versus 44% (15 of 34) for hip patients with SAB <3.5 g/dL (P = .01). Multivariable regression results indicated that patients with SAB < 3.5 g/dL (P = .0143) and Musculoskeletal Infection Society host type C (P = .0316) were at an increased risk of failure. Conclusion: Low preoperative SAB and Musculoskeletal Infection Society host type C are independent risk factors for failure following first-stage resection in planned two-stage exchange for PJI. Efforts to nutritionally optimize PJI patients, when possible, may improve the outcome of two-stage exchange. abstract_id: PUBMED:24973087 CORR Insights®: Is potential malnutrition associated with septic failure and acute infection after revision total joint arthroplasty? N/A abstract_id: PUBMED:28529854 Pseudomonas oryzihabitans Infected Total Hip Arthroplasty. Pseudomonas oryzihabitans is a saprophytic gram-negative microorganism usually found in damp environments, only occasionally responsible for human pathology. Infection mainly occurs in malnourished, immunocompromised individuals with indwelling catheters. There is no previous published record of infection after joint arthroplasty. To enhance the literature, in this article we report a patient with a Pseudomonas oryzihabitans infected total hip arthroplasty, and discuss the diagnosis and management of this unusual infection. abstract_id: PUBMED:28807468 Are Revision Hip Arthroplasty Patients at Higher Risk for Venous Thromboembolic Events Than Primary Hip Arthroplasty Patients?
Background: The purpose of this study is to determine whether revision total hip arthroplasty (THA) is associated with increased rates of deep vein thrombosis (DVT) and pulmonary embolism (PE) when compared to primary THA. Methods: We queried the American College of Surgeons National Surgical Quality Improvement Program database for all primary and revision THA cases from 2011 to 2014. Demographic data, medical comorbidities, and venous thromboembolic rates within 30 days of surgery were compared between the primary and revision THA groups. Results: Revision THA had a higher rate of DVT than the primary THA (0.6% vs 0.4%, P = .016), but there was no difference in the rate of PE (0.3% vs 0.2%, P = .116). When controlling for confounding variables, revision surgery alone was not a risk factor for DVT (odds ratio 0.833, 95% confidence interval 0.564-1.232) or PE (odds ratio 1.009, 95% confidence interval 0.630-1.616). Independent risk factors for DVT include age >70 years, malnutrition, infection, operating time >3 hours, general anesthesia, American Society of Anesthesiologists classification 4 or greater, and kidney disease (all P < .05). Probability of DVT ranged from 0.2% with zero risk factors to 10% with all risk factors. Independent risk factors for PE included age >70 years, African American ethnicity, and operating time >3 hours (all P < .05) with probabilities of PE postoperatively ranging from 0.2% to 1.1% with all risk factors. Conclusion: Revision surgery alone is not a risk factor for venous thromboembolism after hip arthroplasty. Surgeons should weigh the risks and benefits of more aggressive anticoagulation in certain high-risk patients. abstract_id: PUBMED:31138503 Effect of Preoperative Dental Extraction on Postoperative Complications After Total Joint Arthroplasty. Background: Poor dental hygiene has historically been considered a potential risk factor for infection in total joint arthroplasty (TJA), which has resulted in the common practice of requiring preoperative dental clearance and often results in dental extractions. However, the association between dental pathology and periprosthetic joint infection (PJI) has recently been called into question. Methods: A consecutive series of 2457 primary total hip and total knee arthroplasties were retrospectively reviewed. Documented dental evaluation was found in 1944 (79.1%) procedures, 223 (11.5%) of which had extraction of at least 1 tooth. No documented dental evaluation was found in 369 (15.0%) patients. Results: The overall complication rate was 3.87% with an overall PJI rate of 1.51%. There was no statistically significant association between a complication and procedure type, sex, preoperative dental evaluation or extraction, diabetes status, immunosuppression, malnutrition, or age. However, our sample demonstrated a higher complication rate among patients undergoing dental extraction as well as for those with diabetes and immunosuppressed patients. Conclusion: Our findings suggest that routine formal dental clearance for all TJA patients may not be necessary. Additionally, patients with poor oral hygiene may not have elevated risk of postoperative PJI, and preoperative tooth extraction may represent an unnecessary step for patients undergoing elective TJA. Answer: Yes, potential malnutrition is associated with septic failure and acute infection after revision total joint arthroplasty (TJA).
Several studies have found a significant relationship between malnutrition and the risk of periprosthetic joint infection (PJI) following revision TJA. One study found that patients undergoing revision for a septic indication had a higher rate of hypoalbuminemia, a marker for malnutrition, than patients undergoing revision for an aseptic indication (42.8% vs 11.8%). Additionally, among patients undergoing revision TJA for an aseptic indication, those with hypoalbuminemia had a higher rate of early PJI after the revision than patients with normal serum albumin levels (4.5% vs 2.1%) (PUBMED:26718779). Another study reported that patients with one or more laboratory parameters suggestive of malnutrition had higher odds of being revised for an infected joint arthroplasty as opposed to for an aseptic failure. It also found that having laboratory parameters suggestive of malnutrition is independently associated with both chronic PJI and acute postoperative infection complicating an aseptic revision procedure (PUBMED:24867449). The albumin/fibrinogen ratio (AFR) was also identified as an effective biomarker to assess nutrition status and predict acute PJIs after revision TJA. Low albumin level, low prognostic nutritional index (PNI), and low AFR were independently associated with revision surgery for septic reasons (PUBMED:33992479). Furthermore, hypoalbuminemia was identified as a significant independent risk factor for failure after one-stage revision for PJI (PUBMED:37071192), and low preoperative serum albumin was an independent risk factor for failure following first-stage resection in planned two-stage exchange for PJI (PUBMED:36693515). These findings suggest that malnutrition, as indicated by hypoalbuminemia and other laboratory markers, is a risk factor for septic failure and acute infection after revision TJA. Therefore, screening for and potentially correcting malnutrition preoperatively may reduce the risk of PJI after revision TJA.
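Note on the arithmetic behind the figures above: the relative risk of roughly 3.6 reported in PUBMED:26718779 follows directly from the two quoted proportions (42.8% of 715 septic revisions versus 11.8% of 3,802 aseptic revisions). The short Python sketch below reproduces that calculation together with a standard log-scale Wald confidence interval; the patient counts are back-calculated from the published percentages and are approximations for illustration, not values taken from the study's tables.

    import math

    # Approximate counts reconstructed from the percentages reported in
    # PUBMED:26718779 (715 septic and 3802 aseptic revisions); the exact
    # tabulated counts are not available here, so these are illustrative.
    septic_total, aseptic_total = 715, 3802
    septic_hypoalb = round(0.428 * septic_total)    # ~306 patients
    aseptic_hypoalb = round(0.118 * aseptic_total)  # ~449 patients

    p_septic = septic_hypoalb / septic_total
    p_aseptic = aseptic_hypoalb / aseptic_total
    rr = p_septic / p_aseptic  # prevalence ratio ("relative risk") of hypoalbuminemia

    # 95% confidence interval for the ratio, computed on the log scale (Wald method)
    se_log_rr = math.sqrt((1 - p_septic) / septic_hypoalb + (1 - p_aseptic) / aseptic_hypoalb)
    lower = math.exp(math.log(rr) - 1.96 * se_log_rr)
    upper = math.exp(math.log(rr) + 1.96 * se_log_rr)

    print(f"RR = {rr:.2f}, 95% CI ({lower:.2f}, {upper:.2f})")
    # Prints approximately RR = 3.62, 95% CI (3.21, 4.09), consistent with the
    # reported relative risk of 3.6 (95% CI 3.2-4.1).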
Instruction: Does thigh compression improve venous hemodynamics in chronic venous insufficiency? Abstracts: abstract_id: PUBMED:12422092 Does thigh compression improve venous hemodynamics in chronic venous insufficiency? Objective: The aim of this study was to investigate the hemodynamic effects of thigh compression in patients with deep venous incompetence. Patients And Methods: This diagnostic test study was set in a municipal general hospital. Twelve patients with venous leg ulcers (CEAP classification, C6 Es Ad Pr; four men and eight women), with a mean age of 56.5 +/- 16.8 years, with popliteal venous reflux of more than 1 second detected with duplex scan, underwent investigation with the following methods: 1, the pressure exerted under thigh-length compression stockings class II and short-stretch adhesive compression bandages was measured with an MST tester (Salzmann, Switzerland) and a CCS 1000 device (Juzo, Germany), respectively; 2, the great saphenous vein and the femoral vein on the thigh were compressed with a pneumatic cuff (0, 20, 40, and 60 mm Hg) containing a window through which the diameters of these veins could be measured with duplex ultrasonography; and 3, with the same thigh-cuff occlusion procedure, the venous filling index (VFI) for each experiment was measured with air plethysmography. These values reflected the presence and extent of venous reflux in each experiment depending on the degree of venous narrowing. Results: The mean pressure of a class II compression stocking was about 15 mm Hg at the thigh level, and adhesive bandages achieved a pressure of more than 40 mm Hg in the same location. A statistically significant reduction of the diameters of the great saphenous vein and the femoral vein could be obtained only when the cuff pressure on the thigh was equal to or higher than 40 mm Hg (P < .001). A reduction of the venous reflux (VFI) was achieved only with a thigh pressure of 60 mm Hg (P < .001). No significant reduction of VFI was seen with a thigh pressure in the range of the class II stockings. Previous investigations have shown that, in patients with deep venous incompetence, a pressure cuff on the thigh with 60 to 80 mm Hg is able to reduce ambulatory venous hypertension. Conclusion: Thigh compression as exerted with class II thigh-length compression stockings is not able to significantly reduce venous diameter or venous reflux. However, with a pressure of 40 to 60 mm Hg on the thigh that can be achieved with strongly applied short-stretch bandages, considerable hemodynamic improvement, including reduced venous reflux, can be obtained in patients with severe stages of chronic venous insufficiency from deep vein incompetence. The practical value of these preliminary findings should be investigated with further clinical trials. abstract_id: PUBMED:27886005 Compression in the treatment of chronic venous insufficiency: Efficacy depending on the length of the stocking. Background: Below-knee two-component compression stockings (AD) have been shown to be effective for compression treatment of venous leg ulcers. Up-to-groin, thigh-length stockings (AG) may enhance clinical effects; however, the wear comfort of these stockings may be affected. Objective: Venous haemodynamics in relation to the length of compression stockings. Methods: A two-component AD stocking (37 mmHg) and two thigh length stockings (AG 37, with an interface pressure of 37 mmHg; AG 45, with an interface pressure of 45 mmHg) were tested by 16 patients with CVI.
Leg volume changes, venous ejection fraction, and venous filling index were measured, whilst quality of life and wear comfort were surveyed by questionnaires. Results: Volume of both the lower limb and the thigh was reduced by AG stockings, whereas AD stockings reduced only the volume of the lower limb and increased thigh volume. Venous hemodynamics, ejection fraction, and filling index were improved by both AG and AD stockings; AG, however, was superior to AD. Quality of life and comfort of the stockings were assessed as good for AG 37 mmHg, AG 45 mmHg and AD 37 mmHg. Conclusions: Thigh-length two-component stockings (AG) were shown to be superior to below-knee stockings (AD) with regard to volume reduction and venous hemodynamics, yet wear comfort was not impaired. These results imply that healing of trophic skin changes (e.g. ulcers) will be faster when thigh-length two-component stockings are worn. abstract_id: PUBMED:12563216 Effect of elastic compression stockings on venous hemodynamics during walking. Purpose: Venous hemodynamics evaluated during walking better reflect changes that occur under active physiologic conditions than do conventional static modes of exercise such as tip-toe exercise, knee bending, or dorsiflexion. We prospectively studied the efficacy of air-plethysmography (APG) in monitoring venous hemodynamics during ambulation, and with this method we determined the hemodynamic effects of graduated elastic compression stockings on the lower limb during walking at various speeds. Methods: The residual volume fraction (RVF%) during treadmill walking was monitored with APG in 10 limbs with primary chronic venous insufficiency (CVI) (CEAP 2-4) at four speeds (1.0, 1.5, 2.0 and 2.5 km/h consecutively), with and without elastic compression (21 mm Hg at the ankle). The method was validated in comparison with standard APG, which is based on tip-toe exercise. RVF obtained during treadmill walking at 1.5 km/h was correlated with RVF measured with standard APG in 30 subjects: 12 healthy volunteers, 11 patients with primary CVI, and 7 postthrombotic limbs. Data were analyzed with nonparametric statistics. Results: RVF measurements during walking were reproduced with an intra-day coefficient of variation of 5.1% to 16.5%. RVF during walking correlated well with RVF during standard APG (tip-toe) (r = 0.5, P = .004). At each of the investigated walking speeds, stockings improved venous hemodynamics by decreasing RVF, from a median of 50.5% without stockings to 40.5% with stockings at 1.0 km/h (19.8% decrease), from 49% to 39.5% at 1.5 km/h (19.4% decrease), from 50.5% to 41% at 2.0 km/h (18.8% decrease), and from 53% to 45.5% at 2.5 km/h (14.2% decrease) (all speeds, P < .02). Efficacy of the stockings in decreasing RVF (percent change in RVF) was similar across the spectrum of examined speeds (P = .47). During walking with elastic stockings, nominal RVF values were also similar across the spectrum of walking speeds, except at 2.5 km/h (P = .012). During walking without stockings, RVF did not change with treadmill speed, nor did it differ from that obtained with conventional APG (tip-toe) (P = .46). The percentage decrease in RVF generated with elastic stockings correlated with the venous filling index (r = 0.73, P = .017) at 1.0 km/h. Conclusions: APG is a reproducible and valid method for monitoring venous hemodynamics during walking.
Graduated elastic compression stockings significantly improved venous hemodynamics by reducing RVF in limbs with primary CVI at all examined walking speeds (1.0 to 2.5 km/h). The effect was linearly correlated with the amount of reflux (1.0 km/h). The modified application of APG during walking offers a new noninvasive method for assessment of venous hemodynamics in limbs with CVI, enabling quantification of the actual effect of elastic compression therapy during ambulation. abstract_id: PUBMED:8918324 Inelastic versus elastic leg compression in chronic venous insufficiency: a comparison of limb size and venous hemodynamics. Purpose: Compression of the lower extremity is the mainstay of therapy in patients who have chronic venous insufficiency. We evaluated the ability of two forms of compression-elastic stockings and an inelastic compression garment-with air plethysmography to determine how well they corrected abnormal deep venous hemodynamics in patients who had class III chronic venous insufficiency and how well this correction was sustained over time. Methods: Patients had measurements taken with no compression, with a 30 to 40 mm Hg below-knee stocking, and with the inelastic compression garment 2 hours and 6 hours after donning the garments. Therapies were compared with baseline and with themselves over time. Results: Inelastic compression maintained limb size and reduced venous volume better than no compression or stockings over time (ankle circumference at 2 hr vs 6 hr: baseline, 24.7 +/- 7 cm vs 26.1 +/- 1.1 cm; stocking, 23.9 +/- 1.1 cm vs 26.2 +/- 1.2 cm; inelastic compression, 25.4 +/- 1.1 cm vs 25.4 +/- 0.9 cm; venous volume at 2 hr vs 6 hr: baseline, 97.5 +/- 14.1 ml vs 105.2 +/- 17.9 ml; stocking, 112.4 +/- 29.7 ml vs 77.5 +/- 13.2 ml; inelastic compression, 72.2 +/- 14.1 ml vs 56.1 +/- 10.2 ml). At 6 hours, the ejection fraction was increased and the venous filling index was significantly less with inelastic compression compared with the stocking and baseline (ejection fraction at 6 hr: baseline, 61.6% +/- 6.9%; stocking, 75.9% +/- 17.7%; inelastic compression, 78.8% +/- 12.2%). Conclusions: Inelastic compression has a significant effect on deep venous hemodynamics by decreasing venous reflux and improving calf muscle pump function when compared with compression stockings, which may exert their primary effect on the superficial venous system. abstract_id: PUBMED:31845297 Innovations in medical compression therapy For the treatment of phlebological and lymphological diseases as well as constitutional edema diseases, a discussion of innovative concepts of medical compression therapy is essential. It is recommended that medical compression stockings should always be prescribed based on symptoms and with the lowest effective interface pressure to optimize the tolerability of compression therapy. Likewise, medical compression stockings with an integrated care formula, but also the application of additional skincare can improve the quality of life and compliance in patients with chronic venous insufficiency. Optimization of ulcer therapy can be achieved by using two-component compression stocking systems. These consist of an understocking and a firm outer compression stocking, which improve the venous and capillary hemodynamics with good wearing comfort and lead to the healing of venous ulcerations. Multicomponent compression bandages and short stretch bandages are proven in the decongestion phase of edema. 
Multicomponent bandages ensure a sustained interface pressure for at least 5 days and are ideal for outpatient treatment with less frequent dressing changes. For compression therapy in patients with arterial-venous leg ulcers (ABI [ankle brachial index] >0.5), specially developed "lite" versions of the multicomponent dressings can be used. abstract_id: PUBMED:12515008 Principles of surgical treatment of varicose veins with regard to new findings on venous hemodynamics. Pressure changes occurring during the activity of the calf muscle venous pump are the driving force of venous hemodynamics in the lower extremity. An ambulatory pressure gradient arises between the veins of the thigh and the lower leg as a consequence of pumping up the blood from the deep veins of the lower leg, where the venous pressure decreases, into the popliteal and femoral vein, where no pressure decrease occurs. Therefore, venous reflux can only take place in an incompetent vein connecting the femoral, profunda femoris, popliteal or iliac vein with one of the deep veins of the lower leg. Calf perforators represent the so-called re-entry points and can't become the source of reflux. Venous reflux disturbs venous hemodynamics to a varying degree, dependent on the magnitude of reflux volume. When strong enough, it can produce the gravest form of chronic venous insufficiency even if localised in superficial veins. The magnitude of reflux volume, not the localisation of reflux in deep or superficial veins, is the most important hemodynamic factor causing venous disturbance. The goal of varicose vein surgery is to remove reflux and visible varicose veins with the aim to achieve the most favorable hemodynamic and cosmetic results. Crossectomy is a very important step, because it is able to repair even the most pronounced hemodynamic disorder and restore normal hemodynamic conditions. If stripping of the incompetent saphenous trunk on the thigh is not performed in addition to crossectomy, the saphenous trunk continues to be patent and incompetent after surgery in most patients and provokes recurrent reflux. But nor can crossectomy combined with stripping avert the risk of recurrence definitively, because varicose veins are a dynamic disease with a distinct tendency to recurrence. A correctly performed operation can reduce the recurrence rate and postpone its occurrence. A hemodynamic factor, the ambulatory pressure gradient, probably triggers the process leading to recurrence. When varicose veins recur, the recurrent reflux volume remains significantly lower for many years of follow-up as compared with the situation before surgery. External banding of an incompetent valve in the long saphenous vein and the CHIVA method are less efficient in comparison with standard surgery (crossectomy plus stripping). Sclerotherapy is a useful supplement to surgery during follow-up, as it is able to improve significantly the hemodynamic situation. This improvement is only transitory, but sclerotherapy can be repeated and the improvement re-established, if necessary, during follow-up. abstract_id: PUBMED:27594309 Progressive compression versus graduated compression for the management of venous insufficiency. Venous leg ulceration (VLU) is a chronic condition associated with chronic venous insufficiency (CVI), where the most frequent complication is recurrence of ulceration after healing. Traditionally, graduated compression therapy has been shown to increase healing rates and also to reduce recurrence of VLU.
Graduated compression occurs because the circumference of the limb is narrower at the ankle, thereby producing a higher pressure than at the calf, which is wider, creating a lower pressure. This phenomenon is explained by the principle known as Laplace's Law. Recently, the view that compression therapy must provide a graduated pressure gradient has been challenged. However, few studies so far have focused on the potential benefits of progressive compression where the pressure profile is inverted. This article will examine the contemporary concept that progressive compression may be as effective as traditional graduated compression therapy for the management of CVI. abstract_id: PUBMED:1547069 The hemodynamics of venous ulceration. Venous ulceration is the result of progressive chronic venous insufficiency, the pathophysiology of which is complex and incompletely understood. Ambulatory venous hypertension in this disease has been well-documented; however, relatively little attention has been directed toward other parameters of venous function. This study evaluates a spectrum of hemodynamic variables and the degree to which they are altered in patients with venous ulceration, and correlates ambulatory venous pressure (AVP) with the noninvasive estimate of this parameter. Air-plethysmography was used to evaluate 36 ulcerated extremities from 30 patients with chronic venous disease and 80 asymptomatic extremities from 54 patients. This technique measures the functional venous volume (VV), assesses valvular function [Venous Filling Index (VFI)], evaluates the efficiency of the calf muscle-pump [Ejection Fraction (EF)], and provides an estimation of ambulatory venous pressure [Residual Volume Fraction (RVF)]. In addition, AVP's were recorded in 13 asymptomatic extremities from 10 patients and 16 ulcerated extremities from 14 patients with chronic venous disease. Significant differences existed between the two groups for all of the hemodynamic parameters. Ulcerated extremities had greater venous volumes, displayed marked deterioration in valvular competence and calf muscle-pump function, and showed significant ambulatory venous hypertension compared to the asymptomatic group. Additionally, the relationship between RVF and AVP appeared linear, with a correlation coefficient of 0.87. Air-plethysmography currently provides the most complete evaluation of venous hemodynamics and should improve our understanding of the pathophysiology of chronic-venous disease. abstract_id: PUBMED:24840970 Management of venous ulcers. Chronic venous insufficiency (CVI) results from venous hypertension secondary to superficial or deep venous valvular reflux, as well as venous obstruction. The most severe clinical manifestation of CVI is venous leg ulceration that can result in significant morbidity, including venous gangrene and amputation, albeit rare. Treatment modalities are aimed at reducing venous hypertension. Diuretic therapy, although widely used, only provides short-term improvement of the edema but provides no long-term benefit. Compression therapy is the cornerstone in the management of CVI. Compression can be achieved using compression bandaging, compression pumps, or graduated compression stockings. Topical steroid creams may reduce inflammation, venous eczema, and pain in the short term, but they can be detrimental in the long run. 
Apligraf (a living, bilayered, cell-based product) in conjunction with compression therapy was noted to be more effective in healing venous leg ulcerations, when compared with treatment with compression therapy and zinc paste. Endovascular and surgical techniques that minimize valvular reflux and relieve venous obstruction improve venous hemodynamics, promoting wound healing. abstract_id: PUBMED:21146339 Digital simulation of venous and lymphatic edema and the effects of compression Objective: Compression therapy for venous and lymphatic edema of the lower limbs raises a major challenge concerning the optimal pressure ensuring both efficacy and patient compliance. We present a mathematical model of tissue fluid transfers which is aimed at determining the lowest pressure required to prevent edema. Methods: The model is based on a set of equations, derived from published experimental data, which describe the fluid and solute transfers between blood, interstitium and lymphatics, and the mechanical properties of interstitial compartment. It enables us to compute the changes in tissue volume, at the ankle level, resulting from increases of capillary pressure in case of venous insufficiency, and from an impairment of lymph drainage; as well as the effect of various external pressures upon this volume. Results: An increase of capillary pressure to 40 and 50 mmHg results in an ankle edema which is completely prevented by an external pressure of 10 mmHg. This result is in keeping with the observation by Partsch that vesperal leg swelling is reduced by low compression stockings. The dose effect reported in this study is also found by simulation. The complete blockade of lymphatic return leads to an edema, the prevention of which requires a counterpressure of at least 30 mmHg. When an increase of venous pressure to 60 mmHg, and a reduction by 2/3 of lymphatic drainage are combined, simulating chronic venous insufficiency, the resulting edema is prevented by a 25 mmHg counterpressure. Conclusion: These first results of simulation are in reasonable agreement with clinical experience. As nearly every combination of disturbances may be simulated, the computer model could help to understand and treat edemas, as long as their cause can be identified. Answer: Yes, thigh compression can improve venous hemodynamics in chronic venous insufficiency. A study investigating the hemodynamic effects of thigh compression in patients with deep venous incompetence found that thigh compression with pressures of 40 to 60 mm Hg, which can be achieved with strongly applied short-stretch bandages, resulted in considerable hemodynamic improvement, including reduced venous reflux, in patients with severe stages of chronic venous insufficiency from deep vein incompetence (PUBMED:12422092). Another study showed that thigh-length two-component stockings (AG) were superior to below-knee stockings (AD) with regard to volume reduction and venous hemodynamics, without impairing wear comfort (PUBMED:27886005). Additionally, graduated elastic compression stockings were found to significantly improve venous hemodynamics by reducing the residual volume fraction (RVF) in limbs with primary chronic venous insufficiency (CVI) at all examined walking speeds (PUBMED:12563216). In comparison, inelastic compression was shown to have a significant effect on deep venous hemodynamics by decreasing venous reflux and improving calf muscle pump function when compared with compression stockings (PUBMED:8918324). 
Furthermore, innovations in medical compression therapy suggest that medical compression stockings should be prescribed based on symptoms and with the lowest effective interface pressure to optimize the tolerability of compression therapy (PUBMED:31845297). The principles of surgical treatment of varicose veins also emphasize the importance of addressing venous hemodynamics in the lower extremity (PUBMED:12515008). Progressive compression has been discussed as potentially as effective as traditional graduated compression therapy for the management of CVI (PUBMED:27594309). Lastly, a mathematical model of tissue fluid transfers aimed at determining the lowest pressure required to prevent edema supports the use of compression therapy for venous and lymphatic edema of the lower limbs (PUBMED:21146339).
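A brief aside on the physics invoked above: the graduated-pressure behaviour discussed in PUBMED:27594309 (and implicit in the stocking studies) rests on Laplace's Law, under which, for a given fabric tension, the interface pressure rises as the local limb radius falls. The following Python sketch illustrates that relationship using the simplified cylinder approximation P ∝ T/r; the circumference values are assumed, illustrative figures, not measurements from the cited studies.

    import math

    def relative_interface_pressure(tension, circumference_cm):
        """Interface pressure per unit bandage width, up to a constant factor,
        using the simplified Laplace relation P ~ T / r for a cylindrical limb."""
        radius_cm = circumference_cm / (2 * math.pi)
        return tension / radius_cm

    # Same fabric tension along the whole limb (arbitrary units);
    # the ankle and calf circumferences below are assumed values.
    tension = 1.0
    ankle_pressure = relative_interface_pressure(tension, circumference_cm=22.0)
    calf_pressure = relative_interface_pressure(tension, circumference_cm=36.0)

    print(f"ankle/calf pressure ratio = {ankle_pressure / calf_pressure:.2f}")
    # ~1.64: for the same tension, the narrower ankle receives the higher pressure,
    # which is exactly the graduated profile described in the abstracts.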
Instruction: Is the histamine skin test inhibited by prednisone? Abstracts: abstract_id: PUBMED:81630 Immediate (IgE-mediated) skin testing in the diagnosis of allergic disease. Immediate (IgE-mediated) skin tests are widely used in the diagnosis of allergic diseases. Skin tests correlate well with more specialized studies (RAST, histamine release and provocation tests) in the diagnosis of allergic disease. Lack of standardization and quantitation of biologic potency of allergens make critical comparison of skin test results impossible. A survey of practicing allergists yielded widely divergent opinions concerning the effect of anti-allergic drugs on skin tests. The results of published studies indicate that only antihistamines cause significant depression of skin reactivity. Therefore, therapy for asthma may be continued while diagnostic skin testing is in progress, avoiding the possible morbidity associated with discontinuing pharmacologic therapy. abstract_id: PUBMED:6108129 A comparison of the efficacy of ketotifen (HC 20-511) with sodium cromoglycate (SCG) in skin test positive asthma. 1 Ketotifen (HC 20-511 Sandoz) 1 mg twice daily for 12 weeks was found to be equivalent to sodium cromoglycate (SCG) 20 mg four times daily for 12 weeks in 35 skin test positive asthmatic patients in a randomised double-blind cross-over study. 2 No statistically significant difference between the two drugs in mean values for daily peak flow rates, diary card scores and spirometry at monthly visits was demonstrated. 3 Treatment failures as judged by severe asthma requiring withdrawal from the trial or addition of short courses of prednisone occurred in three patients on each drug. 4 Sedation was noted by 10 patients onHC 20-511 and 5 on SCG. 5 Weight loss was noted in those patients on SCG, but not those on HC 20-511. abstract_id: PUBMED:7505536 Pathogenesis and pharmacologic modulation of the cutaneous late-phase reaction. Late-phase reactions that occur in response to local antigen challenge have been demonstrated in the skin, nose, and lungs of humans. Late-phase reactions in these organs involve many mechanisms important in diseases such as atopic dermatitis, allergic rhinitis, and asthma. For this discussion, late-phase reaction research involving the skin model will be presented. Past research focused on the inflammatory components of late-phase reactions, but results were confounded by abrasion-related nonspecific inflammatory changes. Newer, less traumatic skin test methods have clarified the involvement of humoral and cellular elements in cutaneous late-phase reactions and demonstrated the efficacy of agents commonly used to treat allergenic conditions. Antigen challenge triggers the local release of histamine, prostaglandin D2, leukotriene C4, and tryptase. Dermal infiltrate abounds with eosinophils, basophils, neutrophils, and mononuclear cells several hours after challenge. Several investigators have shown that soluble proinflammatory cytokines are produced by cells at the antigen challenged site. Several of these cytokines may activate eosinophils and basophils, which release mediators of inflammation. Some of the newer nonsedating antihistamines appear to possess anti-inflammatory properties that may be unrelated to histamine antagonism and that might alter late inflammatory events of allergic disease. abstract_id: PUBMED:16464759 Skin lesions induced by bortezomib. We report on six cases of skin lesions induced by bortezomib in patients treated for relapsed multiple myeloma. 
The folliculitis-like rash appeared in the second cycle of bortezomib therapy. Therapy with prednisone led to rapid resolution of the skin lesions. Prednisone 10 mg before each infusion of bortezomib was necessary to prevent recurrence of the rash while antihistamines alone were ineffective. abstract_id: PUBMED:4401002 The effect of prednisone and antihistamines on patch test reactions N/A abstract_id: PUBMED:2877560 Dermatologic emergencies. Some skin conditions, such as blistering diseases, guttate and pustular psoriasis, exfoliative erythroderma, reactive erythemas and acute contact dermatitis have an acute onset and require prompt intervention. Acute vesicular or weeping dermatoses are best treated with cool, wet compresses and topical lotions, whereas dry, scaly skin conditions should be treated with hydrating measures and topical ointments. Oral antihistamines and topical and systemic steroids are the mainstays of treatment for these severe dermatoses. abstract_id: PUBMED:4400703 A study of the effect of prednisone and an antihistamine on patch test reactions. N/A abstract_id: PUBMED:16966021 Cutaneous vasculitis: diagnosis and management. Vasculitis is histologically defined as inflammatory cell infiltration and destruction of blood vessels. Vasculitis is classified as primary (idiopathic, eg, cutaneous leukocytoclastic angiitis, Wegener's granulomatosis) or secondary, a manifestation of connective tissue diseases, infections, adverse drug eruptions, or a paraneoplastic phenomenon. Cutaneous vasculitis, manifested as urticaria, purpura, hemorrhagic vesicles, ulcers, nodules, livedo, infarcts, or digital gangrene, is a frequent and often significant component of many systemic vasculitic syndromes such as lupus or rheumatoid vasculitis and antineutrophil cytoplasmic antibody-associated primary vasculitic syndromes such as Churg-Strauss syndrome. In most instances, cutaneous vasculitis represents a self-limited, single-episode phenomenon, the treatment of which consists of general measures such as leg elevation, warming, avoidance of standing, cold temperatures and tight fitting clothing, and therapy with antihistamines, aspirin, or nonsteroidal anti-inflammatory drugs. More extensive therapy is indicated for symptomatic, recurrent, extensive, and persistent skin disease or coexistence of systemic disease. For mild recurrent or persistent disease, colchicine and dapsone are first-choice agents. Severe cutaneous and systemic disease requires more potent immunosuppression (prednisone plus azathioprine, methotrexate, cyclophosphamide, cyclosporine, or mycophenolate mofetil). In cases of refractory vasculitis, plasmapheresis and intravenous immunoglobulin are viable considerations. The new biologic therapies that work via cytokine blockade or lymphocyte depletion such as tumor alpha inhibitor infliximab and the anti-B-cell antibody rituximab, respectively, are showing benefit in certain settings such as Wegener's granulomatosis, antineutrophil cytoplasmic antibody-associated vasculitis, Behçet's disease, and cryoglobulinemic vasculitis. abstract_id: PUBMED:6283956 Adrenocortical suppression in the dog given a single intramuscular dose of prednisone or triamcinolone acetonide. Adrenocortical function was assessed in dogs given a single intramuscular dose of either prednisone or triamcinolone acetonide (TCA; or saline solution to controls) to determine the duration of adrenocortical suppression caused by 2 commonly used glucocorticoids. 
The glucocorticoids were administered at recommended therapeutic doses; therefore, dogs given prednisone received a greater amount of glucocorticoid activity than did dogs given TCA. Basal and ACTH-stimulated plasma cortisol concentrations, as determined by radioimmunoassay, were obtained once a week. Total intravascular eosinophil concentration and skin responses to intradermally injected histamine phosphate were quantitated. Dogs given TCA showed suppressed basal and ACTH-stimulated plasma cortisol concentrations 1 week after injection; the latter change persisted 2 weeks after injection. Adrenocortical function in 1 of 4 dogs given TCA remained suppressed for 4 weeks. In contrast, prednisone did not significantly alter adrenocortical function. Although intravascular eosinophil concentrations did not vary among groups, skin responses to intradermally injected histamine phosphate were reduced 6 days after prednisone and TCA were given. abstract_id: PUBMED:15798144 Anaphylaxis induced by glucocorticoids. Glucocorticoids are frequently used to treat allergic reactions. Therefore, allergic reactions to systemic glucocorticoids in particular are considered most unlikely and are not well known. We report on a 23-year-old woman with atopic dermatitis who had an anaphylactic reaction after oral administration of prednisolone. On treatment with epinephrine, antihistamines, and volume replacement, the symptoms resolved. Skin testing with a panel of glucocorticoids showed immediate-type reactions to prednisolone, prednisolone hydrogen succinate, prednisone, and betamethasone dihydrogen phosphate. In challenge testing the patient tolerated methylprednisolone and dexamethasone. There is increasing evidence that true allergic immediate-type reactions to glucocorticoids exist. The severity of the reaction can vary from a rash to anaphylaxis. However, a patient sensitized to one or a group of glucocorticoids does not have to refrain from all types of glucocorticoids. Careful challenge testing is by far the best way to select glucocorticoids that are safe for future treatment. Clinicians should be aware that allergic reactions to glucocorticoids can occur and that worsening of symptoms does not always mean treatment failure. Answer: The histamine skin test can be affected by prednisone. According to the abstract from PUBMED:6283956, skin responses to intradermally injected histamine phosphate were reduced after dogs were given prednisone. This suggests that prednisone can suppress the skin reaction to histamine, which is a common method used to assess allergic responses in skin testing. Therefore, it can be inferred that prednisone may inhibit the histamine skin test.
Instruction: Can mass media influence emergency department visits for stroke? Abstracts: abstract_id: PUBMED:17540967 Can mass media influence emergency department visits for stroke? Background And Purpose: Television advertising has been associated with significant increases in the knowledge of the warning signs of stroke among Ontarians aged 45 and older. However, to date there has been little data on the relationship between knowledge of the warning signs of stroke and behavior. Methods: Data on presentation to regional and enhanced district stroke center emergency departments were obtained from the Registry of the Canadian Stroke Network for a 31-month period between mid 2003 and the beginning of 2006. Public opinion polling was used to track knowledge of the warning signs of stroke among Ontarians aged 45 and older. Results: The public's awareness of the warning signs of stroke increased during 2003 to 2005, decreasing in 2006 after a 5-month advertising blackout. There was a significant increase in the mean number of emergency department visits for stroke over the study period. A campaign effect independent of year was observed for total presentations, presentation within 5 hours of last seen normal, and presentation within 2.5 hours. For TIAs there was a strong campaign effect but no change in the number of presentations by year. Conclusions: Continuous advertising may be required to build and sustain public awareness of the warning signs of stroke. There are many factors that may influence presentation for stroke and awareness of the warning signs may be only one. However, results of this study suggest there may be an important correlation between the advertising and emergency department presentations with stroke, particularly for TIAs. abstract_id: PUBMED:33489609 The Effect of the COVID-19 Pandemic on Emergency Department Visits for Neurological Diseases in Saudi Arabia. Introduction COVID-19 has been a gravitating topic in the past months, yet much information about this new virus is to be unraveled. The uncertainties about the virus and its effects have affected a lot of daily life activities. One of these affected activities is emergency department (ED) visits and how this disease might have changed people's perspective on when to go to an emergency. This study aims to assess the effect of the COVID-19 pandemic on emergency department visits for neurological conditions. Methods A retrospective record review study was conducted at King Abdul-Aziz University Hospital (KAUH) during the month of July 2020. The study included visits of patients with common neurological conditions (headache, seizures, and weakness), during December 2019 - May 2020 at KAUH. Information obtained from the medical records included demographic data, date of visit, the reason for the visit, history of a similar episode, number of ED visits during the past year, priority given at the ED, length of hospitalization, diagnosis of COVID-19 at KAUH, known chronic diseases, and whether brain imaging was performed with which kind of imaging. Descriptive analysis was conducted to assess the impact of the pandemic on ED visits and statistical analysis (chi-square test) was performed on ED visit data to assess for significance. Results There was a 24% reduction in the number of visits for common neurological symptoms (during the pandemic) time period in comparison to (pre-pandemic). However, some other variables have also shown an increase (during the pandemic) time period. 
Most notably, brain CT scans underwent an 11.3% increase during the pandemic time period (p=0.005). Some variables have shown no significant change, for example, the relationship between the time period and the reason for the visit (p=0.305). Conclusion: Multiple factors most likely contributed to the decrease in emergency department visits recorded in this study. One of the main reasons is the fear of catching COVID-19 infection by just visiting the hospitals. Considering these findings, it is paramount to raise awareness about when patients do need to go to the emergency department due to an acute neurological condition, regardless of any pandemic. abstract_id: PUBMED:30453207 Association of office-based provider visits with emergency department utilization among publicly insured stroke survivors. Objective: To evaluate the association between visits to office-based providers and Emergency Department (ED) utilization among stroke survivors. Methods: We analyzed 12 years of data representing a weighted sample of 3,317,794 publicly insured US adults aged ≥18 years with stroke, using the Medical Expenditure Panel Survey Household Component (MEPS-HC), 2003-2014 dataset. We used a negative binomial regression model that accounts for dispersion to estimate the association between office-based and ED visits controlling for covariates. We used a multivariate logistic regression model to identify independent predictors of ED visits. Results: Annual mean (SD) ED visits and office-based visits for publicly insured stroke survivors were 0.60 (1.10) and 12.2 (19.9) respectively. Each unit increase in office-based visits was associated with a 1% increase in ED visits (p = 0.008). Being unmarried (adjusted OR = 1.26; 95% CI: 1.015-1.564) and having several comorbidities (adjusted OR = 1.93; 95% CI: 1.553-2.412) were associated with a higher likelihood of at least one ED visit. The odds for an ED visit for individuals aged 45-64, those aged 65 years and above, and those with a college or higher level of education were respectively 34% (OR = 0.66; 95% CI: 0.454-0.965), 52% (OR = 0.48; 95% CI: 0.330-0.701), and 36% (OR = 0.64; 95% CI: 0.497-0.834) lower than their counterparts. Conclusions: Contrary to our expectations, there was a direct relationship between ED visits and office-based visits among U.S. stroke survivors. This finding may reflect the difficulties associated with managing stroke survivors with multiple co-morbidities or complex psycho-socio-economic issues. abstract_id: PUBMED:37780446 Impact of desert dust storms, PM10 levels and daily temperature on mortality and emergency department visits due to stroke. Objective: It is known that the inhalation of air pollutants adversely affects human health. These air pollutants originate from natural sources such as desert storms or from human activities including traffic, power generation, domestic heating, etc. This study aimed to investigate the impacts of desert dust storms, particulate matter ≤10 μm (PM10) and daily maximum temperature (MT) on mortality and emergency department (ED) visits due to stroke in the city of Gaziantep, Southeast Turkey. Method: The data on mortality and ED visits due to stroke were retrospectively recruited from January 1, 2009, to March 31, 2014, in Gaziantep City Centre.
Results: PM10 levels did not affect ED visits or mortality due to stroke; however, MT increased both ED visits [adjusted odds ratio (OR) = 1.002, 95% confidence interval (CI) = 1.001-1.003] and mortality (OR = 1.006, 95% CI = 0.997-1.014) due to stroke in women. The presence of desert storms increased ED visits due to stroke in the total population (OR = 1.219, 95% CI = 1.199-1.240), and all subgroups. It was observed that desert dust storms did not have an increasing effect on mortality. Conclusion: Our findings suggest that MT and desert dust storms can induce morbidity and mortality due to stroke. abstract_id: PUBMED:38254140 Time-series analysis of temperature variability and cardiovascular emergency department visits in Atlanta over a 27-year period. Background: Short-term temperature variability, defined as the temperature range occurring within a short time span at a given location, appears to be increasing with climate change. Such variation in temperature may influence acute health outcomes, especially cardiovascular diseases (CVD). Most research on temperature variability has focused on the impact of within-day diurnal temperature range, but temperature variability over a period of a few days may also be health-relevant through its impact on thermoregulation and autonomic cardiac functioning. To address this research gap, this study utilized a database of emergency department (ED) visits for a variety of cardiovascular health outcomes over a 27-year period to investigate the influence of three-day temperature variability on CVD. Methods: For the period of 1993-2019, we analyzed over 12 million CVD ED visits in Atlanta using a Poisson log-linear model with overdispersion. Temperature variability was defined as the standard deviation of the minimum and maximum temperatures during the current day and the previous two days. We controlled for mean temperature, dew point temperature, long-term time trends, federal holidays, and day of week. We stratified the analysis by age group, season, and decade. Results: All cardiovascular outcomes assessed, except for hypertension, were positively associated with increasing temperature variability, with the strongest effects observed for stroke and peripheral vascular disease. In stratified analyses, adverse associations with temperature variability were consistently highest in the moderate-temperature season (October and March-May) and in the 65 + age group for all outcomes. Conclusions: Our results suggest that CVD morbidity is impacted by short-term temperature variability, and that patients aged 65 and older are at increased risk. These effects were more pronounced in the moderate-temperature season and are likely driven by the Spring season in Atlanta. Public health practitioners and patient care providers can use this knowledge to better prepare patients during seasons with high temperature variability or ahead of large shifts in temperature. abstract_id: PUBMED:37776981 Unplanned Emergency Department Visits Following Revision Total Joint Arthroplasty: Incidences, Risk Factors, and Mortalities. Background: The incidence of unplanned emergency department (ED) visits following revision total joint arthroplasty is an indicator of the quality of postoperative care. The aim of this study was to investigate the incidences, timings, and characteristics of ED visits within 90 days after revision total joint arthroplasty. 
Methods: A retrospective review of 457 consecutive cases, including 254 revision total hip arthroplasty (rTHA) and 203 revision total knee arthroplasty (rTKA) cases, was conducted. Data regarding patient demographics, timings of the ED encounter, chief complaints, readmissions, and diagnoses indicating reoperation were analyzed. Results: The results showed that 41 patients who had rTHA (16.1%) and 14 patients who had rTKA (6.9%) returned to the ED within 90 days postoperatively. The incidence of ED visits was significantly higher in the rTHA group than in the rTKA group (P = .003). The most common surgery-related complications were dislocation among rTHA patients and wound conditions among rTKA patients. Apart from elevated calculated comorbidity scores, peptic ulcer in rTHA patients and cerebral vascular events and chronic obstructive pulmonary disease in rTKA patients might increase chances of unplanned ED visits. Patients who had ED visits showed significantly higher mortality rates than the others in both rTHA and rTKA cohorts (P = .050 and P = .008, respectively). Conclusions: The ED visits within 90 days are more common after rTHA than after rTKA. Patients in both ED visit groups after rTHA and rTKA demonstrated worse survival. Efforts should be made to improve quality of care to prevent ED visits. abstract_id: PUBMED:32423155 Temporal Trends of Emergency Department Visits of Patients with Atrial Fibrillation: A Nationwide Population-Based Study. We aimed to describe temporal trends in emergency department (ED) visits of patients with atrial fibrillation (AF) over 12 years. A repeated cross-sectional analysis of ED visits in AF patients using the Korean nationwide claims database between 2006 and 2017 were conducted. We identified AF patients who had ≥1 ED visits. The incidence of ED visits among total AF population, cause of ED visit, and clinical outcomes were evaluated. During 12 years, the annual numbers of AF patients who attended ED at least once a year continuously increased (40,425 to 99,763). However, the annual incidence of ED visits of AF patients was stationary at about 30% because the number of total AF patients also increased during the same period. The most common cause of ED visits was cerebral infarction. Although patients had a higher risk profile over time, the 30-day and 90-day mortality after ED visit decreased over time. ED visits due to ischemic stroke, intracranial hemorrhage, and myocardial infarction decreased, whereas ED visits due to AF, gastrointestinal bleeding, and other major bleeding slightly increased among total AF population over 12 years. A substantial proportion of AF patients attended ED every year, and the annual numbers of AF patients who visited the ED significantly increased over 12 years. Optimized management approaches in a holistic and integrated manner should be provided to reduce ED visits of AF patients. abstract_id: PUBMED:33877519 An effect of 24-hour temperature change on outpatient and emergency and inpatient visits for cardiovascular diseases in northwest China. Some studies suggested that 24-h temperature change (TC24) was one of the potential risk factors for human health. However, evidence of the short-term effect of TC24 on outpatient and emergency department (O&amp;ED) visits and hospitalizations for cause-specific cardiovascular diseases (CVDs) is still limited. 
The aim of this study is to explore the short-term effects of TC24 on O&ED visits and hospitalizations for CVDs in northwest China, which is an area with large temperature variation. The O&ED visit records for CVDs of 3 general hospitals and the inpatient records for CVDs of 4 general hospitals were collected from January 1, 2013, to December 31, 2016, in Jinchang City, northwest China. Meteorological and air pollution data were also obtained during the same study period from the local meteorological monitoring station and environmental monitoring station, respectively. A generalized additive model (GAM) with Poisson regression was employed to analyze the effects of TC24 on O&ED visits and hospitalizations for CVDs. V-shaped relationships were found between TC24 and O&ED visits and hospitalizations for CVDs, including total CVD, hypertension, coronary heart disease (CHD) and stroke. Stratified analysis showed that men and patients over 65 years old were more susceptible to temperature changes. The estimates in non-heating months were higher than in the full year. TC24 can affect O&ED visits and hospitalizations for CVDs in this study. This study provides useful data for policy makers to better prepare local responses to the impact of changes in temperature on population health. abstract_id: PUBMED:33955546 Emergency department visits for emergent conditions among older adults during the COVID-19 pandemic. Background/objective: Emergency department (ED) visits have declined while excess mortality, not attributable to COVID-19, has grown. It is not known whether older adults are accessing emergency care differently from their younger counterparts. Our objective was to determine patterns of ED visit counts for emergent conditions during the COVID-19 pandemic for older adults. Design: Retrospective, observational study. Setting: Observational analysis of ED sites enrolled in a national clinical quality registry. Participants: One hundred and sixty-four ED sites in 33 states from January 1, 2019 to November 15, 2020. Main Outcome And Measures: We measured daily ED visit counts for acute myocardial infarction (AMI), stroke, sepsis, fall, and hip fracture, as well as deaths in the ED, by age categories. We estimated Poisson regression models comparing early and post-early pandemic periods (defined by the Centers for Disease Control and Prevention) to the pre-pandemic period. We report incident rate ratios to summarize changes in visit incidence. Results: For AMI, stroke, and sepsis, the older (75-84) and oldest old (85+ years) had the greatest decline in visit counts initially and the smallest recovery in the post-early pandemic periods. For falls, visits declined early and partially recovered uniformly across age categories. In contrast, hip fractures exhibited less change in visit rates across time periods. Deaths in the ED increased during the early pandemic period, but then fell and were persistently lower than baseline, especially for the older (75-84) and oldest old (85+ years). Conclusions: The decline in ED visits for emergent conditions among older adults has been more pronounced and persistent than for younger patients, with fewer deaths in the ED. This is concerning given the greater prevalence and risk of poor outcomes for emergent conditions in this age group that are amenable to time-sensitive ED diagnosis and treatment, and may in part explain excess mortality during the COVID-19 era among older adults.
abstract_id: PUBMED:35548587 Effect of the COVID-19 Pandemic on Emergency Department Visits of Patients with an Emergent or Urgent Diagnosis. Purpose: During the coronavirus disease 2019 (COVID-19) pandemic, visits to the emergency department (ED) have significantly declined worldwide. The purpose of this study was to identify the trend of visits to the ED for different diseases at the peak and slack stages of the epidemic. Patients And Methods: This was a retrospective observational study conducted in a tertiary referral medical center in northern Taiwan. We recorded weekly ED visits for myocardial infarction with or without ST-elevation (STEMI or NSTEMI), out-of-hospital cardiac arrest (OHCA), acute stroke, and congestive heart failure from 2016 to 2021. We compared the local epidemic peak periods (calendar weeks 4-18, 2020 and calendar weeks 21-31, 2021) and their corresponding slack periods (calendar weeks 4-18, 2021 and calendar weeks 21-31, 2020) with the baseline period (2016-2019) using the Mann-Whitney test to identify the difference. Results: We observed a significant decline in ED visits (median [Q1, Q3]) during the epidemic for OHCA (6 [5, 7] and 5 [4, 6], p = 0.046, for baseline and peak period, respectively, in week 4-18), acute stroke (41.5 [38, 47] and 35 [28, 39], p < 0.001, in week 4-18, 40 [35, 45] and 35 [28, 40], p = 0.039, in week 21-31) and CHF (28 [24.25, 33] and 19 [12, 23], p < 0.001, in week 4-18, 18 [16, 23] and 13 [11, 16], p = 0.001, in week 21-31). A significant difference was not observed in patients with NSTEMI and STEMI in both weeks 4-18 and 21-31, and cardiac arrest in week 21-31. There was a rebound in ED visits in the slack period. Conclusion: This study revealed that ED visits significantly declined during the COVID-19 epidemic and rebounded in the slack period. The trend was significant for acute stroke and heart failure but the effect was relatively less prominent for emergent events such as cardiac arrest or myocardial infarction. Answer: Yes, mass media can influence emergency department visits for stroke. A study conducted in Ontario, Canada, found that television advertising was associated with significant increases in the knowledge of the warning signs of stroke among Ontarians aged 45 and older. The study observed that public awareness of the warning signs of stroke increased during 2003 to 2005, which corresponded with a significant increase in the mean number of emergency department visits for stroke over the same period. The study also noted a campaign effect independent of year for total presentations, presentation within 5 hours of last seen normal, and presentation within 2.5 hours. For transient ischemic attacks (TIAs), there was a strong campaign effect but no change in the number of presentations by year. The study concluded that continuous advertising may be required to build and sustain public awareness of the warning signs of stroke and suggested an important correlation between advertising and emergency department presentations with stroke, particularly for TIAs (PUBMED:17540967).
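The count-based analyses described in the abstracts above (Poisson log-linear models of daily ED visit counts, incidence rate ratios, and campaign or period indicators) can be illustrated with a small, self-contained sketch. The data, variable names, and effect size below are entirely hypothetical and are not taken from any of the cited studies; the sketch only shows, under those assumptions, how a Poisson regression of daily stroke-related ED visit counts on an advertising-campaign indicator could be fit and how the fitted coefficient is read as an incidence rate ratio.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical daily data: a campaign on/off flag and a day-of-week control
rng = np.random.default_rng(0)
n_days = 365
df = pd.DataFrame({
    "campaign_on": rng.integers(0, 2, n_days),  # 1 = advertising campaign running
    "weekday": rng.integers(0, 7, n_days),
})
# Simulated visit counts with a modest campaign effect (illustration only)
df["visits"] = rng.poisson(np.exp(1.5 + 0.15 * df["campaign_on"]))

# Poisson log-linear model of daily ED visit counts on the campaign indicator
model = smf.glm("visits ~ campaign_on + C(weekday)", data=df,
                family=sm.families.Poisson()).fit()
# exp(coefficient) is interpreted as an incidence rate ratio (IRR):
# values above 1 indicate more visits on campaign days, all else equal
print("IRR for campaign_on:", float(np.exp(model.params["campaign_on"])))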
Instruction: Pseudomonas aeruginosa-catecholamine inotrope interactions: a contributory factor in the development of ventilator-associated pneumonia? Abstracts: abstract_id: PUBMED:22556319 Pseudomonas aeruginosa-catecholamine inotrope interactions: a contributory factor in the development of ventilator-associated pneumonia? Background: Ventilated patients receiving intensive care are at significant risk of acquiring a ventilator-associated pneumonia that is associated with significant morbidity and mortality. Despite intensive research, it is still unclear why Pseudomonas aeruginosa, a microbe that rarely causes pneumonia outside of intensive care, is responsible for so many of these infections. Methods: We investigated whether medications frequently prescribed to patients in the ICU, the catecholamine inotropes, were affecting the growth and virulence of P. aeruginosa. Effects of clinically attainable concentrations of inotropes on P. aeruginosa pathogenicity were explored using in vitro growth and virulence assays and an ex vivo model of infection using ciliated human respiratory epithelium. Results: We found that inotropes were potent stimulators of P. aeruginosa growth, producing up to 50-fold increases in bacterial numbers via a mechanism involving inotrope delivery of transferrin-iron, internalization of the inotrope, and upregulation of the key pseudomonal siderophore pyoverdine. Inotropes also markedly increased biofilm formation on endotracheal tubing and enhanced the biofilm production and toxicity of P. aeruginosa in its interaction with respiratory epithelium. Importantly, catecholamine inotropes also facilitated the rapid recovery of P. aeruginosa from tobramycin antibiotic challenge. We also tested the effect of the inotropes vasopressin and phenylephrine on the growth and virulence of P. aeruginosa and found that, in contrast to the catecholamines, these drugs had no stimulatory effect. Conclusions: Collectively, our results suggest that catecholamine inotrope-bacterial interactions may be an unexpected contributory factor to the development of P. aeruginosa ventilator-associated pneumonia. abstract_id: PUBMED:16854339 Pneumonia due to Pseudomonas aeruginosa. Pseudomonas aeruginosa is one of the leading causes of Gram-negative nosocomial pneumonia. It is the most common cause of ventilator-associated pneumonia and carries the highest mortality among hospital-acquired infections. P. aeruginosa produces a large number of toxins and surface components that make it especially virulent compared with other microorganisms. These include pili, flagella, membrane-bound lipopolysaccharide, and secreted products such as exotoxins A, S and U, elastase, alkaline protease, cytotoxins and phospholipases. The most common mechanism of infection in mechanically ventilated patients is through aspiration of upper respiratory tract secretions previously colonized in the process of routine nursing care or via contaminated hands of hospital personnel. Intravenous therapy with an antipseudomonal regimen should be started immediately when P. aeruginosa pneumonia is suspected or confirmed. Empiric therapy with drugs active against P. aeruginosa should be started, especially in patients who have received previous antibiotics or present with late-onset pneumonia. abstract_id: PUBMED:19259632 Reservoirs of Pseudomonas aeruginosa in the intensive care unit.
The role of tap water as a source of infection. In spite of significant changes in the spectrum of organisms causing nosocomial infections in intensive care units (ICUs), Pseudomonas aeruginosa has held a nearly unchanged position as an important pathogen. Today, the organism is isolated as the second most frequent organism causing ventilator-associated pneumonia, and the third or fourth most frequent pathogen causing septicemia, urinary tract infections, and surgical wound infections. In the past, horizontal transmissions were regarded as the most relevant route of strain acquisition. However, during the last 10 years, a significant proportion of P. aeruginosa isolates were demonstrated to stem from ICU water sites. Studies using molecular typing techniques have shown that up to 50% (in one study 92%) of nosocomial P. aeruginosa acquisitions may result from transmission through tap water. Additional proof of concept of waterborne infection comes from the reports of three recent studies that infection rates may be lowered significantly by eliminating colonized tap water sources or interrupting transmission chains from water sites. abstract_id: PUBMED:19334381 Colistin use in ventilator-associated pneumonia due to panresistant Pseudomonas aeruginosa and Acinetobacter baumannii. Nosocomial infections by resistant gram-negative microorganisms are important causes of mortality in intensive care units (ICUs). The treatment choices are limited in infections due to Acinetobacter baumannii and Pseudomonas aeruginosa, especially if they are panresistant. In these types of resistant infections, colistin--an old antibiotic--has become a current issue. The aim of this study was to evaluate the efficacy of colistin in 9 cases (6 males, mean age 75.8 +/- 9.4 years) with ventilator-associated pneumonia (VAP) caused by panresistant A. baumannii and P. aeruginosa in a respiratory ICU. All cases were referred to the ICU from other hospitals or clinics. It was detected that 7 of 9 cases were treated with anti-pseudomonal antibiotics before the development of VAP. Panresistant A. baumannii was isolated in 5 cases and P. aeruginosa in 4 cases. VAP by these microorganisms was detected on day 26.6 +/- 12.4 of invasive mechanical ventilation and the cases were followed up for 54.2 +/- 25.7 days in the ICU. During colistin treatment, dermatitis (one case) and nephrotoxicity (one case) were observed as side effects. Microbiological response to colistin was obtained in 6 cases. Three cases died due to non-eradication of panresistant microorganisms and three cases died due to other infections during ICU follow-up. The data presented in this study demonstrate that colistin can be considered as a safe and effective antibiotic in the treatment of panresistant A. baumannii and P. aeruginosa infections. abstract_id: PUBMED:35891262 Pseudomonas aeruginosa: Recent Advances in Vaccine Development. Pseudomonas aeruginosa is an important opportunistic human pathogen. Using its arsenal of virulence factors and its intrinsic ability to adapt to new environments, P. aeruginosa causes a range of complicated acute and chronic infections in immunocompromised individuals. Of particular importance are burn wound infections, ventilator-associated pneumonia, and chronic infections in people with cystic fibrosis. Antibiotic resistance has rendered many of these infections challenging to treat and novel therapeutic strategies are limited.
Multiple clinical studies using well-characterised virulence factors as vaccine antigens over the last 50 years have fallen short, resulting in no effective vaccination being available for clinical use. Nonetheless, progress has been made in preclinical research, namely, in the realms of antigen discovery, adjuvant use, and novel delivery systems. Herein, we briefly review the scope of P. aeruginosa clinical infections and its major important virulence factors. abstract_id: PUBMED:18453277 Shifting paradigms in Pseudomonas aeruginosa biofilm research. Biofilms formed by Pseudomonas aeruginosa have long been recognized as a challenge in clinical settings. Cystic fibrosis, endocarditis, device-related infections, and ventilator-associated pneumonia are some of the diseases that are considerably complicated by the formation of bacterial biofilms, which are resistant to most current antimicrobial therapies. Due to intense research efforts, our understanding of the molecular events involved in P. aeruginosa biofilm formation, maintenance, and antimicrobial resistance has advanced significantly. Over the years, several dogmas regarding these multicellular structures have emerged. However, more recent data reveal a remarkable complexity of P. aeruginosa biofilms and force investigators to continually re-evaluate previous findings. This chapter provides examples in which paradigms regarding P. aeruginosa biofilms have been challenged, reflecting the need to critically re-assess what is emerging in this rapidly growing field. In this process, several avenues of research have been opened that will ultimately provide the foundation for the development of preventative measures and therapeutic strategies to successfully treat P. aeruginosa biofilm infections. abstract_id: PUBMED:31130925 Tolerance and Resistance of Pseudomonas aeruginosa Biofilms to Antimicrobial Agents-How P. aeruginosa Can Escape Antibiotics. Pseudomonas aeruginosa is one of the six bacterial pathogens, Enterococcus faecium, Staphylococcus aureus, Klebsiella pneumoniae, Acinetobacter baumannii, Pseudomonas aeruginosa, and Enterobacter spp., which are commonly associated with antimicrobial resistance, and denoted by their acronym ESKAPE. P. aeruginosa is also recognized as an important cause of chronic infections due to its ability to form biofilms, where the bacteria are present in aggregates encased in a self-produced extracellular matrix and are difficult or impossible to eradicate with antibiotic treatment. P. aeruginosa causes chronic infections in the lungs of patients with cystic fibrosis and chronic obstructive lung disease, as well as chronic urinary tract infections in patients with permanent bladder catheter, and ventilator-associated pneumonia in intubated patients, and is also an important pathogen in chronic wounds. Antibiotic treatment cannot eradicate these biofilm infections due to their intrinsic antibiotic tolerance and the development of mutational antibiotic resistance. The tolerance of biofilms to antibiotics is multifactorial involving physical, physiological, and genetic determinants, whereas the antibiotic resistance of bacteria in biofilms is caused by mutations and driven by the repeated exposure of the bacteria to high levels of antibiotics. In this review, both the antimicrobial tolerance and the development of resistance to antibiotics in P. aeruginosa biofilms are discussed. 
Possible therapeutic approaches based on the understanding of the mechanisms involved in the tolerance and resistances of biofilms to antibiotics are also addressed. abstract_id: PUBMED:17940409 Strategies for management of difficult to treat Gram-negative infections: focus on Pseudomonas aeruginosa Pseudomonas aeruginosa is often involved in the aetiology of numerous infections, particularly those occurring in hospital. The infections in which P. aeruginosa most frequently has a pathogenic role include respiratory tract infections, particularly those occurring in patients with chronic obstructive pulmonary disease (COPD), nosocomial pneumonia, ventilator-associated pneumonia, and cystic fibrosis, as well as those developing in patients with AIDS, bacteraemia, sepsis, urinary tract infections, especially those related to catheterisation or kidney transplants, infections in neutropenic patients, and skin infections, particular those developing in surgical wounds or in burns. Thus, in practice, P. aeruginosa is ubiquitously present in all body districts. Particular attention should also be given to the presence of P. aeruginosa in the community setting, for example when it causes community-acquired pneumonia in the elderly or pneumonia in patients with advanced stage COPD. The mortality rate of patients with severe P. aeruginosa infections is very high. Treatment should be initiated very promptly with the most suitable drug, perhaps making use of combination therapy with a beta-lactam and a fluoroquinolone when indicated, and continued for a sufficiently long period. As far as concerns future therapeutic options for the treatment of P. aeruginosa infections, the only two new molecules that will probably become available are doripenem and ceftobiprole. Given this prospective, trust must be placed in the already known drugs, exploiting them more appropriately. abstract_id: PUBMED:16427231 Targeting mechanisms of Pseudomonas aeruginosa pathogenesis. Pseudomonas aeruginosa is an opportunistic pathogen responsible for ventilator-acquired pneumonia, acute lower respiratory tract infections in immunocompromised patients and chronic respiratory infections in cystic fibrosis patients. High incidence, infection severity and increasing resistance characterize P. aeruginosa infections, highlighting the need for new therapeutic options. One such option is to target the many pathogenic mechanisms conferred to P. aeruginosa by its large genome encoding many different virulence factors. This article reviews the pathogenic mechanisms and potential therapies targeting these mechanisms in P. aeruginosa respiratory infections. abstract_id: PUBMED:17905537 Pseudomonas aeruginosa and Candida albicans interaction in the respiratory tract: from pathophysiology to a therapeutic perspective Several studies evidenced a pathogenic interaction between P. aeruginosa and C. albicans. P. aeruginosa is one of the most frequent microorganisms responsible for ventilator-associated pneumonia (VAP) and C. albicans usually colonize tracheo-bronchial tract of patients undergoing mechanical ventilation in the intensive care unit. In vitro, P. aeruginosa exploits filamentous fungi resulting in fungal killing and limitation of C. albicans growth in the host. Biofilm also play an important role as a persistent source of infectious material for recurrent episodes of VAP. Indeed, Candida spp and P. aeruginosa are the most common pathogens retrieved from endotracheal tube biofilm and tracheal secretions in patients with VAP. 
Finally, it has been demonstrated that previous tracheo-bronchial C. albicans colonization enhanced the incidence of P. aeruginosa pneumonia in a murine model. A recent study performed in patients requiring intubation and mechanical ventilation identified Candida spp tracheo-bronchial colonization as a risk factor for P. aeruginosa VAP. In addition, a retrospective study suggested that antifungal treatment might reduce P. aeruginosa VAP or tracheo-bronchial colonization in intubated patients with Candida spp tracheo-bronchial colonization. These interactions have major environmental and medical consequences. Experimental studies providing a better understanding of the mechanisms of interaction and clinical studies evaluating the necessity of a antifungal treatment might improve the management of these opportunistic infections. Answer: The interactions between catecholamine inotropes and Pseudomonas aeruginosa have been suggested as a potential contributory factor in the development of ventilator-associated pneumonia (VAP). Research has shown that catecholamine inotropes, which are medications frequently prescribed to patients in the ICU, can significantly stimulate the growth and virulence of P. aeruginosa. These inotropes can lead to up to a 50-fold increase in bacterial numbers through a mechanism that involves the delivery of transferrin-iron, internalization of the inotrope, and upregulation of the key pseudomonal siderophore pyoverdine. Additionally, catecholamine inotropes have been found to markedly increase biofilm formation on endotracheal tubing and enhance biofilm production and toxicity of P. aeruginosa in its interaction with respiratory epithelium. Importantly, these inotropes also facilitate the rapid recovery of P. aeruginosa from antibiotic challenges, such as those posed by tobramycin. In contrast, other drugs like vasopressin and phenylephrine did not show a stimulatory effect on the growth and virulence of P. aeruginosa (PUBMED:22556319). Given that P. aeruginosa is a leading cause of Gram-negative nosocomial pneumonia and the most common cause of VAP with high mortality rates (PUBMED:16854339), understanding the factors that contribute to its virulence and growth in the ICU setting is critical. The findings that catecholamine inotropes can enhance the pathogenicity of P. aeruginosa suggest that these drug-bacterial interactions may indeed be an unexpected factor in the development of VAP caused by this microorganism. Therefore, the use of catecholamine inotropes in ICU patients requiring ventilation should be carefully considered, and alternative medications that do not promote P. aeruginosa growth and virulence might be preferred when possible.
Instruction: Are there treatment variations in triage outcomes across out-of-hours co-ops? Abstracts: abstract_id: PUBMED:20003719 Are there treatment variations in triage outcomes across out-of-hours co-ops? Background: This study considers the factors that affect service provision for individuals who present to out-of-hours (OOH) primary care services in the Republic of Ireland and Northern Ireland. The organisations under consideration are known as OOH co-ops. Specifically, an individual can potentially receive one of four services: nurse advice, doctor advice, a treatment centre consultation or a home visit. Aim: The principal aim was to investigate whether service provision was consistent across co-ops once patient characteristics, patient complaints and other covariates were controlled for. In this paper, service provision was seen as a necessary but not sufficient condition for quality. Methods: A multinomial logit approach was used to model the choice between the three services offered by co-ops. Results: The results indicate that service provision was relatively homogenous across co-ops. Conclusions: Quality was consistent across co-ops in terms of service provision. Therefore the next step is to consider whether quality within the treatment received varies. Nevertheless, the result provides some support for using OOH co-ops as a means to provide OOH primary care. abstract_id: PUBMED:28657243 Nurse Triage in an Irish Out-of-hours General Practice Co-Operative. Specially trained triage nurses play a crucial role in the operation of out-of-hours GP co-operatives. This study aimed to establish the proportion of all patient contacts with the out-of-hours GP co-operative based in the Mid-West of Ireland (Shannondoc), which were managed by triage nurses. A retrospective, descriptive analysis was conducted on the database of contacts to the Shannondoc urgent, out-of-hours primary care co-operative. Of the 110,039 contacts to the service in 2013, 19,147 (17.4%) were classified as being managed by nurses and 14.2% were managed by nurse telephone triage alone. Twenty-four percent of the 19,147 calls managed by nurses involved children under six years. Triage nurses play an important role in administering safe medical advice over the phone. This has implications for the training of triage nurses and the future planning of urgent out-of-hours primary care services. abstract_id: PUBMED:27484430 Triage in primary care: overkill? Based on triage during out-of-hours emergency services with physical contact with patients, the Dutch Triage Standard - a telephone triage algorithm - has been developed for use in primary care out-of-hours services. However, it is also used in the daytime setting. We argue that this tool should be evaluated by actually evaluating the telephone contacts that are backed up during triage and using the final diagnoses of these contacts as the reference standard. We have serious doubts whether the Dutch Triage Standard is an effective tool in the primary care daytime setting with its very low prevalence of high urgency. Adequate triage is time consuming, and may result in reduced accessibility thus creating critical situations. Well-evaluated pilots should precede large-scale implementation of decision support systems. abstract_id: PUBMED:33124524 How accurate is telephone triage in out-of-hours care? An observational trial in real patients. 
Objectives: Patients in Belgium needing out-of-hours medical care have two options: the emergency department (ED) or a general practitioner (GP) on call. Currently, there is no triage system in Belgium, so patients do not know where they should go. However, patients who could be managed by a GP frequently present themselves at an ED without referral. GPs often organise themselves in a General Practitioners Cooperative (GPC). This study assesses the accuracy of a newly developed telephone triage guideline. Methods: Observational real-time simulation: all walk-in patients at two GPCs and three EDs were asked to call a triage telephone number with their current medical problem. The operator handling this call registered an urgency level and a resource (ED, GP or ambulance) to deploy. The treating physician's opinion was used a the gold standard for correct triage. Patients were not informed about the outcome of the triage and continued the standard care path they had chosen. Results: The overall sensitivity of the telephone triage for detecting patients who could be managed by a GP was 82% with a specificity of 53%. The correctness of the advice given by the operator according to the physicians was 71%, with 12% underestimation of urgency and 17% overestimation. At the GPC, the sensitivity for detecting patients requiring GP management/care was 91% with a specificity of 36%. At the ED, the sensitivity for detecting GP patients was 67% with a specificity of 48%. Conclusion: This study evaluates a new guideline for telephone triage, showing potential overtriage for patients wanting to attend the GPC, with possible inefficiency, and potential undertriage for patients wanting to attend the ED, with possible safety issues. abstract_id: PUBMED:22126218 Safety of telephone triage in out-of-hours care: a systematic review. Objective: Telephone triage in patients requesting help may compromise patient safety, particularly if urgency is underestimated and the patient is not seen by a physician. The aim was to assess the research evidence on safety of telephone triage in out-of-hours primary care. Methods: A systematic review was performed of published research on telephone triage in out-of-hours care, searching in PubMed and EMBASE up to March 2010. Studies were included if they concerned out-of-hours medical care and focused on telephone triage in patients with a first request for help. Study inclusion and data extraction were performed by two researchers independently. Post-hoc two types of studies were distinguished: observational studies in contacts with real patients (unselected and highly urgent contacts), and prospective observational studies using high-risk simulated patients (with a highly urgent health problem). Results: Thirteen observational studies showed that on average triage was safe in 97% (95% CI 96.5-97.4%) of all patients contacting out-of-hours care and in 89% (95% CI 86.7-90.2%) of patients with high urgency. Ten studies that used high-risk simulated patients showed that on average 46% (95% CI 42.7-49.8%) were safe. Adverse events described in the studies included mortality (n = 6 studies), hospitalisations (n = 5), attendance at emergency department (n=1), and medical errors (n = 6). Conclusions: There is room for improvement in safety of telephone triage in patients who present symptoms that are high risk. As these have a low incidence, recognition of these calls poses a challenge to health care providers in daily practice. 
abstract_id: PUBMED:31266836 Optimisation of telephone triage of callers with symptoms suggestive of acute cardiovascular disease in out-of-hours primary care: observational design of the Safety First study. Introduction: In the Netherlands, the 'Netherlands Triage Standard' (NTS) is frequently used as digital decision support system for telephone triage at out-of-hours services in primary care (OHS-PC). The aim of the NTS is to guarantee accessible, efficient and safe care. However, there are indications that current triage is inefficient, with overestimation of urgency, notably in suspected acute cardiovascular disease. In addition, in primary care settings the NTS has only been validated against surrogate markers, and diagnostic accuracy with clinical outcomes as the reference is unknown. In the Safety First study, we address this gap in knowledge by describing, understanding and improving the diagnostic process and urgency allocation in callers with symptoms suggestive of acute cardiovascular disease, in order to improve both efficiency and safety of telephone triage in this domain. Methods And Analysis: An observational study in which 3000 telephone triage recordings (period 2014-2016) will be analysed. Information is collected from the recordings including caller and symptom characteristics and urgency allocation. The callers' own general practitioners are contacted for the final diagnosis of each contact. We included recordings of callers with symptoms suggestive of acute coronary syndrome (ACS) or transient ischaemic attack (TIA)/stroke. With univariable and multivariable logistic regression analyses the diagnostic accuracy of caller and symptom characteristics will be analysed in terms of predictive values with urgency level, and ACS and TIA/stroke as outcomes, respectively. To further improve our understanding of the triage process at OHS-PC, we will carry out additional studies applying both quantitative and qualitative methods: (i) case-control study on serious adverse events (SAE), (ii) conversation analysis study and (iii) interview study with triage nurses. Ethics And Dissemination: The Medical Ethics Committee Utrecht, the Netherlands endorsed this study (National Trial Register identification: NTR7331). Results will be disseminated at scientific conferences, regional educational sessions and publication in peer-reviewed journals. abstract_id: PUBMED:34081571 Reliability and validity of an original nurse telephone triage tool for out-of-hours primary care calls: the SALOMON algorithm. Objectives: Due to the persistent primary care physicians shortage and the substantial increase in their workload, the organization of primary care calls during out-of-hours periods has become an everyday challenge. The SALOMON algorithm is an original nurse telephone triage tool allowing to dispatch patients to the best level of care according to their conditions. This study evaluated its reliability and criterion validity in rea-life settings. Methods: In this 5-year study, out-of-hours primary care calls were dispatched into four categories: Emergency Medical Services Intervention (EMSI), Emergency Department referred Consultation (EDRC), Primary Care Physician Home visit (PCPH), and Primary Care Physician Delayed visit (PCPD). We included data of patients' triage category, resources, and destination. Patients included into the primary care cohort were classified undertriaged if they had to be redirected to an emergency department (ED). 
Patients from the ED cohort were considered overtriaged if they did not require at least three diagnostic resources, one emergency-specific treatment or any hospitalization. In the ED cohort, only patients from the University Hospitals were considered. Results: 10,207 calls were triaged using the SALOMON tool: 19.2% were classified as EMSI, 15.8% as EDRC, 62.8% as PCPH, and 2.2% as PCPD. The triage was appropriate for 85.5% of the calls with a 14.5% overtriage rate. In the PCPD/PCPH cohort, 96.9% of the calls were accurately triaged and 3.1% were undertriaged. SALOMON sensitivity and specificity reached 76.6% and 98.3%, respectively. Conclusion: SALOMON algorithm is a valid triage tool that has the potential to improve the organization of out-of-hours primary care work. abstract_id: PUBMED:20671002 Delays in response and triage times reduce patient satisfaction and enablement after using out-of-hours services. Background: several different models of out-of-hours primary care now exist in the UK. Important outcomes of care include users' satisfaction and enablement to manage their illness or condition, but the determinants of these outcomes in the unscheduled care domain are poorly understood. Aim. To identify predictors of user satisfaction and enablement across unscheduled care or GP out-of-hours service providers in Wales. The design of the study is a cross-sectional survey. The setting of the study is nine GP out-of-hours services, three Accident and Emergency units and an all Wales telephone advice service in Wales. Methods: postal survey using the Out-of-hours Patient Questionnaire. Logistic regression was used to fit both satisfaction and enablement models, based on demographic variables, service provider and treatment received and perceptions or ratings of the care process. Results: eight hundred and fifty-five of 3250 users responded (26% response rate, range across providers 14-41%, no evidence of non-response bias for age or gender). Treatment centre consultations were significantly associated with decreased patient satisfaction and decreased enablement compared with telephone advice. Delays in call answering or callback for triage and shorter consultations were significantly associated with lower satisfaction. Waiting more than a minute for initial call answering was associated with lower enablement. Conclusions: giving users more time to discuss their illness in consultations may enhance satisfaction and enablement but this may be resource intensive. More simple interventions to improve access by quicker response and triage, and keeping users informed of waiting times, could also serve to increase satisfaction and ultimately impact on their enablement. abstract_id: PUBMED:36597106 Effect of an educational intervention for telephone triage nurses on out-of-hours attendance: a pragmatic randomized controlled study. Background: Telephone triage has been established in many countries as a response to the challenge of non-urgent use of out-of-hours primary care services. However, limited evidence is available regarding the effect of training interventions on clinicians' telephone consultation skills and patient outcomes. Methods: This was a pragmatic randomized controlled educational intervention for telephone triage nurses in 59 Norwegian out-of-hours general practitioners' (GPs) cooperatives, serving 59% of the Norwegian population. Computer-generated randomization was performed at the level of out-of-hours GP cooperatives, stratified by the population size. 
Thirty-two out-of-hours GP cooperatives were randomized to intervention. One cooperative did not accept the invitation to participate in the educational programme, leaving 31 cooperatives in the intervention group. The intervention comprised a 90-minute e-learning course and 90-minute group discussion about respiratory tract infections (RTIs), telephone communication skills and local practices. We aimed to assess the effect of the intervention on out-of-hours attendance and describe the distribution of RTIs between out-of-hours GP cooperatives and list-holding GPs. The outcome was the difference in the number of doctor's consultations per 1000 inhabitants between the intervention and control groups during the winter months before and after the intervention. A negative binomial regression model was used for the statistical analyses. The model was adjusted for the number of nurses who had participated in the e-learning course, the population size and patients' age groups, with the out-of-hours GP cooperatives defined as clusters. Results: The regression showed that the intervention did not change the number of consultations for RTIs between the two groups of out-of-hours GP cooperatives (incidence rate ratio 0.99, 95% confidence interval 0.91-1.07). The winter season's out-of-hours patient population was younger and had a higher proportion of RTIs than the patient population in the list-holding GP offices. Laryngitis, sore throat, and pneumonia were the most common diagnoses during the out-of-hours primary care service. Conclusions: The intervention did not influence the out-of-hours attendance. This finding may be due to the intervention's limited scope and the intention-to-treat design. Changing a population's out-of-hours attendance is complicated and needs to be targeted at several organizational levels. abstract_id: PUBMED:37150153 Displaying concerns within telephone triage conversations of callers with chest discomfort in out-of-hours primary care: A conversation analytic study. Objectives: In primary care out of hours service (OHS-PC), triage nurses ask questions to assign urgency level for medical assessment. A semi-automatic decision tool (the Netherlands Triage Standard, NTS) facilitates triage nurses with key questions, but does not leave much room for paying attention to callers' concerns. We wanted to understand how callers with chest pain formulate their concerns and are helped further during telephone triage. Methods: We conducted a conversation analytic study of 68 triage calls from callers with chest discomfort who contacted OHS-PC of which we selected 35 transcripts in which concerns were raised. We analyzed expressions of concerns and the corresponding triage nurse response. Results: Due to the task-oriented nature of the NTS, callers' concerns were overlooked. For callers, however, discussing concerns was relevant, stressed by the finding that the majority of callers with chest discomfort expressed concerns. Conclusions: Interactional difficulties in concern-related discussions arised directly after expressed concerns if not handled adequately, or during the switch to the counseling phase. Practice Implications: When callers display concerns during telephone triage, we recommend triage nurses to explore them briefly and then return to the sequence of tasks described in the NTS-assisted triage process. 
Answer: The study with PUBMED:20003719 investigated whether service provision was consistent across out-of-hours (OOH) co-ops in the Republic of Ireland and Northern Ireland once patient characteristics, patient complaints, and other covariates were controlled for. The results indicated that service provision was relatively homogenous across co-ops, suggesting that there were not significant treatment variations in triage outcomes across the OOH co-ops studied. Quality was consistent in terms of service provision, leading to the conclusion that OOH co-ops are a viable means to provide OOH primary care.
Instruction: Pediatric inflammatory bowel disease: is it still increasing? Abstracts: abstract_id: PUBMED:9324622 Bactericidal permeability increasing protein (BPI)-ANCA mark chronic inflammatory bowel diseases and hepatobiliary diseases Background: Bactericidal permeability increasing protein (BPI) is an antibacterial product of neutrophilic granulocytes that can serve as target antigen for antineutrophil cytoplasmic antibodies (ANCA). The clinical associations of autoantibodies against BPI (BPI-ANCA) are essentially unclear. Patients And Methods: 587 sera from patients with chronic inflammatory bowel diseases, inflammatory hepatobiliary diseases, primary systemic vasculitides and other rheumatological diseases were examined for BPI-ANCA by mono-specific ELISA and a standard indirect immunofluorescence test for ANCA. Results: The prevalence of BPI-ANCA was 43% in ulcerative colitis, 23% in Crohn's disease, 35% in primary sclerosing cholangitis, 25% in primary biliary cirrhosis and 29% in autoimmune hepatitides. In a spectrum of systemic vasculitides, inflammatory joint diseases and collagen vascular diseases the prevalence was only 3 to 11%. In contrast to PR3-ANCA and MPO-ANCA, BPI-ANCA was not associated with a particular pattern of fluorescence in the immuno-fluorescence test on ethanol- and formalin-fixed neutrophils. Conclusion: This study shows that BPI-ANCA is the third ANCA specificity, besides PR3-ANCA and MPO-ANCA, with a limited spectrum of clinical associations. The diagnostic and prognostic relevance of BPI-ANCA in the above clinical conditions is being examined prospectively. abstract_id: PUBMED:28035462 Bactericidal/permeability increasing protein gene polymorphism and inflammatory bowel diseases: meta-analysis of five case-control studies. Objective: Bactericidal/permeability increasing protein (BPI) gene polymorphisms have been extensively investigated in terms of their associations with inflammatory bowel disease (IBD), with contradictory results. The aim of this meta-analysis was to evaluate associations between BPI gene polymorphisms and the risk of IBD, Crohn's disease (CD), and ulcerative colitis (UC). Methods: Eligible studies from PubMed, Embase, and Cochrane library databases were identified. Results: Ten studies (five CD and five UC) published in five papers were included in this meta-analysis. G645A polymorphism was associated with a decreased risk of UC in allele model, dominant model, and homozygous model. Conclusions: Our data suggested that BPI G645A polymorphism was associated with a decreased risk of UC; the BPI G645A polymorphism was not associated with the risk of CD. abstract_id: PUBMED:26228368 Bactericidal permeability increasing protein gene polymorphism is associated with inflammatory bowel diseases in the Turkish population.
Background/aims: Inflammatory bowel disease, a chronic inflammatory disease with unknown etiology, affects the small and large bowel at different levels. It is increasingly considered that innate immune system may have a central position in the pathogenesis of the disease. As a part of the innate immune system, bactericidal permeability increasing protein has an important role in the recognition and neutralization of gram-negative bacteria. The aim of our study was to investigate the involvement of bactericidal permeability increasing protein gene polymorphism (bactericidal permeability increasing protein Lys216Glu) in inflammatory bowel disease in a large group of Turkish patients. Patients And Methods: The present study included 528 inflammatory bowel disease patients, 224 with Crohn's disease and 304 with ulcerative colitis, and 339 healthy controls. Results: Bactericidal permeability increasing protein Lys216Glu polymorphism was found to be associated with both Crohn's disease and ulcerative colitis (P = 0.0001). The frequency of the Glu/Glu genotype was significantly lower in patients using steroids and in those with steroid dependence (P = 0.012, OR, 0.80; 95% confidence interval [CI]: 0.68-0.94; P = 0.0286, OR, 0.75; 95% CI: 0.66-0.86, respectively). There was no other association between bactericidal permeability increasing protein gene polymorphism and phenotypes of inflammatory bowel disease. Conclusions: Bactericidal permeability increasing protein Lys216Glu polymorphism is associated with both Crohn's disease and ulcerative colitis. This is the first study reporting the association of bactericidal permeability increasing protein gene polymorphism with steroid use and dependence in Crohn's disease. abstract_id: PUBMED:27366026 Neutrophil anti-neutrophil cytoplasmic autoantibody proteins: bactericidal increasing protein, lactoferrin, cathepsin, and elastase as serological markers of inflammatory bowel and other diseases. Inflammatory bowel disease (IBD) is a chronic inflammatory disorder of the gastrointestinal tract comprising Crohn's disease and ulcerative colitis. Although the pathogenesis of the disease is not clearly defined yet, environmental, genetic and other factors contribute to the onset of the disease. Apart from the clinical and histopathological findings, several serological biomarkers are also employed to detect IBD. One of the most thoroughly studied biomarker is anti-neutrophil cytoplasmic autoantibody (ANCA). We herein provide an overview of the current knowledge on the use of ANCA and certain ANCA proteins, such as bactericidal increasing protein, lactoferrin, cathepsin G and elastase, as serological markers for IBD and other diseases. abstract_id: PUBMED:10521980 Bactericidal/permeability-increasing protein in colonic mucosa in ulcerative colitis. Background/aims: Increased mucosal concentration of bactericidal/permeability-increasing protein (BPI) has been shown in inflammatory bowel diseases. The purpose of the present study was to investigate the relationship between the mucosal concentration of BPI and the grade of mucosal inflammation in ulcerative colitis. Methodology: Samples of colonic mucosa from 12 patients with ulcerative colitis and from 8 control patients were studied. The concentration of BPI in tissue extracts was measured by a time-resolved fluoroimmunoassay. The concentration of BPI was compared between samples with histological inflammatory changes of different severity. BPI was localized in tissue sections by immunohistochemistry. 
Results: The concentration of BPI was higher (p < 0.001) in samples of colonic mucosa from patients with ulcerative colitis (median: 3.2 micrograms/g, range: 0.3-22.6 micrograms/g) than in control samples (0.4 microgram/g, 0.1-0.6 microgram/g). Moreover, the concentration of BPI was higher (p = 0.015) in samples with severe inflammation (2.5 micrograms/g, 0.3-22.6 micrograms/g) than in those with mild inflammation (0.5 microgram/g, 0.3-2.5 micrograms/g). The concentration of BPI in mucosal samples correlated well with the degree of histological inflammation (Spearman R = 0.70, p = 0.01). BPI was localized in polymorphonuclear leukocytes in the mucosa and stroma of the colonic wall. Conclusions: The concentration of BPI is increased in the colonic mucosa of patients with ulcerative colitis. The increase in the concentration of BPI in colonic mucosa seems to be closely associated with the inflammatory activity of ulcerative colitis. abstract_id: PUBMED:17317612 From infection to autoimmunity: a new model for induction of ANCA against the bactericidal/permeability increasing protein (BPI). Antineutrophil cytoplasmic autoantibodies against the neutrophil granule bactericidal/permeability increasing protein (BPI-ANCA) have been found in diseases of different etiologies, such as cystic fibrosis, TAP deficiency or inflammatory bowel diseases. A common feature of these conditions is the chronic or profuse exposure of the host to Gram-negative bacteria and their endotoxin. BPI plays an important role in killing Gram-negative bacteria as well as neutralization and disposal of their endotoxin. During this interaction BPI can direct the delivery of complexes which contain endotoxin and bacterial outer membrane proteins to antigen presenting cells. Based on recent findings on how complexes of endotoxin and protein antigens need to be processed by dendritic cells in order to become presented on MHC class II molecules, a model can be proposed how Gram-negative bacterial infections can be linked to the generation of autoantibodies against BPI. abstract_id: PUBMED:8608882 Inflammatory bowel disease is associated with increased mucosal levels of bactericidal/permeability-increasing protein. Background & Aims: Clinical sepsis seldom accompanies inflammatory bowel disease. The aim of this study was to measure colonic mucosal levels of the neutrophil product bactericidal/permeability-increasing protein (BPI), which kills gram-negative bacteria in addition to inactivating endotoxin. Methods: Enzyme-linked immunosorbent assay and immunohistochemistry for BPI were performed on homogenates and tissue secretions of biopsy specimens from patients with ulcerative colitis (n=11) and Crohn's disease (n=5) and from normal controls (n=5). Results: Mucosal neutrophil content (144 ± 23 vs. 35 ± 9 neutrophils/mg protein; P<0.007) and BPI content (2.07 ± 0.75 vs. 0.12 ± 0.02 ng/mg protein; P<0.002) were greater in the colitis groups and correlated closely (r=0.68; P<0.001). This relationship held for both ulcerative colitis (P<0.002) and Crohn's disease (P<0.01) with a trend towards greater levels in Crohn's disease. There was a trend towards higher BPI levels with an increasing endoscopic inflammation score (grade I, 1.32 ± 0.6 ng/mg protein; grade II, 2.82 ± 1.4 ng/mg protein). Immunohistochemistry and the biopsy culture showed BPI to be both intracellular and extracellular, to be present in the crypt lumen, and to be released into incubating medium.
Conclusions: Mucosal levels of BPI are increased in colitis. Such localization may ameliorate mucosal responses to gram-negative bacteria and their products. abstract_id: PUBMED:15758620 A polymorphism of the bactericidal/permeability increasing protein (BPI) gene is associated with Crohn's disease. Background: The bactericidal/permeability increasing protein (BPI) is involved in the elimination of gram-negative bacteria. A functionally relevant single nucleotide polymorphism of the BPI gene causes an amino acid exchange (Glu216Lys). Study: To evaluate whether this single nucleotide polymorphism contributes to the predisposition to inflammatory bowel disease, we compared the allele frequencies of 265 patients with Crohn's disease, 207 patients with ulcerative colitis, and 608 healthy controls. Results: The Glu/Glu genotype frequency was decreased significantly in Crohn's disease patients as compared with controls (P < 0.027). No differences were obvious in patients with ulcerative colitis. Conclusions: Failure of the innate intestinal immune system could be involved in the pathogenesis of Crohn's disease via reduced/impaired defense against gram-negative bacteria. abstract_id: PUBMED:21272798 Association between bactericidal/permeability increasing protein (BPI) gene polymorphism (Lys216Glu) and inflammatory bowel disease. Background: Increasing evidence suggests that innate immune system may have a key role in the pathogenesis of the inflammatory bowel disease (IBD). Bactericidal/permeability increasing protein (BPI) has an important role in the recognition and neutralization of gram-negative bacteria by host innate immune system. The polymorphism on BPI gene called Lys216Glu is on the suspected list of IBD pathogenesis. Methods: We studied the Lys216Glu polymorphism on BPI gene, in a Turkish IBD patient population. A total of 238 IBD patients; 116 Crohn's disease (CD) and 122 ulcerative colitis (UC), besides 197 healthy controls were included in this study. Results: The Glu/Glu genotype and allele frequencies were found to be statistically higher compared to healthy control group not only in CD patients [P: 0.03, OR: 1.87 (CI 95% 1.02-3.42) and P: 0.00001 (OR: 2.07 CI 95% 1.47-2.91) respectively] but also in UC patients [P: 0.0002, OR: 2.71 (CI 95% 1.53-4.80) and P: 0.00002 (OR: 2.71 CI 95% 1.53-4.80) respectively]. Conclusions: BPI polymorphism (Lys216Glu) is associated both to CD and UC. Our findings differ from the two Western European studies; one without any association and the other indicating an association only with CD. Our study is the first one reporting a novel association between BPI gene mutation (Lys216Glu) and UC. abstract_id: PUBMED:15626895 Autoantibodies against the bactericidal/permeability-increasing protein from inflammatory bowel disease patients can impair the antibiotic activity of bactericidal/permeability-increasing protein. Bactericidal/permeability-increasing protein (BPI) is an antineutrophil cytoplasmic autoantibody (ANCA) target antigen in inflammatory bowel disease (IBD). The aim of this study was to characterize binding regions of BPI-autoantibodies and to analyze their ability to block the antibiotic effect of BPI. Sera of 24 ulcerative colitis and Crohn's disease patients were examined in indirect immuno-fluorescence, ANCA enzyme-linked immunosorbent assay (ELISA), and by epitope mapping with 13mer peptides and Western blot for presence of BPI-autoantibodies.
IgG preparations were used to determine inhibition of BPI's antimicrobial function by BPI-autoantibodies in a bacterial growth inhibition assay. BPI-autoantibodies were detected by ELISA in 18/24 patients. Epitope mapping and western blotting revealed an additional 3 patients with BPI-autoantibodies. IgG preparations of all patients with Crohn's disease and 9 of 12 ulcerative colitis patients could inhibit the antibiotic function of BPI in vitro as compared with healthy control subjects. Inhibiting BPI-autoantibodies correlated with extraintestinal manifestations, peripheral blood leukocyte counts, and anemia. BPI-autoantibodies recognizing the N-terminal portion were associated with greater mucosal damage and intestinal extent of disease. BPI is a frequent target antigen of autoantibodies in ulcerative colitis and Crohn's disease. Inhibition of the antibiotic function mediated by the N-terminal region of BPI by these autoantibodies may contribute to a proinflammatory environment in IBD patients. Answer: The provided abstracts do not directly address the current trends in the incidence of pediatric inflammatory bowel disease (IBD) or whether it is still increasing. The abstracts focus on the role of bactericidal/permeability-increasing protein (BPI) and its gene polymorphisms in IBD, the presence of BPI-antineutrophil cytoplasmic antibodies (ANCA) in IBD and other diseases, and the potential impact of autoantibodies against BPI on the disease process (PUBMED:9324622, PUBMED:28035462, PUBMED:26228368, PUBMED:27366026, PUBMED:10521980, PUBMED:17317612, PUBMED:8608882, PUBMED:15758620, PUBMED:21272798, PUBMED:15626895). To answer the question about whether pediatric IBD is still increasing, one would need to consult epidemiological studies or reviews that specifically address the incidence and prevalence of IBD in the pediatric population over time. Such studies would typically involve longitudinal data collection and analysis of trends in IBD diagnosis among children and adolescents. The abstracts provided do not contain this information.
Instruction: Does a nanomolecule of Carboplatin injected periocularly help in attaining higher intravitreal concentrations? Abstracts: abstract_id: PUBMED:19628744 Does a nanomolecule of Carboplatin injected periocularly help in attaining higher intravitreal concentrations? Purpose: To compare intravitreal concentration (VC) of commercially available carboplatin (CAC) and the novel nanomolecule carboplatin (NMC), after periocular injection. Methods: The study was a comparative animal study involving 24 white Sprague-Dawley rats, aged between 6 weeks and 3 months. CAC was bound with a nanoparticulate carrier by co-acervation with a biocompatible and biodegradable protein BSA (bovine serum albumin). The particulate size, binding, and structure of the carrier were analyzed with dynamic light-scattering electron microscopy, FTIR (Fourier transform infrared) spectroscopy, and SDS-polyacrylamide gel electrophoresis. Twenty-four white rats were anesthetized. The right eye of each rat was injected with periocular CAC (1 mL) and the left eye with NMC (1 mL) by a trained ophthalmologist. Four rats each were euthanized at days 1, 2, 3, 5, 7, 14, and 21 and both eyes were enucleated. The intravitreal concentrations of commercial carboplatin and nanomolecule carboplatin were determined with HPLC (high-performance liquid chromatography). Data were analyzed with the paired t-test. The main outcome measure was the intravitreal concentrations of CAC and NMC over time. Results: The NMC vitreal concentration was higher than the CAC concentrations in all animals, until day 7 (P = 0.0001). On days 14 and 21, the CAC vitreal concentration was higher than the NMC concentrations in all animals (P = 0.0002). Overall, the mean vitreal concentration of NMC was greater than CAC. Conclusions: Nanoparticulate-bound carboplatin has greater transscleral transport than commercially available carboplatin, especially in the first week after injection and may help enhance the proven adjuvant efficacy of periocular carboplatin over and above systemic chemotherapy in treating human retinoblastoma, especially those with vitreal seeds. This trial is being published to establish a proof of principle for this method of therapy. abstract_id: PUBMED:34250764 Ocular safety of repeated intravitreal injections of Carboplatin and Digoxin: A preclinical study on the healthy rabbits. To evaluate the ocular safety of intravitreal carboplatin and digoxin injections as a new intravitreal chemotherapy option for retinoblastoma tumor vitreous seeds. Eighteen rabbits were divided randomly into three groups to receive intravitreal injection of Digoxin (6 rabbits), Carboplatin (7 rabbits), or Saline (5 rabbits). In every group, one eye was randomly treated with 10 µg Digoxin in 0.1 cc or 1 µg Carboplatin or Saline, and the contralateral eye was considered as the control. All groups underwent three consecutive injections of the drugs with 1-week intervals. Baseline electroretinography (ERG) was recorded from both eyes of all the animals prior to injection and was repeated 1st day, 1st week, and 1st month after the last injection. All rabbits were sacrificed 1 month after the last injection, and histological studies were done. Mean a and b wave amplitudes decreased significantly at 1st day, 1st week, and 1st month after the last intravitreal injection of 10 µg Digoxin in comparison with other groups (p-value: .02). Contradictory, 1 µg Carboplatin injected eyes had minimal ERG changes.
There were some nonspecific ERG changes with unclear clinical significance in non-injected contralateral control eyes of Digoxin and Carboplatin groups in comparison with the control eyes of the Saline group. Histological studies revealed considerable neural retinal atrophy in injected eyes of the Digoxin group. Intravitreal 10 µg Digoxin might have more local ocular toxicity in comparison with intravitreal Carboplatin in albino rabbit eyes. Future studies should assess the induced toxicity of intravitreal injection of these drugs on the non-injected contralateral eye. abstract_id: PUBMED:20306443 Intravitreal carboplatin concentration and area under concentration versus time curve after intravitreal and periocular delivery. Purpose: To determine platinum (Pt) concentrations and area under the concentration versus time curve (AUC) of the vitreous humor after periocular or transcorneal intravitreal administration of carboplatin in rabbits. Methods: Eighteen albino rabbits were included in an in vivo experiment. Each animal received a single dose of either 30 mg of carboplatin by periocular injection (POI30 group: n = 6) or 15 mg by periocular injection (PI15 group: n = 6), or 0.05 mg by transcorneal intravitreal injection (TII group: n = 6), respectively, into the right eye. Vitreous humor from the right eyes and plasma samples were collected post dose at 1, 2, 6, 24, 48, 168, and 336 hours or 448 hours, respectively. Flameless atomic absorption spectroscopy was employed to analyze total platinum concentrations in blood and vitreous humor. AUC was calculated using the trapezoidal rule. Results: Pt concentration was mostly < 1 mg/L (0-3.15 mg/L) in the vitreous humor samples and ≥ 2 mg/L (2.33-7.3 mg/L) in the blood samples 1 hour after administration in POI groups. Markedly higher Pt concentrations were found 1 hour after intravitreal (TII) administration (10.285-66.759 mg/L) and decreased below 1 mg/L no less than 168 hours after administration. The mean AUC for Pt in vitreous humor was significantly lower (p = 0.0001) after both POI30 and PI15 administration compared to TII route (8.955 ± 2.464 mg/L/min). Conclusions: These findings proved that intravitreal carboplatin delivery enables the achievement of relatively stable concentrations and AUC of platinum in the rabbit vitreous humor. This moreover suggests that transcorneal intravitreal delivery of carboplatin aiming to treat retinoblastoma vitreous seeding is a promising mode of chemotherapy. abstract_id: PUBMED:28646513 Long-term outcomes of Group D retinoblastoma eyes during the intravitreal melphalan era. Background: To evaluate outcomes of Group D retinoblastoma (Rb) eyes during the intravitreal melphalan era. Procedure: Retrospective chart review of patients diagnosed with Group D Rb from 2011 to 2016 was done. Overall, 76 Group D eyes of 68 patients were included; salvage therapy included systemic chemoreduction with vincristine, etoposide, and carboplatin with local consolidation, followed by intravitreal injection of melphalan for recurrent or persistent seeding. External beam radiation was not used as a treatment modality. Primary outcome measurement was globe salvage. Results: Of 76 Group D eyes, 24 were enucleated primarily and 52 were treated with intent to salvage the globe. Systemic chemoreduction salvaged 25 of 52 eyes (48%). Tumor recurrences were diagnosed in 27 eyes (52%); five with massive retinal recurrences underwent enucleation and 22 were treated with intravitreal melphalan injection.
Of the 22 injected eyes, 14 (64%) were salvaged and eight required enucleation primarily for retinal recurrences. Success in eradicating vitreous seeds was 100%. The Kaplan-Meier 3-year survival estimate for treated eyes is 76.5% (95% CI: 61.4-86.3). Median follow-up for the group of 76 Group D eyes was 29.5 months (SD 17.9 months). Conclusion: During a 6-year period that included the initiation of intravitreal melphalan at our institution, the salvage rate of treated Group D eyes was 75% (39/52 eyes). Intravitreal melphalan was utilized for ocular salvage in 42% (22/52 eyes). Systemic chemoreduction combined with intravitreal melphalan for seeding demonstrated a high overall salvage rate for Group D eyes in this cohort. abstract_id: PUBMED:22368261 Combined intravitreal and subconjunctival carboplatin for retinoblastoma with vitreous seeds. Background: To describe the technique of intravitreal chemotherapy preceded by subconjunctival chemotherapy for the treatment of vitreous seeds in advanced stage retinoblastoma. Methods: This non-comparative interventional case series retrospectively reviewed the medical records and postenucleation histopathological findings of two patients who presented within weeks of each other with bilateral retinoblastoma, Reese-Ellsworth (R-E) stage Vb in the worse eye. Both patients had failed systemic chemotherapy prior to receiving a single treatment of 0.5 ml (5 mg per 0.5 ml) of subconjunctival carboplatin, through which 0.05 ml (3 mcg per 0.05 ml) of carboplatin was injected into the vitreous (Case 2 received 0.1 ml of intravitreal carboplatin). The subconjunctival chemotherapy was given to reduce the risk of orbital tumour seeding following intravitreal injection. Following enucleation, ocular toxicity and the presence or absence of viable tumour cells at the intravitreal injection site were recorded. Results: Histopathological examination did not reveal patency of the pars plana intravitreal penetration site in either case at 6 weeks post-treatment nor was malignant seeding detected in the area of injection. Examination of the two enucleated eyes did not demonstrate structural toxicity to the cornea, anterior segment, iris or retina. Additionally, both cases were followed for over 37 months post-treatment, without the occurrence of orbital malignancy. Conclusions: Injecting a bleb of subconjunctival chemotherapy prior to intravitreal drug delivery appeared to mitigate the risk of orbital tumour seeding in two patients with advanced stage retinoblastoma. Incorporating this technique may allow further investigation of intravitreal chemotherapy for the treatment of vitreous seeds in retinoblastoma. abstract_id: PUBMED:29600179 Safety and efficacy of posterior sub-Tenon's carboplatin injection versus intravitreal melphalan therapy in the management of retinoblastoma with secondary vitreous seeds. Aim: To evaluate the safety and efficacy of posterior sub-Tenon's carboplatin injection compared to intravitreal melphalan injection in the management of retinoblastoma (RB) with secondary vitreous seeds. The outcome measures were vitreous seeds regression, need for other treatment modalities to achieve ocular salvage and treatment side effects. Methods: A prospective interventional comparative nonrandomized study included RB eyes developed secondary vitreous seeds during the period of follow up. 
They subdivided into two groups: study group I where posterior sub-Tenon's carboplatin (20 mg/2 mL) was injected and study group II where intravitreal melphalan (20 µg/0.1 mL) was injected. The injections were repeated every 2-4 wk. Results: Thirty-three eyes were included in the study. Seventeen eyes (16 patients) in study group I and 16 eyes (16 patients) in study group II. Ten eyes (30.3%) were completely salvaged following local chemotherapies. Ocular salvage was 23.5% following posterior sub-Tenon's carboplatin injection versus 37.5% following intravitreal melphalan, raised to 47.1% and 75% with addition of external beam radiotherapy (EBR), with no statistically significant difference between the study groups (P=0.16). A statistically significant correlation was found between ocular salvage rate and type of vitreous seeds either dust, spheres and clouds (r=0.42, P=0.015) and eyes harboring new solid tumor growth (r=0.35, P=0.045). The mean and median follow up periods following local chemotherapy injections were 2.0y in the study group I and 2.37y in the study group II. Few complications were reported: periorbital edema in all eyes and ocular motility disturbances in 13 eyes (76.5%) following posterior sub-Tenon's carboplatin injection. Vitreous hemorrhage developed in 2 eyes (12.5%) and localized retinopathy in 5 eyes (31.25%) following intravitreal melphalan. Conclusion: Local chemotherapy for treatment of RB with secondary vitreous seeds is safe and can salvage 30.3% of eyes without EBR. There is a superiority of intravitreal melphalan in ocular salvage however, no statistically significant difference between both groups. abstract_id: PUBMED:23235724 Retinal toxicity after repeated intravitreal carboplatin injection into rabbit eyes. Background: The aim of this study was to assess retinal toxicity in a rabbit model after carboplatin delivered as repeated transcorneal intravitreal injection, in order to determine the highest possible safe dose for use in human retinoblastoma "seeding" tumor chemotherapy. Methods And Results: We used six albino rabbits in an in vivo experiment and injected 0.008 mg of carboplatin intravitreally (iv) 4 times at two-week intervals. 0.08 mL saline was injected into the left eye. We recorded electroretinograms (ERGs) before the first carboplatin injection and after the fourth injection. Platinum concentration was measured 1 h after the fifth additional injection. We found reduced dark-adapted b-wave amplitudes and light-adapted b-wave and a-wave amplitudes. The differences between right and left eyes were significant but we found no histopathologic retinal changes. Conclusions: 0.008 mg of carboplatin is probably the highest possible safe dose for the treatment of retinoblastoma patients. Questionable is direct extrapolation of retinal toxicity from the rabbit eye model to the human eye. abstract_id: PUBMED:25908004 Efficacy of intravitreal carboplatin plus bevacizumab in refractory retinoblastoma Objectives: To evaluate the efficacy of intravitreal carboplatin plus bevacizumab in refractory retinoblastoma. Methods: Prospective study. Eleven patients (11 eyes) with the diagnosis of refractory retinoblastoma were enrolled in Department of Ophthalmology of Peking University People's Hospital from June 2013 to March 2014. They underwent intravitreal carboplatin plus bevacizumab every 4 weeks, an average of 4.5 times of treatment. Observe for 3 months after the last treatment.
Aqueous humor was taken for cytological and VEGF detection, and retinal fundus photographs were taken for observation. Statistical analyses between experimental group and control group and before and after intravitreal injection within experimental group were performed with independent samples t test. Results: Tumor in vitreous cavity reduced significantly in seven patients, however, poor control in four cases, and three of them were recurrent after first-line treatment. Cytology detection for aqueous humor showed no tumor cells in all of them. Aqueous VEGF of patients with retinoblastoma (60.65 ± 6.20) was significantly higher than the control group (21.98 ± 6.91). The difference was statistically significant (t = 13.80, P < 0.01). And the aqueous VEGF content decreased significantly after treatment (t = 2.12, P < 0.05). Conclusion: Intravitreal carboplatin plus bevacizumab is a relatively safe, effective treatment for refractory retinoblastoma, however, ineffective for recurrent tumor. abstract_id: PUBMED:28851196 Short-term efficacy of intravitreal injection of melphalan for refractory vitreous seeding from retinoblastoma Objective: To evaluate the efficacy and safety of intravitreal chemotherapy for refractory vitreous seeding from retinoblastoma. Methods: Retrospective series of case studies. Nine patients (13 eyes) with the diagnosis of refractory vitreous seeding were enrolled in Department of Ophthalmology of Eye & ENT Hospital of Fudan University from March 2014 to October 2015. There were 6 males and 3 females. Children aged 8 to 40 months, median age of 18 months. In the 13 eyes, 3 eyes were E period, 9 eyes were D period, and 1 eye was C period. The fundus was examined by indirect ophthalmoscope and recorded by RetcamIII. Systemic chemotherapy was performed using the VEC protocol, that is vincristine, etoposide, and carboplatin. Local treatment also involves cryotherapy and/or thermotherapy. All patients were treated with intravitreal injection of melphalan. They underwent intravitreal melphalan, once every 4 weeks, with an average of 3 times of injections. The treatment dose of melphalan is 20 to 40 μg per dose. Observe the vitreous seed control and complications of therapy. Results: Vitreous seeds control was attained in all cases. There was no case of orbital extension or remote metastasis. Complications included retinal pigment epithelial and choroidal atrophy in 7 eyes, pupillary synechia and iris atrophy in 2 eyes, retinal vasculitis and vascular occlusion in 2 eyes, optic atrophy in 2 eyes, vitreous hemorrhage in 1 eye, and temporary hypotony in 3 eyes. Conclusions: Intravitreal melphalan is an effective treatment for refractory vitreous seeding from retinoblastoma. High dose may lead to local adverse reactions. (Chin J Ophthalmol, 2017, 53: 570-574). abstract_id: PUBMED:31872542 Low retinal toxicity of intravitreal carboplatin associated with good retinal tumour control in transgenic murine retinoblastoma. Background: Retinoblastoma is a rare intraocular malignancy in children. Current treatments have many adverse effects. New therapeutic approaches like intravitreal injections of chemotherapies are currently being developed but their toxicities need to be evaluated on animal models. This study compares the efficacy and toxicity of intravitreal melphalan, topotecan and carboplatin, alone or in combination (sequential administration), in the LHBetaTag retinoblastoma mice.
Methods: Mice were divided into nine groups: control, carboplatin 1.5 and 4 μg, melphalan 0.1 and 1 μg, topotecan 0.1 and 1 μg, carboplatin 4 μg/topotecan 0.1 μg and melphalan 1 μg/topotecan 0.1 μg. The follow-up was performed using fundus imaging and optical coherence tomography combined with histopathological analysis. Absence of tumour and presence of calcified tumours were the criteria for therapeutic response assessment. Ocular complications were assessed after four weekly injections. Retinal toxicity was defined by the decrease of retinal thickness and of the number of retinal layers. Results: Topotecan was inactive on retinal tumours. Melphalan (1 μg) led to a complete tumour control in 91.7% of eyes. Carboplatin strongly decreased the tumour burden (85.7-93.8% of eyes without retinal tumour). The intravitreal injection itself led to ocular complications (25% of media opacities and 45.7% of retinal detachment). Only melphalan at 1 μg showed a strong retinal toxicity. The two combinations showed a good efficacy in reducing the number of eyes with retinal tumours with a reduced retinal toxicity. Conclusions: This preclinical study suggests that intravitreal injection of carboplatin has a low toxicity and could be evaluated in clinical practice to treat patients suffering from retinoblastoma. Answer: Yes, a nanomolecule of Carboplatin injected periocularly does help in attaining higher intravitreal concentrations. A study comparing intravitreal concentration (VC) of commercially available carboplatin (CAC) and the novel nanomolecule carboplatin (NMC) after periocular injection in rats found that the NMC vitreal concentration was higher than the CAC concentrations in all animals until day 7 (P = 0.0001). On days 14 and 21, the CAC vitreal concentration was higher than the NMC concentrations in all animals (P = 0.0002). Overall, the mean vitreal concentration of NMC was greater than CAC, indicating that nanoparticulate-bound carboplatin has greater transscleral transport than commercially available carboplatin, especially in the first week after injection (PUBMED:19628744).
Instruction: Treatment duration of extra-pulmonary tuberculosis: 6 months or more? Abstracts: abstract_id: PUBMED:25999175 Duration of treatment in pulmonary tuberculosis: are international guidelines on the management of tuberculosis missing something? Background: Despite evidence of an association between tuberculosis (TB) treatment outcomes and the performance of national tuberculosis programmes (NTP), no study to date has rigorously documented the duration of treatment among TB patients. As such, this study was conducted to report the durations of the intensive and continuation phases of TB treatment and their predictors among new smear-positive pulmonary tuberculosis (PTB) patients in Malaysia. Study Design: Descriptive, non-experimental, follow-up cohort study. Methods: This study was conducted at the Chest Clinic of Penang General Hospital between March 2010 and February 2011. The medical records and TB notification forms of all new smear-positive PTB patients, diagnosed during the study period, were reviewed to obtain sociodemographic and clinical data. Based on standard guidelines, the normal benchmarks for the durations of the intensive and continuation phases of PTB treatment were taken as two and four months, respectively. A patient in whom the clinicians decided to extend the intensive phase of treatment by ≥2 weeks was categorized as a case with a prolonged intensive phase. The same criterion applied for the continuation phase. Multiple logistic regression analysis was performed to find independent factors associated with the duration of TB treatment. Data were analyzed using Predictive Analysis Software Version 19.0. Results: Of the 336 patients included in this study, 261 completed the intensive phase of treatment, and 226 completed the continuation phase of treatment. The mean duration of TB treatment (n = 226) was 8.19 (standard deviation 1.65) months. Half (49.4%, 129/261) of the patients completed the intensive phase of treatment in two months, whereas only 37.6% (85/226) of the patients completed the continuation phase of treatment in four months. On multiple logistic regression analysis, being a smoker, being underweight and having a history of cough for ≥4 weeks at TB diagnosis were found to be predictive of a prolonged intensive phase of treatment. Diabetes mellitus and the presence of lung cavities at the start of treatment were the only predictors found for a prolonged continuation phase of treatment. Conclusions: The average durations of the intensive and continuation phases of treatment among PTB patients were longer than the targets recommended by the World Health Organization. As there are no internationally agreed criteria, it was not possible to judge how well the Malaysian NTP performed in terms of managing treatment duration among PTB patients. abstract_id: PUBMED:30602978 Male gender and duration of anti-tuberculosis treatment are associated with hypocholesterolemia in adult pulmonary tuberculosis patients in Kampala, Uganda. Background: Patients with Pulmonary tuberculosis (PTB) and hypocholesterolemia have an altered immune function, delayed sputum conversion at two months and increased mortality. However, the assessment for dyslipidemias is not often done in our setting. Methods: A cross-sectional study was conducted among adults at an urban TB clinic in Kampala, Uganda. We included different participants at diagnosis (0), 2, 5, 6 and 8 months of anti-TB treatment. 
Data was collected from a complete physical examination, a pre-tested structured questionnaire, six-hour fasting lipid profiles and random blood glucose levels. Results: Of the 323 included participants, 63.5% (205/323) were males and the median age was 30 years, IQR (23-39). The prevalence of hypocholesterolemia was 43.65% (95% CI 38.3-49.2). The participants at diagnosis had the highest hypocholesterolemia prevalence, 57.3%, 95% CI (46.7-67.2); and lowest amongst those completing treatment at 6/8 months, 32.2%, 95% CI (21.6-45.2). Significant factors associated with hypocholesterolemia were: male gender (PR 1.52, 95% CI: 1.13-2.03), and duration of anti-TB treatment (0.88, 95% CI: 0.80-0.98). Conclusion: Hypocholesterolemia is common among patients with PTB. The risk of hypocholesterolemia increases with being male and reduces with increased duration of treatment. There is a need for further research in lipid abnormalities in TB patients. abstract_id: PUBMED:22703726 Treatment duration of extra-pulmonary tuberculosis: 6 months or more? TB-INFO database analysis Purpose: The recommended duration of pulmonary tuberculosis therapy is 6 months. For extrapulmonary tuberculosis, treatment duration depends on tuberculosis involvement and HIV status. The objective of this study was to describe the main characteristics of a cohort of extrapulmonary tuberculosis patients, to compare patients with a 6-month treatment to those with more than a 6-month treatment, and to analyze the compliance of medical centres with recommended duration of treatment. Methods: A retrospective cohort study of 210 patients with extrapulmonary tuberculosis was carried out from January 1999 to December 2006 in two hospitals in the north-east of Paris. These patients were treated with quadruple therapy during two months, followed by dual therapy during 4 months (n=77) or more (n=66). The characteristics of each group were compared by uni- and multivariate analysis. The primary endpoint was the rate of relapse or treatment failure at 24-month follow-up after treatment completion. Results: No relapse was observed after 24 months of follow-up after the end of treatment in the two groups. In univariate analysis, patients with lymph node tuberculosis were more often treated for 6 months than at other sites of tuberculosis (respectively 61% versus 40.9%; P=0.02); the decision of treatment duration was related to medical practices (79.2% treated 6 months in one hospital versus 20.7% in the other, P<0.001); patients living in private residence were more often treated during 6 months than patients living in residence (24.2% versus 10.3%, P=0.042). In multivariate analysis, only hospital (P=0.046), sex (P=0.007) and private residence were significantly different in each group. Conclusion: A period of 6 months seems to be sufficient to treat extrapulmonary tuberculosis (except for neuromeningeal localization). abstract_id: PUBMED:34852880 Treatment of tuberculous meningitis in adults: Is the duration of intensive-phase therapy adequate? Tuberculous meningitis (TBM) results in considerable morbidity and mortality, especially in developing countries such as South Africa. Treatment regimens have been extrapolated from treatment for pulmonary tuberculosis, and the intensive-phase duration of 2 months may be inadequate for treatment of patients with TBM. We highlight this situation with a case report of a patient with TBM whose illness progressed after institution of the maintenance phase of treatment.
We propose that the intensive-phase treatment of TBM be revisited with regard to duration of treatment, choice of drugs during continuation-phase therapy, or both. abstract_id: PUBMED:25255302 Optimal duration of anti-TB treatment in patients with diabetes: nine or six months? Background: Diabetes mellitus (DM) increases the risk of TB recurrence. This study investigated whether 9-month anti-TB treatment is associated with a lower risk of TB recurrence within 2 years after complete treatment than 6-month treatment in patients with DM with an emphasis on the impact of directly observed therapy, short course (DOTs). Methods: Patients with pulmonary but not extrapulmonary TB receiving treatment of 173 to 277 days between 2002 and 2010 were identified from the National Health Insurance Research Database of Taiwan. Patients with DM were then selected and classified into two groups based on anti-TB treatment duration (9 months vs 6 months). Factors predicting 2-year TB recurrence were explored using Cox regression analysis. Results: Among 12,688 patients with DM and 43,195 patients without DM, the 2-year TB recurrence rate was 2.20% and 1.38%, respectively (P < .001). Of the patients with DM, recurrence rate decreased from 3.54% to 1.19% after implementation of DOTs (P < .001). A total of 4,506 (35.5%) were classified into 9-month anti-TB treatment group. Although a 9-month anti-TB treatment was associated with a lower recurrence rate (hazard ratio, 0.76 [95% CI, 0.59-0.97]), the benefit disappeared (hazard ratio, 0.69 [95% CI, 0.43-1.11]) under DOTs. Other predictors of recurrence included older age, male sex, malignancy, earlier TB diagnosis year, culture positivity after 2 months of anti-TB treatment, and anti-TB treatment being ≤ 80% consistent with standard regimen. Conclusions: The 2-year TB recurrence rate is higher in a diabetic population in Taiwan and can be reduced by treatment supervision. Extending the anti-TB treatment by 3 months may also decrease the recurrence rate when treatment is not supervised. abstract_id: PUBMED:23940699 Month 2 culture status and treatment duration as predictors of tuberculosis relapse risk in a meta-regression model. Background: New drugs and regimens with the potential to transform tuberculosis treatment are presently in early stage clinical trials. Objective: The goal of the present study was to infer the required duration of these treatments. Method: A meta-regression model was developed to predict relapse risk using treatment duration and month 2 sputum culture positive rate as predictors, based on published historical data from 24 studies describing 58 regimens in 7793 patients. Regimens in which rifampin was administered for the first 2 months but not subsequently were excluded. The model treated study as a random effect. Results: The model predicted that new regimens of 4 or 5 months duration with rates of culture positivity after 2 months of 1% or 3%, would yield relapse rates of 4.0% or 4.1%, respectively. In both cases, the upper limit of the 2-sided 80% prediction interval for relapse for a hypothetical trial with 680 subjects per arm was <10%. Analysis using this model of published month 2 data for moxifloxacin-containing regimens indicated they would result in relapse rates similar to standard therapy only if administered for ≥5 months. Conclusions: This model is proposed to inform the required duration of treatment of new TB regimens, potentially hastening their accelerated approval by several years.
abstract_id: PUBMED:8984355 Shortening of therapy duration in patients with pulmonary tuberculosis from 9 to 6 months only justifiable on the basis of published data Objective: To determine if in the Netherlands, just like in other countries, the treatment of pulmonary tuberculosis with adequately sensitive tubercle bacilli may be shortened from 9 to 6 months. Design: Literature study. Setting: Municipal Health Service, Nijmegen, the Netherlands. Method: The relevant literature was analysed, using the percentage of recurrences as the criterion. The study was restricted to patients with pulmonary tuberculosis in whom the diagnosis had been confirmed bacteriologically and in whom a human, normally sensitive tubercle bacillus had been isolated. The treatment schedule had to include at least isoniazid, rifampicin and pyrazinamide. There were no studies with treatment of 9 months' duration. The studies with 6 months' treatment were selected on the basis of the predetermined criteria from among articles included in Medline in 1980-1991. Results: The treatment schedules of 6 months' duration (n = 44) from 25 articles were suitable for analysis. Treatment for 6 months resulted in a proportion of recurrences of tuberculosis of 2.4% (95%-confidence interval: 2.0-2.8), with follow-up periods of 12 to 94 months after discontinuation of the treatment. Addition of streptomycin or ethambutol during the initial phase, self-medication or controlled treatment, daily or intermittent treatment made no difference as regards the ultimate results. No comparison with the proportion of recurrences of 1% (0.2-2.9) after 9 months' treatment without pyrazinamide was possible. A recent calculation of the number of Dutch nationals with recurrent tuberculosis resulted in a proportion of recurrences of 2.5 (1.8-3.2). The guideline adopted was that mentioned by the American Thoracic Society, a proportion of recurrences of < 5%. Conclusion: On the basis of the known percentages of recurrence, it could be decided in the Netherlands as well to shorten the duration of treatment from 9 to 6 months. abstract_id: PUBMED:11104408 Delay in Treatment of Pulmonary Tuberculosis: An Analysis of Symptom Duration Among Ethiopian Patients. Despite the heavy burden of tuberculosis in Ethiopia, little is known about the length of time taken by the patient to seek medical care. We therefore assessed the duration of symptoms before treatment starts in patients with pulmonary tuberculosis. We studied 198 patients (134 men and 66 women) from Yirga Alem, Ethiopia, who were consecutively treated for newly diagnosed pulmonary tuberculosis. Tuberculosis was considered proven when a Ziehl-Neelsen stain of sputum showed acid-fast bacilli. The mean duration was 5.9 months, with a median (range) duration of illness for all patients of 4 months (0.5-36 months). Seventy-five percent of the patients had a duration of illness of more than 2 months, and in 25% of the patients, the illness lasted more than 8 months. Patients with severe disease had a longer duration. Patients with a long duration of symptoms had a greater number of bacilli on direct microscopy of their sputum, suggesting a higher degree of infectivity. Married patients, persons with no formal education, and people living in rural areas had long illness duration. Also, patients with occupations such as farmers, housewives, soldiers, and houseworkers had increased risk compared with students. In south Ethiopia, patients with pulmonary tuberculosis present late to treatment.
For some patients, the long pretreatment duration may have had consequences for the severity of the disease and for poor treatment results. Interventions that aim at earlier case detection may therefore be appropriate. abstract_id: PUBMED:34346856 Precision-Enhancing Risk Stratification Tools for Selecting Optimal Treatment Durations in Tuberculosis Clinical Trials. Rationale: No evidence-based tools exist to enhance precision in the selection of patient-specific optimal treatment durations to study in tuberculosis clinical trials. Objectives: To develop risk stratification tools that assign patients with tuberculosis into risk groups of unfavorable outcome and inform selection of optimal treatment duration for each patient strata to study in clinical trials. Methods: Publicly available data from four phase 3 trials, each evaluating treatment duration shortening from 6 to 4 months, were used to develop parametric time-to-event models that describe unfavorable outcomes. Regimen, baseline, and on-treatment characteristics were evaluated as predictors of outcomes. Exact regression coefficients of predictors were used to assign risk groups and predict optimal treatment durations. Measurements and Main Results: The parametric model had an area under the receiver operating characteristic curve of 0.72. A six-item risk score (HIV status, smear grade, sex, cavitary disease status, body mass index, and Month 2 culture status) successfully grouped participants into low (1,060/3,791; 28%), moderate (1,740/3,791; 46%), and high (991/3,791; 26%) risk, requiring treatment durations of 4, 6, and greater than 6 months, respectively, to reach a target cure rate of 93% when receiving standard-dose rifamycin-containing regimens. With current one-duration-fits-all approaches, high-risk groups have a 3.7-fold (95% confidence interval, 2.7-5.1) and 2.4-fold (1.9-2.9) higher hazard risk of unfavorable outcomes compared with low- and moderate-risk groups, respectively. Four-month regimens were noninferior to the standard 6-month regimen in the low-risk group. Conclusions: Our model discrimination was modest but consistent with current models of unfavorable outcomes. Our results showed that stratified medicine approaches are feasible and may achieve high cure rates in all patients with tuberculosis. An interactive risk stratification tool is provided to facilitate decision-making in the regimen development pathway. abstract_id: PUBMED:11858081 Results of surgical intervention depending on duration of preoperative treatment of pulmonary tuberculosis patients The case histories of 428 patients operated on for tuberculosis were analyzed. Three groups were identified. They were as follows: 1) 121 patients untreated with bactericidal drugs before surgery; 2) 247 patients treated less than 6 months before it; 3) 160 patients treated more than 6 months before surgery. Various complications due to resection of the lung were observed in 30 (7%) patients undergone surgery. They were 6.6, 6.8, and 7.5% in Groups 1, 2, and 3, respectively. A clinical effect was achieved in 99.8% of cases. The late outcomes of surgical intervention were studied within 1 to 10 years in 354 patients, including 102, 119, and 133 patients in Groups 1, 2, and 3, respectively. Progressive and recurrent tuberculosis was revealed in 7 (6.8%), 11 (9.2%), and 18 (13.5%), respectively. 
Thus, the immediate and late outcomes of surgical treatment in patients with tuberculomas who were untreated with bactericidal drugs before surgery were no worse than in those who had received long-term therapy; operating without prolonged preoperative treatment substantially reduces the overall duration of therapy, which is a most important task of modern phthisiology. So patients should be operated on when they are found to have pulmonary tuberculomas without signs of a progressive tuberculous process. Answer: The treatment duration for extrapulmonary tuberculosis (EPTB) can vary depending on the site of infection and the patient's HIV status. A retrospective cohort study of 210 patients with EPTB carried out from January 1999 to December 2006 in two hospitals in the north-east of Paris found that a treatment period of 6 months was sufficient for most forms of EPTB, except for neuromeningeal localization (PUBMED:22703726). The study compared patients treated with quadruple therapy for two months followed by dual therapy for either 4 months or more and found no relapse after 24 months of follow-up in both groups. The decision on treatment duration was influenced by factors such as the site of tuberculosis, medical practices, and patient living conditions. The study concluded that 6 months of treatment seems to be adequate for treating EPTB, with the exception of neuromeningeal tuberculosis. However, it is important to note that the treatment of tuberculous meningitis (TBM), a form of EPTB, may require a different approach. A case report highlighted that the intensive-phase duration of 2 months, which is extrapolated from pulmonary tuberculosis treatment, may be inadequate for TBM. The report suggested that the intensive-phase treatment duration for TBM should be revisited, considering both the duration and the choice of drugs during the continuation phase (PUBMED:34852880). In summary, while a 6-month treatment duration is generally considered sufficient for most forms of EPTB, exceptions such as neuromeningeal tuberculosis may require a longer or more intensive treatment regimen. It is essential to consider individual patient factors and the specific type of EPTB when determining the optimal treatment duration.
Instruction: Head injuries in children: can clinical features identify patients with high or low risk for intracranial injury? Abstracts: abstract_id: PUBMED:9577018 Head injuries in children: can clinical features identify patients with high or low risk for intracranial injury? Objective: The objective of this study was to assess the clinical features that might reliably identify the presence of an intracranial injury. Patients And Methods: A prospective study of 1,128 children with head injury over a one year period was carried out. Information regarding each patient was documented, including demographic data, physical examination findings, neurologic status, diagnostic studies and the patient's outcome. Results: Of the 1,128 patients, traumatic intracranial abnormalities identified on CT of the head were found in 11 (1%). Four patients of this group (36%) required surgery. Two children subsequently died. Loss of consciousness, amnesia, Glasgow Coma Scale less than 15 and focal neurological deficits were significantly more common in the group with intracranial injury. The negative predictive values were high for all features. Conclusions: Patients with symptoms of head injury should undergo head CT because a small number will require surgery. After a minor head trauma, children who are neurologically normal and without symptoms may be discharged from the emergency department and sent home after careful physical examination alone. abstract_id: PUBMED:25440860 Intracranial bleeds after minor and minimal head injury in patients on warfarin. Background: There is little evidence to guide physicians on management of patients who sustain head injuries while on warfarin. Objectives: Our objective was to determine the rate of intracranial bleeding in anticoagulated patients with minor and minimal head injuries and the association with clinical features and international normalized ratio (INR). Methods: We conducted a historical cohort study of adult patients, taking warfarin, at two tertiary care emergency departments over 2 years with minor (Glasgow Coma Score 13-15, with loss of consciousness, amnesia, or confusion) or minimal (Glasgow Coma Score 15 without loss of consciousness, amnesia, or confusion) head injuries. Patients with penetrating injuries, INR < 1.5, or a new focal neurological deficit were excluded. Our outcome, intracranial bleeding, was determined by the radiologist's final computed tomography (CT) report for imaging performed within 2 weeks. Results: There were 176 patients enrolled, of which 157 (89.2%) had CT and 28 (15.9%) had intracranial bleeding. Comparing patients with and without intracranial bleeding found no significant differences in INR, and loss of consciousness was associated with higher rate of intracranial bleeding. The rate of intracranial bleeding in the minor and minimal head injury groups was 21.9% and 4.8%, respectively. Conclusions: The rate of intracranial bleeding in patients on warfarin is considerable. Loss of consciousness is associated with high rates of intracranial bleeding. This study supports a low threshold for ordering CT scans for anticoagulated patients with head injuries. abstract_id: PUBMED:27473443 Risk of Delayed Intracranial Hemorrhage in Anticoagulated Patients with Mild Traumatic Brain Injury: Systematic Review and Meta-Analysis. Background: Delayed intracranial hemorrhage is a potential complication of head trauma in anticoagulated patients.
Objective: Our aim was to use a systematic review and meta-analysis to determine the risk of delayed intracranial hemorrhage 24 h after head trauma in patients who have a normal initial brain computed tomography (CT) scan but took vitamin K antagonist before injury. Methods: EMBASE, Medline, and Cochrane Library were searched using controlled vocabulary and keywords. Retrospective and prospective observational studies were included. Outcomes included positive CT scan 24 h post-trauma, need for surgical intervention, or death. Pooled risk was estimated with logit proportion in a random effect model with 95% confidence intervals (CIs). Results: Seven publications were identified encompassing 1,594 patients that were rescanned after a normal first head scan. For these patients, the pooled estimate of the incidence of intracranial hemorrhage on the second CT scan 24 h later was 0.60% (95% CI 0-1.2%) and the resulting risk of neurosurgical intervention or death was 0.13% (95% CI 0.02-0.45%). Conclusions: The present study is the first published meta-analysis estimating the risk of delayed intracranial hemorrhage 24 h after head trauma in patients anticoagulated with vitamin K antagonist and normal initial CT scan. In most situations, a repeat CT scan in the emergency department 24 h later is not necessary if the first CT scan is normal. Special care may be required for patients with serious mechanism of injury, patients showing signs of neurologic deterioration, and patients presenting with excessive anticoagulation or receiving antiplatelet co-medication. abstract_id: PUBMED:12034395 Identifying patients at risk for intracranial and extracranial blunt carotid injuries. Background: Blunt carotid injuries are rare, often occult, and potentially devastating. Angiographic screening programs have detected this injury in up to 1% of blunt trauma patients. Implementing a liberal angiographic screening program at our hospital is impractical and we want to identify a high-risk group to target for screening. We hypothesize that intracranial and extracranial carotid injuries have different risks, presentations, and outcomes. Methods: Patients with intracranial and extracranial carotid injuries were identified from the British Columbia trauma registry. Presentation and outcome were reviewed. To facilitate statistical modeling the analysis was done by matching cases to 5 randomly selected controls. Risk factors for injury were evaluated by univariate and multiple logistic regression. Results: A total of 35 carotid injuries were identified. Thirteen intracranial injuries were identified in 10 patients. Twenty-two extracranial injuries were identified in 18 patients. Sixty-seven percent of patients with intracranial injuries and 31% of those with extracranial injuries died (P = 0.11). Eleven percent of intracranial injuries and 56% of extracranial injuries were occult (P = 0.04). Glasgow outcome scores were 2.04 intracranial and 3.12 extracranial (P = 0.18). For intracranial injuries the multiple variable predictive model had two predictors: Glasgow Coma Score ≤8 and facial fractures. For extracranial the predictors were GCS ≤8 and thoracic injury (Abbreviated Injury Score ≥3). Conclusions: Intracranial injuries were frequently detected on initial investigations and have very poor outcomes. Extracranial injuries were more frequently occult and stand to benefit from early detection by screening programs.
As independent risk factors for these two injuries differ, limited screening resources should focus on risk factors for occult extracranial injury: namely, low GCS and significant thoracic injury. abstract_id: PUBMED:24119451 Risk of intracranial injury after minor head trauma in patients with pre-injury use of clopidogrel. Background: Clopidogrel is an adenosine diphosphate receptor antagonist. The risk of intracranial hemorrhage following minor head trauma in patients with pre-injury use of clopidogrel has not been fully determined. Methods: This case-controlled study examined the effects of pre-injury use of clopidogrel in adult (age 14 years and older) patients with minor head trauma. Results: During the study period, 1660 patients' head computed tomography scans were performed in the emergency department, of which 658 met inclusion criteria. Intracranial hemorrhage was noted in 30% of patients on clopidogrel, compared with 2.2% of those patients without pre-injury use of clopidogrel. After performing a logistic regression analysis for confounders, the pre-injury use of clopidogrel was significantly associated with intracranial hemorrhage in this study population (OR 16.7; 95% CI 1.71-162.7). Conclusion: The use of clopidogrel is associated with a significantly increased risk of developing intracranial hemorrhage following minor trauma. abstract_id: PUBMED:24929771 Characteristics of elderly fall patients with baseline mental status: high-risk features for intracranial injury. Background: Falls are a major cause of morbidity in the elderly. Objectives: We describe the low-acuity elderly fall population and study which historical and clinical features predict traumatic intracranial injuries (ICIs). Methods: This is a prospective observational study of patients at least 65 years old presenting with a fall to a tertiary care facility. Patients were eligible if they were at baseline mental status and were not triaged to the trauma bay. At presentation, a data form was completed by treating physicians regarding mechanism and position of fall, history of head strike, headache, loss of consciousness (LOC), and signs of head trauma. Radiographic imaging was obtained at the discretion of treating physicians. Medical records were subsequently reviewed to determine imaging results. All patients were called in follow-up at 30 days to determine outcome in those not imaged. The study was institutional review board approved. Results: A total of 799 patients were enrolled; 79.5% of patients underwent imaging. Twenty-seven had ICIs (3.4%). Fourteen had subdural hematoma, 7 had subarachnoid hemorrhage, 3 had cerebral contusion, and 3 had a combination of injuries. Logistic regression demonstrated 2 study variables that were associated with ICIs: LOC (odds ratio, 2.8; confidence interval, 1.2-6.3) and signs of head trauma (odds ratio, 13.2; confidence interval, 2.7-64.1). History of head strike, mechanism and position, headache, and anticoagulant and antiplatelet use were not associated with ICIs. Conclusion: Elderly fall patients who are at their baseline mental status have a low incidence of ICIs. The best predictors of ICIs are physical findings of trauma to the head and history of LOC.
Coincidentally, patient admissions that might be indicated for in-hospital observation of neurologic function cause increased health care costs. In the literature, there is no evidence concerning the incidence of secondary intracranial hemorrhagic events (SIHE) in patients with LDA prophylaxis that had negative primary computed tomography (CT)-scan of the head. Methods: In this prospective study, we enrolled 100 consecutive trauma patients older than 65 years presenting in a Level I urban trauma center after a mild head injury (Glasgow Coma Scale score of 15) who had LDA prophylaxis. Patients included had a negative primary head CT-scan concerning ICH. For analysis of the incidence of SIHEs, patients had routine repeat head CT (RRHCT) after 12 hours to 24 hours. Results: Sixty-one patients were women and 39 men. Mean age was 81 ± 10 years. Injury mechanism was a level fall in 84 cases and others in 16. In four patients (4%) an SIHE was detected in the RRHCT (p < 0.00001). In two patients (2%) major secondary ICH had occurred without neurologic deterioration at the time of RRHCT with fatal outcome in one patient and neurosurgical intervention in another. The remaining two patients (2%) had minor SIHE with an uneventful clinical course. Conclusion: The incidence of SIHE has been neglected until now. The current study revealed that patients with LDA prophylaxis after mild head injury with negative primary head CT should be subjected to RRHCT within 12 hours to 24 hours to accurately identify SIHE. Alternatively to RRHCT, patients should be subjected to a prolonged in-hospital observation for at least 48 hours. abstract_id: PUBMED:29787544 A clinical prediction model for raised intracranial pressure in patients with traumatic brain injuries. Background: Intracranial hypertension is believed to contribute to secondary brain insult in traumatically brain injured patients. Currently, the diagnosis of intracranial hypertension requires intracranial monitoring or advanced imaging. Unfortunately, prehospital transport times can be prolonged, delaying time to the initial radiographic assessment. The aim of this study was to identify clinical variables associated with raised intracranial pressure (ICP) prior to the completion of neuroimaging. Methods: We performed a retrospective cohort study of head injured patients over a 3-year period. Patients were labeled as having increased ICP if they had a single reading of ICP greater than 20 mm Hg within 1 hour of ICP monitor insertion or computed tomography findings suggestive of raised ICP. Patient and clinical characteristics were analyzed using stepwise multivariable logistic regression with ICP as the dependent variable. Results: Of 701 head injured patients identified, 580 patients met inclusion criteria. Mean age was 48.65 ± 21 years, 73.3% were male. The mean Injury Severity Score was 22.71 ± 12.38, and the mean Abbreviated Injury Scale for body region head was 3.34 ± 1.06. Overall mortality was 14.7%. Only 46 (7.9%) patients had an ICP monitor inserted; however, a total of 107 (18%) patients met the definition of raised ICP. The mortality rate for patients with raised ICP was 50.4%. Independent predictors of raised ICP were as follows: age, older than 55 years (odds ratio [OR], 2.26; 95% confidence interval [CI], 1.35-3.76), pupillary fixation (OR, 5.76; 95% CI, 3.16-10.50), signs of significant head trauma (OR, 2.431; 95% CI, 1.39-4.26), and need for intubation (OR, 3.589; 95% CI, 2.10-6.14).
Conclusion: This study identified four independent variables associated with raised ICP and incorporated these findings into a preliminary risk assessment scale that can be implemented at the bedside to identify patients at significant risk of raised ICP. Future work is needed to prospectively validate these findings prior to clinical implementation. Level Of Evidence: Prognostic, Epidemiological, level III. abstract_id: PUBMED:2772807 Traumatic intracerebral hematoma--which patients should undergo surgical evacuation? CT scan features and ICP monitoring as a basis for decision making. When a patient presents to the neurosurgeon with a traumatic intracerebral hematoma and has not deteriorated or developed new neurological deficit since the injury, the decision to remove the hematoma may be difficult. Of 244 patients with traumatic intracerebral hematomas, 85 were selected for intracranial pressure monitoring to assist in deciding whether surgical evacuation was indicated. None had deteriorated in conscious level or developed new neurological deficit since injury. Fifty-five patients (65%) demonstrated high intracranial pressure and underwent craniotomy. In 30 patients, intracranial pressure remained under 30 mm Hg, and their hematomas were not initially removed. Five of these 30 patients suddenly deteriorated or died 6 to 11 days after injury, with features of high intracranial pressure clinically or at postmortem. Intracranial pressure monitoring therefore failed to predict a late rise in intracranial pressure in 16.6% of those with low intracranial pressure initially. An analysis of computed tomography scanning and clinical features was therefore carried out to search for better predictors of the need for surgery. Our data suggest that basal cistern status, coma score, and the severity of edema surrounding the intracerebral hematoma should be used, in addition to intracranial pressure monitoring, to improve management of patients with traumatic intracerebral hematoma. abstract_id: PUBMED:32553600 Risk of Intracranial Hemorrhage and Short-Term Outcome in Patients with Minor Head Injury. Objective: To examine the occurrence of traumatic intracranial hemorrhage (tICH) and outcome in patients with minor head injury and assess the probable risk factors. Methods: Patients with minor head injury who visited our hospital from January 2015 to July 2017 were registered consecutively, and enrolled patients were aged ≥18 years, visited within 24 hours of the injury, and had a Glasgow Coma Scale score of 15 at outpatient clinic or before the injury. Results: Of the 1122 enrolled patients, 55 (4.9%) had tICH. An antiplatelet agent was administered in 114 patients, an anticoagulant agent was administered in 49 patients, and neither was administered in 948 patients. A multivariate analysis identified the following as significant risk factors for tICH: antiplatelet medication (P = 0.0312), fall from stairs (P = 0.0057), traffic accident (P = 0.0117), neurologic symptoms (P = 0.0091), and modified Rankin Scale (mRS) score before trauma (P < 0.0001). We also analyzed association of enlargement of tICH with different parameters and only anticoagulant medication indicated an increased risk (P = 0.0005). Thirty patients (2.6%) were dependent or died at discharge (mRS 3-6).
The mRS score before trauma (P < 0.0001), tICH (P < 0.0001), spinal injury (P < 0.0001), and enlargement of intracranial hemorrhage (P = 0.0008) indicated an increased probability of morbidity (mRS 3-6) in multivariate analysis. Conclusions: Antiplatelet and anticoagulant medications were risk factors for tICH and enlargement of tICH in patients with minor head injury, respectively. A pretrauma condition of disability/dependence is an important risk factor for tICH and outcome. Answer: Yes, clinical features can help identify patients with high or low risk for intracranial injury in children who have sustained head injuries. A prospective study of 1,128 children with head injury found that certain clinical features such as loss of consciousness, amnesia, a Glasgow Coma Scale (GCS) score less than 15, and focal neurological deficits were significantly more common in the group with intracranial injury (PUBMED:9577018). These findings suggest that children presenting with these symptoms after a minor head trauma are at higher risk for intracranial injury and may benefit from a head CT scan to identify any traumatic intracranial abnormalities. Conversely, children who are neurologically normal and without symptoms may be discharged from the emergency department and sent home after a careful physical examination alone, as the negative predictive values for these features were high, indicating a low risk of intracranial injury when these features are absent (PUBMED:9577018).
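The answer above rests on the high negative predictive values reported for clinical features such as loss of consciousness and a GCS score below 15. For readers less familiar with these screening metrics, the short Python sketch below shows how sensitivity, specificity, PPV and NPV are derived from a 2x2 table; the counts are hypothetical placeholders chosen only to mirror the roughly 1% prevalence of intracranial injury in the cited study, not the study's actual data.

    # Illustrative only: derives screening metrics from a 2x2 table.
    # The counts are hypothetical placeholders, not data from PUBMED:9577018.
    def screening_metrics(tp, fp, fn, tn):
        sensitivity = tp / (tp + fn)   # proportion of injured children flagged by the feature
        specificity = tn / (tn + fp)   # proportion of uninjured children not flagged
        ppv = tp / (tp + fp)           # probability of injury when the feature is present
        npv = tn / (tn + fn)           # probability of no injury when the feature is absent
        return sensitivity, specificity, ppv, npv

    # Example: with a rare outcome (~1% prevalence, as in the study), NPV is driven
    # toward 1 even when sensitivity is imperfect.
    sens, spec, ppv, npv = screening_metrics(tp=8, fp=200, fn=3, tn=917)
    print(f"sensitivity={sens:.2f} specificity={spec:.2f} PPV={ppv:.2f} NPV={npv:.3f}")

This is why an absent feature supports discharge after examination alone, while a present feature still carries a modest positive predictive value and usually prompts CT.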
Instruction: Does socio-demographic status influence the effect of pollens and molds on hospitalization for asthma? Abstracts: abstract_id: PUBMED:15723767 Does socio-demographic status influence the effect of pollens and molds on hospitalization for asthma? Results from a time-series study in 10 Canadian cities. Purpose: Social status influences asthma morbidity but the mechanisms are not well understood. To determine if sociodemographics influence the susceptibility to ambient aeroallergens, we determined the association between daily hospitalizations for asthma and daily concentrations of ambient pollens and molds in 10 large Canadian cities. Methods: Daily time-series analyses were performed and results were adjusted for day of the week, temperature, barometric pressure, relative humidity, ozone, carbon monoxide, sulfur dioxide, and nitrogen dioxide. Results were then stratified by age, gender, and neighborhood family education and income. Results: There appeared to be age and gender interactions in the relation between aeroallergens and asthma. An increase in basidiomycetes equivalent to its mean value, about 300/m3, increased asthma admissions for younger males (under 13 years of age) by 9.3% (95% CI, 4.8%, 13.8%) vs. 4.2% (95% CI, -0.1%, 8.5%) for older males. The reverse was true among females with increased effect in the older age group: 2.3% (95% CI, 1.2%, 5.8%) in those under 13 years vs. 7.1% (95% CI, 4.1%, 10.1%) for older females. Associations were seen between aeroallergens and asthma hospitalization in the lowest but not the highest education group. Conclusions: Our results suggest that younger males and those within less educated families may be more vulnerable to aeroallergens as reflected by hospitalization for asthma. abstract_id: PUBMED:24278078 Analysis of the impact of selected socio-demographic factors on quality of life of asthma patients. Aim: To evaluate the influence of selected socio-demographic factors on quality of life of patients with different degrees of asthma severity. Material And Methods: The study was conducted in 2009-2010 in the Clinic of Allergology, Clinical Immunology and Internal Diseases in Dr J. Biziel University Hospital No. 2 in Bydgoszcz. Patients were divided into a tested group (126) and a control group (86). The criterion for the division was the degree of asthma control according to GINA 2006. The following tools were used: the author's questionnaire containing questions about socio-demographic and clinical data, and the WHOQOL-100. Results: In the tested group, a statistically significant correlation was observed between quality of life and age (p < 0.002 for the entire population), education (p < 0.05 in the group with controlled asthma, p = 0.0005 for the entire population), professional activity (p < 0.003 in the group with partially controlled asthma, p < 0.05 with uncontrolled asthma and p < 0.0001 in the entire population), marital status (p = 0.025 for the entire population) and financial situation (p < 0.0001; p < 0.0002; p < 0.009 in all groups; p < 0.0001 in the entire population). There was no significant difference between quality of life and sex and the place of residence of the respondents. Conclusions: Age, education, professional activity, marital status and financial situation affect the assessment of quality of life in patients with asthma. Socio-demographic factors such as sex and the place of residence do not influence the assessment of quality of life in patients with asthma.
abstract_id: PUBMED:37265830 Socio-demographic characteristics of children and young adults with varied asthma control - does it make a difference? Background: The socioeconomic status and caregiver perception of asthma as a disease, the availability of specialty care and medication adherence have a major influence on the outcome of asthma control in children with asthma. The control of asthma therefore depends on optimizing the interplay of these factors taking into consideration the regional and racial variations. Objective: The objective of this study was to evaluate the association between socio-demographic factors and asthma control outcome in children with asthma. Materials And Methods: This was a cross-sectional study involving 66 consecutively enrolled participants with asthma whose economic burden for asthma was assessed in a previous study. Based on the number of registered children attending the clinic, a minimum sample size of 66 calculated for this study was obtained. The participants were consenting children and young adults between the ages of 1 and 20 years. Using standard methods, data on socio-economic status, personal and family demographics, including household number, mothers' educational attainments and employment status; and asthma control were collected and analyzed. Asthma control was assessed using the Asthma control test (ACT) and, guided by the original developers' scoring, participants were grouped into well controlled, partly controlled and uncontrolled. The Chi-square test was used to test for association between participants' socio-demographic characteristics (age, socioeconomic status, mothers' education and employment, and number of children in the household) and asthma control status at the 5% level of significance. Results: Of all study participants, 34 (51.55%) were male, with mean age (SD) of 11.6 (4.8) years. The mean (SD) age at initial asthma diagnosis was 6.2 (4.6) years. The majority 49 (76.5%) of the mothers had tertiary education. Study participants belonging to the poorest; very poor; poor; and least poor socio-economic cadres were, 16 (24.2%); 17 (25.8%); 17 (25.8%); and 16 (24.2%) respectively. Asthma control classification showed that 26 (39.4%); 31 (47%) and 9 (13.6%) participants had well controlled asthma, partially controlled asthma and uncontrolled asthma respectively. Factors such as age, socioeconomic status, mothers' educational level, employment status and number of children in the household did not show any statistically significant association with the asthma control status of participants. Conclusions: Asthma control outcome remains multifactorial as participants' socio-demographic characteristics did not impact on the level of control of asthma, among participants in the south eastern parts of Nigeria, despite being in a LMIC. A larger study is recommended to further explore this. abstract_id: PUBMED:35719028 Asthma exacerbations in Reunion Island: Environmental factors. Introduction: Reunion Island is a French overseas department characterized by a tropical climate with 2 distinct seasons. While the prevalence of asthma among adults in Reunion Island is close to that in mainland France, mortality and hospitalization rates are twice as high. To date, however, no epidemiological studies have evaluated the influence of environmental factors in asthma exacerbations in Reunion Island.
Methods: From January 2010 to June 2013, 1157 residents of Saint-Denis visited the emergency rooms of the Centre hospitalier universitaire site Nord de Saint-Denis for asthma. After exclusion of children under the age of 3, 864 visits were analyzed. These were correlated with the following daily factors: pollens and molds, meteorological parameters (temperature, precipitation levels, humidity and relative humidity levels, wind), pollutants (sulfur dioxide (SO2), nitrogen oxide (NOx), and the fine particles PM10 and PM2.5), and the influenza virus. The correlation between these factors was evaluated using the DLNM and GO-GARCH models. Results: Of the 864 analyzed visits, 532 were by pediatric patients (aged 3 to 16 years) and 332 by adult patients (aged over 16 years). In adults, pollens positively correlated with asthma exacerbations were Urticaceae, Oleaceae, Moraceae, and Chenopodiaceae. In children, these were Urticaceae, Oleaceae, Poaceae, and Myrtaceae. Molds positively correlated with asthma exacerbations in adults were ascospores and basidiospores. Only basidiospores were positively correlated with exacerbations in children. Temperature was positively correlated with exacerbations in both adults and children. The pollutants PM10 and NOx were positively correlated with exacerbations in children. Influenza epidemics were strongly correlated with exacerbations in both adults and children. Conclusion: Our analysis shows that in Reunion Island, asthma is exacerbated by pollens (Urticaceae, Oleaceae, Moraceae, Chenopodiaceae in adults; Urticaceae, Oleaceae, Poaceae, Myrtaceae in children), molds (ascospores and basidiospores in adults; basidiospores in children), temperature, influenza, and the pollutants PM10 and NOx (in children). abstract_id: PUBMED:31168304 Association of molds and meteorological parameters to frequency of severe asthma exacerbation. Background: Sensitization to airborne molds may be a risk factor for severe asthma and direct cause of asthma exacerbation (AE). Methods: A prospective, 1-year (April 2016-March 2017) study, done in Kuwait Allergy Centre, investigated the link between AEs and exposure to outdoor molds and the role of meteorological parameters in mold sensitized patients compared with non-allergic asthma patients who had asthma deterioration. A total of 676 adult asthmatics with moderate-severe AEs were included and divided into atopic (85.65%) and non-atopic groups. Atopy was defined by positive skin prick test (SPT) to at least one inhalant allergen. Data regarding atopy and asthma severity were collected from patients' records. Patients with symptoms and signs of acute respiratory infection and patients sensitized to indoor allergens only were excluded. Daily counts of local pollens (Salsola kali, Bermuda grass) and molds (Aspergillus, Alternaria and Cladosporium) were obtained from the Aerobiology department. Daily meteorological parameters (atmospheric pressure-AP, temperature-T and relative humidity-RH) were provided by Kuwait Environment Public Authority. Counts of spores/m3 and weather variables are shown on a weekly basis. The year was divided into 4 seasons (1, 2, 3, 4) according to the typical desert climate. Results: Sensitization to molds was relatively high but significantly less (25.0%) compared to pollen sensitization. The highest number of AEs was in season 4 for both mold- and pollen-sensitized patients. Seasonal patterns for both allergens were significant and positively correlated with RH and AP.
In season 1 only, mold sensitized patients showed a higher rate of AEs. Non-atopic patients were less sensitive to increased RH than atopic patients. Negative correlation with T was similar in both atopic and non-atopic patients. Conclusion: Despite the high rate of sensitization to molds, a significant role for molds in triggering AEs was not found in the desert environment. The typical desert climate and high allergenicity of local weeds outweigh the influence of the molds. abstract_id: PUBMED:24602681 Health impact of exposure to pollens: A review of epidemiological studies. The aim of this review is to describe the health impact of exposure to pollen based on recently published epidemiological studies. The methodology chapter describes a review of the literature and outlines important elements of these studies: measurement of exposure to pollens, study types used, study populations and the health indicators related to pollen exposure. In this review, two types of studies have been used to assess the epidemiological evidence of short-term links between pollen exposure and hay fever or asthma. Ecological time-series studies use daily indicators of asthma exacerbations (emergency room admissions or hospitalizations), consultations for rhinitis or conjunctivitis, or anti-allergic drug consumption within the general population. Panel studies relate measurements of pollen grain concentrations to nasal, ocular and bronchial symptom severity in a group of subjects sensitized to a specific pollen, monitored during the pollen season. In both cases, the studies show a relationship on a day-to-day basis between health indicators and daily rates of atmospheric pollen collected by a pollen trap. These studies take into account confounding factors, such as air pollution, weather factors and sometimes exposure to outdoor molds. Unlike earlier studies, more and more studies focus on the shape of the dose-response relationship and the lag between pollen exposure and symptoms. Individual susceptibility factors, the clinical phenomenon of priming and polysensitization are only rarely reported. Thus, ecological time-series studies and panel studies assess respectively the impact of pollen exposure in the general population and in groups of sensitized patients. Using appropriate statistical tools, these studies provide insight into the shape of the dose-response relationship, with a potential threshold below which symptoms are absent, then a linear relationship for nasal, ocular and bronchial symptoms and a plateau where the symptoms do not increase despite the continued increase in pollen.
Results: There were a total of 7804 asthma admissions in 2015. The high-risk period was from April to June. The age groups of 0-4 and ≥65 years were both at the highest risk, with hospital admission rates of 45.0/10^5 and 46.5/10^5, respectively. High-risk areas were found in central and western Guangxi with relative risk (RR) values of asthma hospitalizations greater than 2.0. GDP per capita and altitude were positively associated with asthma hospitalizations, while air pressure and wind speed had a negative association. The explanatory powers of these factors (i.e., GDP per capita, altitude, air pressure, wind speed) were 22%, 20%, 14% and 10%, respectively. Conclusions: The GDP per capita appears to have the strongest correlation with asthma hospitalization rates. High-risk areas were identified in central and western Guangxi characterized by high GDP per capita. These findings may be helpful for authorities developing targeted asthma prevention policies for high-risk areas and vulnerable populations, especially during high-risk periods. abstract_id: PUBMED:27238165 The clinical differences of asthma in patients with molds allergy. Introduction: Bronchial asthma is an increasing problem worldwide. The course of bronchial asthma is dependent on the type of inducing allergens. The differences between the clinical features of asthma in patients with monovalent allergies to molds and with other allergies were explored. Material And Methods: A randomly selected group of 1910 patients (924 women and 986 men) aged 18-86 years was analyzed according to type of allergy and asthma. The diagnosis of asthma was confirmed on the basis of GINA criteria, physical examination and spirometry. Allergy diagnosis was confirmed on the basis of medical history, a positive skin prick test and the measurement of serum-specific IgE to inhalant allergens, using an extended profile of mold allergens. Results: Patients with monovalent allergies to molds (4% of the analyzed group) had significantly more frequent diagnoses of asthma than patients in the other group (53% vs. 27.1-32.4%, p < 0.05). Patients with allergies to Alternaria alternata had an odds ratio of 2.11 (95% CI: 1.86-2.32) for receiving a diagnosis of bronchial asthma. They had less control over their asthma, which was more severe compared to patients with other allergies. Patients with asthma and allergies to mold had significantly more frequent exacerbation of asthma requiring systemic corticosteroids and/or hospitalization. They used a significantly greater mean daily dose of inhaled steroids compared to other patients. Conclusion: Patients with monovalent IgE allergies to molds are at a higher risk for asthma than patients with other allergies. Their asthma is often more intense and less controlled compared to that of patients with other types of allergies. abstract_id: PUBMED:31901618 Socio economic and health status of street sweepers of Mekelle city, Ethiopia. A questionnaire survey to determine the socio-economic and health status of street sweepers of Mekelle city, Ethiopia was held during April and May 2019. The random sample chosen was 137 out of a total of 297 street sweepers of the city. The major lot (86.13%) worked both morning and evening shifts, whereas the rest (13.87%) worked only the morning shift, on all days of the week. They served for a poor monthly salary of 500-1500 Birr. As a result of continuously working with municipal solid waste, they developed several health problems.
The most widespread health issues were cuts and lacerations (84.67%), respiratory and eye problems (58.39%) and musculoskeletal problems (53.28%). In addition, sweepers reported cases such as fracture by fall, joint pain, cough, asthma, low back pain and dysentery during the previous year of their service. The study result also showed significant differences in health issues of different sub-cities. Sweepers of Qedamay Weyane subcity were worst affected. These sweepers received no medical aid from their employers. In addition, they faced several challenges at the workplace, particularly bad early-morning weather and insults and sexual harassment by miscreants on the road. It was reckoned that 97.81% of sweepers had no job satisfaction. It is concluded that the socio-economic and health status of street sweepers of Mekelle city was deplorable. In this study suggestions for improving their socio-economic status and controlling or minimizing their occupational health hazards are given. abstract_id: PUBMED:11243946 Asthma and the risk of hospitalization in Canada: the role of socioeconomic and demographic factors. Objective: Asthma is an important determinant of hospitalization. The study aims to examine the modifying effects of demographic and socioeconomic factors on the relationship between asthma and the overall number of hospitalizations. Methods: We examined the data on 17,601 Canadians who were ≥12 years of age to explore the combined effects of asthma and other factors on hospitalization within the context of a publicly funded health-care system. Asthma was determined by an affirmative response to the question: "Do you have asthma diagnosed by a health professional?" The subjects also were asked whether they had been an overnight patient in a hospital during the past 12 months. Results: Asthma as a risk factor explained 3.7% of all hospitalizations of men and 2.4% of all hospitalizations of women. Overall, hospitalization was positively associated with female gender, old age, and low household income. The odds ratio for asthma as a risk factor for overall hospitalization (ie, hospitalization for any reason and all causes, not only for asthma) was greater for younger men than for older men, for less-educated women than for well-educated women, and for men with middle or high incomes than for men with low incomes. Conclusions: These results suggest that demographic and socioeconomic factors play a role in the relationship between asthma and the overall number of hospitalizations, with certain population subgroups being at greater risk of hospitalization in relation to asthma. Answer: Yes, socio-demographic status does influence the effect of pollens and molds on hospitalization for asthma. A study conducted in 10 Canadian cities found that there were age and gender interactions in the relation between aeroallergens and asthma. Younger males and those within less educated families appeared to be more vulnerable to aeroallergens as reflected by hospitalization for asthma (PUBMED:15723767). Another study indicated that socio-demographic factors such as age, education, professional activity, marital status, and financial situation affect the assessment of quality of life in patients with asthma (PUBMED:24278078). However, a study in the southeastern parts of Nigeria found that participants' socio-demographic characteristics did not impact the level of control of asthma (PUBMED:37265830).
In Reunion Island, environmental factors such as pollens, molds, temperature, and pollutants were found to exacerbate asthma, but the study did not specifically address socio-demographic status (PUBMED:35719028). A study in Kuwait found that despite a high rate of sensitization to molds, their significant role in triggering asthma exacerbation was not found in a desert environment, suggesting that local climate and allergenicity of local weeds might outweigh the influence of molds (PUBMED:31168304). In Guangxi, China, socio-ecological factors such as GDP per capita and altitude were associated with asthma hospitalization rates, indicating that socio-economic status can influence asthma outcomes (PUBMED:32050128). Lastly, patients with monovalent IgE allergies to molds were found to be at a higher risk for asthma than patients with other allergies, and their asthma was often more intense and less controlled (PUBMED:27238165).
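The Canadian time-series results quoted above are reported as a percent increase in daily admissions per mean increment of exposure (about 300/m3 of basidiomycetes). Assuming the usual log-linear (Poisson-type) form of such daily time-series models, a regression coefficient per unit of exposure converts to that percent change as shown in the short Python sketch below; the coefficient value used is a hypothetical placeholder chosen only so the output matches the 9.3% figure, not an estimate taken from PUBMED:15723767.

    import math

    # Illustrative only: converts a log-linear regression coefficient (per spore/m3)
    # into the percent change in daily admissions for a given exposure increment.
    # The coefficient below is a hypothetical placeholder, not a published estimate.
    def percent_change(beta_per_unit, increment):
        return (math.exp(beta_per_unit * increment) - 1.0) * 100.0

    # Example: beta = 0.000296 per spore/m3 over a 300/m3 increment gives ~9.3%.
    print(round(percent_change(0.000296, 300), 1))

Stratifying such a model by age, sex, or neighbourhood education, as the study did, simply means estimating this coefficient separately within each subgroup and comparing the resulting percent changes.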
Instruction: Are we turning to more than a first line treatment of metastatic colorectal cancer with high dose irinotecan? Abstracts: abstract_id: PUBMED:12439596 High-dose, single-agent irinotecan as first-line therapy in the treatment of metastatic colorectal cancer. Purpose: The efficacy and safety of single-agent, high-dose irinotecan (CPT-11, Campto) 500 mg/m(2) every 3 weeks were investigated as first-line treatment for advanced colorectal cancer (CRC). Patients And Methods: Patients were enrolled into the study to receive a first cycle of therapy with irinotecan at a dose of 350 mg/m(2) every 3 weeks, which could be escalated to 500 mg/m(2) for the second and subsequent cycles depending on toxicity. Efficacy, safety and pharmacokinetics were determined in the intent to treat (ITT) population and the high-dose population (i.e. patients who had received at least three cycles of irinotecan, the second and third at 500 mg/m(2)). Results: Of 49 patients enrolled into the study (ITT population), 31 (63%) received at least three cycles of treatment with cycles 2 and 3 at an irinotecan dose of 500 mg/m(2) (the high-dose population). The response rates (RR) for the ITT and high-dose populations were 24.5% and 35.5%, respectively. The main grade 3/4 toxicities per cycle in the ITT and high-dose populations were neutropenia 22% and 17%, febrile neutropenia 5% and 3%, and diarrhoea 12% and 7%, respectively. The pharmacokinetics of irinotecan and its metabolite SN-38 were investigated in 31 patients in cycle 1 and 22 patients in cycle 2. Irinotecan clearance and SN-38 exposure were not sufficiently correlated with toxicity in cycle 1 to identify patients for dose increase in subsequent cycles. The exposure to irinotecan and SN-38 increased in proportion to dose from 350 to 500 mg/m(2). Conclusion: These results suggest that high-dose irinotecan can be safely administered as first-line monotherapy to approximately two-thirds of patients who present with advanced CRC following a selective first cycle. abstract_id: PUBMED:21654688 A genotype-directed phase I-IV dose-finding study of irinotecan in combination with fluorouracil/leucovorin as first-line treatment in advanced colorectal cancer. Background: Infusional fluorouracil/leucovorin (FU/LV) plus irinotecan (FOLFIRI) is one of the standard first-line options for patients with metastatic colorectal cancer (mCRC). Irinotecan is converted into 7-ethyl-10-hydroxycamptothecin (SN-38) by a carboxylesterase and metabolised through uridine diphosphate glucuronosyl transferase (UGT1A1). The UGT1A1*28 allele has been associated with the risk of developing severe toxicities. The present trial was designed to define the maximum tolerated dose according to UGT1A1 genotype. This report focuses on the results of tolerance to different escalated doses of FOLFIRI as first-line chemotherapy. Patients And Methods: Patients undergoing first-line treatment for mCRC and eligible for treatment with FOLFIRI were classified according to UGT1A1 genotype. A total of 94 patients were eligible for dose escalation of irinotecan. The starting dose of biweekly irinotecan was 180 mg m(-2) for the *1/*1, 110 mg m(-2) for the *1/*28 and 90 mg m(-2) for the *28/*28 genotypes. Results: The dose of irinotecan was escalated to 450 mg m(-2) in patients with the *1/*1 genotype, to 390 mg m(-2) in those with the *1/*28 genotype and to 150 mg m(-2) in those with the *28/*28 genotype. Neutropenia and diarrhoea were the most common grade 3 or 4 toxicities.
Conclusions: Our results demonstrated that the recommended dose of 180 mg m(-2) for irinotecan in FOLFIRI is considerably lower than the dose that can be tolerated for patients with the UGT1A1 *1/*1 and *1/*28 genotypes. The maximum tolerable dose (MTD) in patients with a high-risk UGT1A1 *28/*28 genotype is 30% lower than the standard dose of 180 mg m(-2). abstract_id: PUBMED:21109376 Are we turning to more than a first line treatment of metastatic colorectal cancer with high dose irinotecan?: A monocentric institution safety analysis of 46 patients. Purpose: Irinotecan (CPT11) at 180 mg/m(2) with LV5FU2 for metastatic colorectal cancer (MCRC) has response rates (RRs) of 56 and 4% as first- and second-line treatments, respectively [1-2], and higher doses of CPT11 result in higher RRs. The present cohort analysis aimed to evaluate the effect of increasing doses of this combination treatment in clinical practice. Methods: Chemo-naive and pretreated patients with MCRC received CPT11 and LV5FU2 (5FU 48-h CI 2400 mg/m(2), D1 bolus leucovorin 200 mg/m(2)), followed by 5FU 400 mg/m(2) (cycles d1-d15). CPT11 dose was increased by 20 mg/m(2) at each cycle, from 180 mg/m(2) up to 260 mg/m(2), unless grade 3 toxicities other than alopecia arose. Results: Between March 2002 and September 2005, 46 patients were recruited (median age: 62.3 years). A total of 512 cycles of chemotherapy were administered (median: 9 cycles/patient; range: 3-41). Median follow-up was 16.2 months. Altogether, 27 patients had received prior chemotherapy: 24 with an oxaliplatin-based regimen; seven with CPT11; and five with LV5FU2 or oral 5FU. Doses of 260 mg/m(2) were used in 17 patients, 240 mg/m(2) in seven, 220 mg/m(2) in six and 200 mg/m(2) in five, while 11 remained at 180 mg/m(2); 121 cycles used 260 mg/m(2) (24%), with 76 cycles at 240 mg/m(2) (14%), 78 cycles at 220 mg/m(2) and 58 cycles at 200mg/m(2). The objective response (OR) was 40%, with stable disease (SD) in 45% and disease progression (DP) in 11%. In the first-line therapy group, partial/complete responses were 55%, with SD in 30% and DP in 15%. In pretreated patients, OR was 30.5%, SD was 58.5% and DP was 11%. Nine patients (20%) had a therapeutic break (median: 5.1 months; range: 3-10). Overall median survival was 17 months, with 16.5 months in pretreated patients and 19.6 months in the first-line group. Toxicity grades 3-4 and overall incidence per cycle were: neutropenia, 3-22%; diarrhea, 4-22%; vomiting, 2-20%; alopecia, 20-26%; anemia, 0.2-2%; thrombocytopenia, 0-0%; and mucositis, 0.4-2.2%. Conclusion: The toxicity of high-dose CPT11+LV5FU2 chemotherapy was well tolerated when the dose was progressively increased according to individual tolerability, with 37% of patients receiving CPT11 at 260 mg/m(2). Progression-free survival (PFS) increased with higher doses of CPT11. In the chemo-naive and pretreated subgroups, the median PFS was 10.9 and 8.8 months, respectively (P=0.698, NS). Optimization of CPT11 doses in pretreated patients appears to pave the way for new treatment options. abstract_id: PUBMED:22956187 The use of high dose d,l-leucovorin in first-line bevacizumab+mFOLFIRI treatment of patients with metastatic colorectal cancer may enhance the antiangiogenic effect of bevacizumab. The role of d,l-leucovorin (d,l-LV) dose on efficacy and toxicity of first-line bevacizumab+mFOLFIRI or mFOLFIRI treatment has never been investigated in patients with metastatic colorectal cancer. 
This study was an investigator-initiated retrospective observational investigation performed on 450 consecutive patients. The mFOLFIRI regimen consisted of irinotecan (180 mg/m(2)), d,l-LV low (200 mg/m(2)) or high (400 mg/m(2)) dose and bolus 5-fluorouracil (5-FU) (400 mg/m(2)), followed by a 46-h infusion of 5-FU (2400 mg/m(2)). The bevacizumab+mFOLFIRI regimen consisted of bevacizumab (5 mg/kg)+mFOLFIRI. The efficacy (objective response [OR], progression-free [PFS] and overall survival [OS]) and toxicity were evaluated and compared. The use of high versus low dose d,l-LV in the bevacizumab+mFOLFIRI regimen improved the OR rate (63% and 38%, respectively; P = 0.00015), median PFS (13 and 9 months, respectively; P = 0.000005) and median OS (26 and 21 months, respectively; P = 0.0058). The efficacy of mFOLFIRI and the toxicity pattern of both bevacizumab+mFOLFIRI and mFOLFIRI regimens were independent of d,l-LV dose. Besides the d,l-LV dose, bevacizumab-related hypertension was an independent marker of longer survival. The use of the high d,l-LV dose in the bevacizumab+mFOLFIRI regimen would enhance the antiangiogenic effect of bevacizumab and subsequently the efficacy of treatment without increasing the number of adverse events. These findings need to be further confirmed in a randomized controlled prospective trial. abstract_id: PUBMED:15033659 Phase I/II trial of irinotecan plus high-dose 5-fluorouracil (TTD regimen) as first-line chemotherapy in advanced colorectal cancer. Background: We conducted a phase I/II study of weekly irinotecan [30 min intravenous (i.v.) infusion] combined with 5-fluorouracil (5-FU 3 g/m(2) weekly 48 h i.v. infusion, TTD regimen) as first-line chemotherapy for patients with advanced colorectal cancer (CRC). Patients And Methods: The maximum tolerated dose (MTD) and the dose-limiting toxicity (DLT) in the treatment of gastrointestinal solid tumors (in phase I), and the antitumor activity and toxicity of the recommended phase I dose (in phase II) were determined. Results: Diarrhea was the DLT, and irinotecan 80 mg/m(2) plus 5-FU 3 g/m(2) was the recommended phase I dose. In phase II, the confirmed response rate was 44% [95% confidence interval (CI) 29% to 59%] and the median overall survival was 23.8 months. However, grade 3/4 diarrhea affected 59% of patients and led to withdrawal of three patients. A second cohort of patients studied using the same schedule but with a reduced 5-FU starting dose of 2.25 g/m(2) showed improved tolerance (the incidence of grade 4 diarrhea decreased from 28% to 11% and overall grade 3/4 diarrhea to 56%, with no patient withdrawals) but the confirmed response rate was 28% (95% CI 14% to 45%) and median overall survival was 17.2 months. Conclusions: We found weekly irinotecan 80 mg/m(2) plus TTD regimen (5-FU 2.25 g/m(2) given as 48-h i.v. infusion) to be a feasible and active combined chemotherapy for the first-line treatment of advanced colorectal cancer.
The present study aimed to assess the safety and efficacy of different doses of irinotecan adjusted according to UGT1A1 polymorphism. Methods: Thirty-four patients treated with FOLFIRI as first-line treatment for mCRC were included in this study. The irinotecan dosage was adapted on the basis of UGT1A1 polymorphisms: *1/*1 (370 mg/m2); *1/*28 (310 mg/m2), and *28/*28 (180 mg/m2). The incidence of grades 3 and 4 toxicities (neutropenia, febrile neutropenia, and diarrhoea) was recorded. Response was assessed according to the RECIST 1.1 criteria. Results: On the basis of UGT1A1 genotyping, 20 patients were *1/*1 (58.8%), 12 were *1/*28 (35.3%) and 2 were *28/*28 (5.9%). Seven patients experienced at least one severe toxicity, i.e., 21% of the population, amounting to eleven adverse events. Concerning the response rate, 15 patients (44%) had partial or complete response. Conclusion: This study demonstrates that mCRC patients treated with FOLFIRI can tolerate a higher dose of irinotecan than the standard dose, i.e., > 180 mg/m2, on the basis of their UGT1A1 genotype, without increased toxicities. Trial Registration: NCT01963182 (registered on 16/10/2013, Clermont-Ferrand, France). abstract_id: PUBMED:24764659 Role of cetuximab in first-line treatment of metastatic colorectal cancer. The treatment of metastatic colorectal cancer (mCRC) has evolved considerably in the last decade, currently allowing most mCRC patients to live more than two years. Monoclonal antibodies targeting the epidermal growth factor receptor (EGFR) and vascular endothelial growth factor play an important role in the current treatment of these patients. However, only antibodies directed against EGFR have a predictive marker of response, which is the mutation status of v-Ki-ras2 Kirsten rat sarcoma viral oncogene homolog (KRAS). Cetuximab has been shown to be effective in patients with KRAS wild-type mCRC. The CRYSTAL study showed that adding cetuximab to FOLFIRI (regimen of irinotecan, infusional fluorouracil and leucovorin) significantly improved results in the first-line treatment of KRAS wild-type mCRC. However, results that evaluate the efficacy of cetuximab in combination with oxaliplatin-based chemotherapy in this setting are contradictory. On the other hand, recent advances in the management of colorectal liver metastases have improved survival in these patients. Adding cetuximab to standard chemotherapy increases the response rate in patients with wild-type KRAS and can thus increase the resectability rate of liver metastases in this group of patients. In this paper we review the different studies assessing the efficacy of cetuximab in the first-line treatment of mCRC.
However, three out of five patients at the 100 mg m(-2) irinotecan dose level had their dose reduced due to multiple grade 2 toxicities, and eventually one patient stopped treatment due to grade 3 diarrhoea and multiple grade 2 toxicities. Subsequent patients were recruited at an irinotecan dose level of 85 mg m(-2). The overall response rate was 55%, comprising one complete and 11 partial responses (PRs). Six patients also achieved sustained stable disease (SD), giving a clinical benefit (complete response/PR/SD) response of 82%. The median duration of response was 238 days (8.5 months) and median time to progression was 224 days (8.0 months). Two patients who achieved PRs underwent partial hepatectomies. Thus, irinotecan (85 mg m(-2)) combined with a continuous infusion of 5-FU (250 mg m(-2)) is an active and well-tolerated regimen for the treatment of metastatic CRC. It represents an effective treatment for patients who require close supervision and support, throughout their initial exposure to chemotherapy for this disease, and this dose combination was recommended for an ongoing phase II study. abstract_id: PUBMED:32829105 Determination of the UGT1A1 polymorphism as guidance for irinotecan dose escalation in metastatic colorectal cancer treated with first-line bevacizumab and FOLFIRI (PURE FIST). Aim: Uridine diphosphate glucuronosyltransferase 1A1 (UGT1A1) polymorphism plays a crucial role in the increased susceptibility of patients to irinotecan and its toxicity. This study is a multicenter, randomised clinical trial comparing the clinical outcomes and adverse events (AEs) in metastatic colorectal cancer (mCRC) patients treated with bevacizumab plus FOLFIRI with or without UGT1A1 genotyping and irinotecan dose escalation as the first-line therapy. Methods: The control group received conventional biweekly FOLFIRI plus bevacizumab without UGT1A1 genotyping, whereas the study group received the same regimen with irinotecan dose escalation based on UGT1A1 genotyping. The primary end-point was progression-free survival (PFS), and secondary end-points were overall response rate (ORR), disease control rate (DCR), overall survival (OS), AEs and metastasectomy rate. Results: Over a median follow-up of 26.0 months (IQR, 17.0-35.0 months), study group (n = 107) was superior to the control group (n = 106) in PFS, OS, ORR, DCR, and metastasectomy rate (all P < 0.05). Furthermore, there were no significant differences in AEs ≥ grade III between the two groups, even with the 1.36-fold increase in the relative dose intensity of irinotecan in the study group. Dose escalation of irinotecan, an independent factor of ORR (P < 0.001) and DCR (P = 0.006), improved PFS in mCRC patients with wild-type and mutant KRAS (P = 0.007 and P = 0.019, respectively). Conclusion: The current study revealed that mCRC patients, regardless of KRAS gene status, with UGT1A1 genotyping can tolerate escalated doses of irinotecan and potentially achieve a more favourable clinical outcome without significantly increased toxicities. Clinical Trial Registration: NCT02256800. abstract_id: PUBMED:27446583 First-line cetuximab-based chemotherapies for patients with advanced or metastatic KRAS wild-type colorectal cancer. Colorectal cancer (CRC) is one of the most commonly occurring cancers worldwide. A burgeoning number of studies have demonstrated that the addition of cetuximab to another standard first-line regimen markedly improves the outcome of CRC treatment.
However, at present, the efficacy and safety of cetuximab-based combination chemotherapy has not been well described in Japan. The aim of the present study was to evaluate the efficacy and safety of first-line chemotherapies that included cetuximab for patients with advanced or metastatic Kirsten rat sarcoma viral oncogene homolog (KRAS) wild-type CRC in Japan. This prospective multicenter observational study was conducted at 13 affiliated medical institutions. A total of 64 patients were enrolled between 2010 and 2013. The patients met the following criteria for eligibility: i) histologically confirmed, advanced or metastatic KRAS wild-type CRC; and ii) cetuximab-based chemotherapies administered as a first-line treatment. First-line cetuximab-based treatments were administered as follows: 29 patients (45.3%) received a combination of infusional fluorouracil, leucovorin and oxaliplatin; 14 patients (21.9%) received a combination of capecitabine and oxaliplatin; and 10 patients (15.6%) received a combination of infusional fluorouracil, leucovorin and irinotecan. The overall response rate (including complete plus partial responses) was 50% (32/64 patients). Initially, 48 lesions were diagnosed as unresectable. Among those, 13 lesions (27.1%) were converted to a resectable status following cetuximab-based combination chemotherapy treatments. The median overall survival time and the progression-free survival time were 1,189 and 359 days, respectively. The most frequent grade 3/4 adverse event was neutropenia, which occurred in 20.3% of the patients. The incidence of grade 3/4 skin toxicity was 17.2% (11/64 patients). Cetuximab-based therapies may represent a promising first-line regimen for patients with advanced or metastatic KRAS wild-type CRC in Japan. In addition, this combination was associated with a low incidence of serious toxicities. Answer: The abstracts provided suggest that high-dose irinotecan has been explored as both a first-line treatment and beyond for metastatic colorectal cancer (mCRC). High-dose irinotecan as a single agent has been investigated as a first-line therapy, with a study showing that it can be safely administered to approximately two-thirds of patients with advanced CRC following a selective first cycle (PUBMED:12439596). Another study evaluated the effect of increasing doses of irinotecan in combination with LV5FU2 in clinical practice, indicating that high-dose irinotecan plus LV5FU2 chemotherapy was well tolerated when the dose was progressively increased according to individual tolerability (PUBMED:21109376). Moreover, genotype-directed studies have been conducted to determine the maximum tolerated dose of irinotecan based on UGT1A1 genotype, suggesting that the standard dose of irinotecan in FOLFIRI could be lower than the dose that can be tolerated by patients with certain UGT1A1 genotypes (PUBMED:21654688). A phase II study also demonstrated that mCRC patients treated with FOLFIRI could tolerate a higher dose of irinotecan than the standard dose based on their UGT1A1 genotype, without increased toxicities (PUBMED:37932443). Additionally, the use of high-dose d,l-leucovorin in combination with bevacizumab and mFOLFIRI was found to potentially enhance the antiangiogenic effect of bevacizumab and improve treatment efficacy without increasing adverse events (PUBMED:22956187). 
Another study with a phase I/II trial of irinotecan plus high-dose 5-fluorouracil (TTD regimen) as first-line chemotherapy showed promising results, although the optimal dose required balancing efficacy and toxicity (PUBMED:15033659). Overall, these studies indicate that there is a trend towards exploring and potentially adopting higher doses of irinotecan, tailored to individual patient tolerability and genetic makeup, in the treatment of mCRC beyond just the first line of therapy. However, it is important to note that the optimization of irinotecan doses and the combination with other agents such as bevacizumab, 5-FU, and cetuximab should be carefully considered to maximize efficacy while minimizing toxicity (PUBMED:32829105, PUBMED:27446583).
Instruction: Does the novel PET/CT imaging modality impact on the treatment of patients with metastatic colorectal cancer of the liver? Abstracts: abstract_id: PUBMED:26099671 Impact of Fluorodeoxyglucose PET/Computed Tomography on the Management of Patients with Colorectal Cancer. Colorectal cancer is one of the most common malignancies in the Western world. Most colorectal cancers show fluorodeoxyglucose (FDG) uptake, although in mucinous adenocarcinoma uptake may be limited. A literature search was conducted regarding the impact of FDG-PET/computed tomography (CT) on management and outcome in patients with colorectal carcinoma. FDG-PET/CT can have a significant clinical impact on patient management in various stages of the disease. In patients with suspected recurrent disease and patients with liver metastases who might be eligible for surgery FDG-PET/CT can have more benefit than conventional imaging. FDG-PET/CT can be a useful modality to monitor treatment response. abstract_id: PUBMED:27625903 Granuloma Mimicking Local Recurrence on PET/CT after Liver Resection of Colorectal Liver Metastasis: A Case Report. Positron emission tomography-computed tomography (PET/CT) improves the diagnostic interpretation of fluorine-18 fluorodeoxyglucose (18F-FDG) PET and CT in oncologic patients and has an impact on both diagnostic and therapeutic aspects of patient management. However, false positive findings from the PET/CT imaging should be taken into consideration as they mislead physicians into improper therapeutic actions. We present a 48-year-old female patient with a history of left colectomy for colorectal cancer and subsequent liver metastasectomy. After one year of follow-up, she presented with a highly suspicious lesion in the liver, which was confirmed on PET/CT as a metastatic liver tumor. Consequently, the patient underwent surgical excision of the tumor, and the definitive histological diagnosis showed a granulomatous tissue with giant cells and foreign body tissue reaction. Based on this report, we briefly review the dangerous pitfalls from radiological and PET/CT imaging concerning the preoperative diagnostic workup examination, as they may significantly alter the treatment plan in oncologic patients. abstract_id: PUBMED:15570208 Does the novel PET/CT imaging modality impact on the treatment of patients with metastatic colorectal cancer of the liver? Objective: To compare the diagnostic value of contrast-enhanced CT (ceCT) and 2-[18-F]-fluoro-2-deoxyglucose-PET/CT in patients with metastatic colorectal cancer to the liver. Background: Despite preoperative evaluation with ceCT, the tumor load in patients with metastatic colorectal cancer to the liver is often underestimated. Positron emission tomography (PET) has been used in combination with the ceCT to improve identification of intra- and extrahepatic tumors in these patients. We compared ceCT and a novel fused PET/CT technique in patients evaluated for liver resection for metastatic colorectal cancer. Methods: Patients evaluated for resection of liver metastases from colorectal cancer were entered into a prospective database. Each patient received a ceCT and a PET/CT, and both examinations were evaluated independently by a radiologist/nuclear medicine physician without the knowledge of the results of other diagnostic techniques. The sensitivity and the specificity of both tests regarding the detection of intrahepatic tumor load, extrahepatic metastases, and local recurrence at the colorectal site were determined.
The main end point of the study was to assess the impact of the PET/CT findings on the therapeutic strategy. Results: Seventy-six patients with a median age of 63 years were included in the study. ceCT and PET/CT provided comparable findings for the detection of intrahepatic metastases with a sensitivity of 95% and 91%, respectively. However, PET/CT was superior in establishing the diagnosis of intrahepatic recurrences in patients with prior hepatectomy (specificity 50% vs. 100%, P = 0.04). Local recurrences at the primary colo-rectal resection site were detected by ceCT and PET/CT with a sensitivity of 53% and 93%, respectively (P = 0.03). Extrahepatic disease was missed in the ceCT in one third of the cases (sensitivity 64%), whereas PET/CT failed to detect extrahepatic lesions in only 11% of the cases (sensitivity 89%) (P = 0.02). New findings in the PET/CT resulted in a change in the therapeutic strategy in 21% of the patients. Conclusion: PET/CT and ceCT provide similar information regarding hepatic metastases of colorectal cancer, whereas PET/CT is superior to ceCT for the detection of recurrent intrahepatic tumors after hepatectomy, extrahepatic metastases, and local recurrence at the site of the initial colorectal surgery. We now routinely perform PET/CT on all patients being evaluated for liver resection for metastatic colorectal cancer. abstract_id: PUBMED:26044293 Respiratory gated PET/CT of the liver: A novel method and its impact on the detection of colorectal liver metastases. Purpose: To evaluate the diagnostic performance of a new method for respiratory gated positron emission tomography (rgPET/CT) for colorectal liver metastases (CRLM), secondly, to assess its additional value to standard PET/CT (PET/CT). Materials And Methods: Forty-three patients scheduled for resection of suspected CRLM were prospectively included from September 2011 to January 2013. None of the patients had previously undergone treatment for their CRLM. All patients underwent PET/CT and rgPET/CT in the same session. For rgPET/CT an in-house developed electronic circuit was used which displayed a color-coded countdown for the patient. The patients held their breath according to the countdown and only the data from the inspiration breath-hold period was used for image reconstruction. Two independent and blinded readers evaluated both PET/CT and rgPET/CT separately. The reference standard was histopathological confirmation for 73 out of 131 CRLM and follow-up otherwise. Results: Reference standard identified 131 CRLM in 39/43 patients. Nine patients accounted for 25 mucinous CRLM. The overall per-lesion sensitivity for detection of CRLM was for PET/CT 60.0%, for rgPET/CT 63.1%, and for standard+rgPET/CT 67.7%, respectively. Standard+rgPET/CT was overall significantly more sensitive for CRLM compared to PET/CT (p=0.002) and rgPET/CT (p=0.031). The overall positive predictive value (PPV) for detection of CRLM was for PET/CT 97.5%, for rgPET/CT 95.3%, and for standard+rgPET/CT 93.6%, respectively. Conclusion: Combination of PET/CT and rgPET/CT improved the sensitivity significantly for CRLM. However, high patient compliance is mandatory to achieve optimal performance and further improvements are needed to overcome these limitations. The diagnostic performance of the evaluated new method for rgPET/CT was comparable to earlier reported technically more complex and expensive methods. 
abstract_id: PUBMED:31368660 Simultaneous positron emission tomography and magnetic resonance imaging for the detection and characterisation of liver lesions in patients with colorectal cancer: A pictorial review. Patients with colorectal cancer undergo frequent diagnostic imaging to stage the extent of metastatic disease and assess response to treatment. Imaging is typically via diagnostic contrast-enhanced CT or combined FDG-PET/CT. However, recent research has demonstrated promising benefits of combined FDG-PET/MRI in oncologic imaging due to the superior soft-tissue contrast of MRI. The extent of both intrahepatic and extrahepatic disease is important in establishing treatment options for colorectal cancer patients, and FDG-PET/CT and dedicated liver imaging are often both required. FDG-PET/MRI offers the advantage of a single examination which can be completed within a similar duration as dedicated liver MRI imaging. This improves patient convenience and anatomical co-registration between PET and MRI imaging and provides a potential cost benefit. The diagnostic benefits of FDG-PET/MRI include the simultaneous characterisation of focal liver lesions, exclusion of extrahepatic disease, the detection of additional hepatic metastases and extrahepatic disease, and the multi-parametric assessment of treatment response. This pictorial review highlights examples of these benefits. abstract_id: PUBMED:24332574 Optimal imaging sequence for staging in colorectal liver metastases: analysis of three hypothetical imaging strategies. Background: Computed tomography (CT), positron emission tomography CT (PET-CT) and magnetic resonance imaging (MRI) all play a role in the management of colorectal liver metastases (CRLM), but inappropriate over investigation can lead to delays in treatment and additional cost. This study aimed to determine the optimal sequence for pre-operative imaging pathway to minimise delays to treatment and healthcare costs. Methods: All patients with colorectal liver metastases referred to a single tertiary liver specialist multidisciplinary team (MDT) between 2008 and 2011 were examined. Primary data of clinical and radiological outcomes of all patients were analysed. These data were used to construct and test 3 hypothetical imaging strategies - 'Upfront', 'Sequential' and 'Hybrid' models. Results: Six hundred and forty four consecutive patients were included. One hundred and sixty five patients were excluded for curative resection following the initial CT review. Subsequently 167/433 patients did not proceed to hepatectomies. Eighty (47.9%) of these patients had extra-hepatic disease identified on PET-CT, and 29 were due to the exclusion by MRI liver. A resectable pattern of liver disease on initial CT did not exclude patients with occult disease on PET-CT. Based on cost analysis, assessment of initial CT, followed by MDT with subsequent PET-CT and MRI imaging thereafter (Hybrid model), was associated with the shortest time-to-decision and lowest cost. Conclusions: Resectable pattern of liver metastases should not solely be used to determine the application of PET-CT for staging. Hybrid model is associated with the lowest cost and shortest time-to-treatment. abstract_id: PUBMED:26622057 Diagnostic performance of CT, MRI and PET/CT in patients with suspected colorectal liver metastases: the superiority of MRI. Background: Meticulous imaging of colorectal liver metastases (CRLM) is mandatory to optimize outcome after liver resection. 
However, the detection of CRLM is still challenging. Purpose: To evaluate prospectively if magnetic resonance imaging (MRI) with diffusion-weighted and Gd-EOB-DTPA-enhanced sequences had a better diagnostic performance for CRLM compared to computed tomography (CT) and fluorine-18 fluorodeoxyglucose positron emission tomography (PET/CT). Material And Methods: Forty-six patients scheduled for resection of suspected CRLM were evaluated prospectively from September 2011 to January 2013. None of the patients had undergone previous treatment for their CRLM. Multiphase CT, liver MRI with diffusion-weighted and dynamic Gd-EOB-DTPA-enhanced sequences and low-dose PET/CT were performed. Two independent, blinded readers evaluated the examinations. The reference standard was histopathological confirmation (81/140 CRLM) or follow-up. Results: A total of 140 CRLM and 196 benign lesions were identified. On a per-lesion basis, MRI had the significantly highest sensitivity overall and for CRLM < 10 mm (P < 0.001). Overall sensitivity/specificity and PPV/NPV were 68%/94% and 89%/81% for CT, 90%/87% and 82%/93% for MRI, and 61%/99% and 97%/78% for PET/CT. For CRLM < 10 mm it was 16%/96% and 54%/80% for CT, 74%/88% and 64%/93% for MRI, and 9%/98% and 57%/79% for PET/CT. Conclusion: MRI had the significantly highest sensitivity compared with CT and PET/CT, particularly for CRLM < 10 mm. Therefore, detection of CRLM should be based on MRI. abstract_id: PUBMED:20829538 Diagnostic imaging of colorectal liver metastases with CT, MR imaging, FDG PET, and/or FDG PET/CT: a meta-analysis of prospective studies including patients who have not previously undergone treatment. Purpose: To obtain diagnostic performance values of computed tomography (CT), magnetic resonance (MR) imaging, fluorine 18 fluorodeoxyglucose (FDG) positron emission tomography (PET), and FDG PET/CT in the detection of colorectal liver metastases in patients who have not previously undergone therapy. Materials And Methods: A comprehensive search was performed for articles published from January 1990 to January 2010 that fulfilled the following criteria: a prospective study design was used; the study population included at least 10 patients; patients had histopathologically proved colorectal cancer; CT, MR imaging, FDG PET, or FDG PET/CT was performed for the detection of liver metastases; intraoperative findings or those from histopathologic examination or follow-up were used as the reference standard; and data for calculating sensitivity and specificity were included. Study design characteristics, patient characteristics, imaging features, reference tests, and 2 × 2 tables were recorded. Results: Thirty-nine articles (3391 patients) were included. Variation existed in study design characteristics, patient descriptions, imaging features, and reference tests. The sensitivity estimates of CT, MR imaging, and FDG PET on a per-lesion basis were 74.4%, 80.3%, and 81.4%, respectively. On a per-patient basis, the sensitivities of CT, MR imaging, and FDG PET were 83.6%, 88.2%, and 94.1%, respectively. The per-patient sensitivity of CT was lower than that of FDG PET (P = .025). Specificity estimates were comparable. For lesions smaller than 10 mm, the sensitivity estimates for MR imaging were higher than those for CT. No differences were seen for lesions measuring at least 10 mm. The sensitivity of MR imaging increased significantly after January 2004.
The use of liver-specific contrast material and multisection CT scanners did not provide improved results. Data about FDG PET/CT were too limited for comparisons with other modalities. Conclusion: MR imaging is the preferred first-line modality for evaluating colorectal liver metastases in patients who have not previously undergone therapy. FDG PET can be used as the second-line modality. The role of FDG PET/CT is not yet clear owing to the small number of studies. Supplemental Material: http://radiology.rsna.org/lookup/suppl/doi:10.1148/radiol.10100729/-/DC1. abstract_id: PUBMED:15517139 PET/CT for colorectal and hepatic tumors This contribution presents clinical and technical aspects of combining positron emission tomography (PET) and computed tomography (CT) for patients with colorectal tumors and characterization of unclear liver foci. In which manner and for which patients combined PET/CT is superior to PET or CT alone is also discussed. PET/CT can fulfil most prerequisites for imaging in pre- and postoperative management of patients with colorectal tumors and best meets the desire for optimal imaging procedures. Some of the disadvantages encountered in frequently employed CT can be overcome by the combination of PET and CT while increasing both sensitivity in detecting lesions and specificity in their characterization. Questions regarding treatment response offer an opportunity for devising novel study concepts and initiating research on new PET tracers. Although few publications are available, we are of the opinion that the combination of functional and anatomical imaging provided by PET/CT can improve both preoperative management and aftercare. To this end, however, optimum cooperation between practitioners of nuclear medicine and radiology is imperative. abstract_id: PUBMED:27593246 PET/MR Imaging in Cancers of the Gastrointestinal Tract. PET/computed tomography (PET/CT) is an established hybrid imaging technique for staging and follow-up of gastrointestinal (GI) tract malignancies, especially for colorectal carcinoma. Dedicated hybrid PET/MR imaging scanners are currently available for clinical use. Although they will not replace regular use of PET/CT, they may have utility in selected cases of GI tract malignancies. The superior soft tissue contrast resolution and depiction of anatomy and the functional information obtained from diffusion-weighted imaging (DWI) provided by MR imaging in PET/MR imaging are advantages over CT of PET/CT for T staging and follow-up of rectal carcinoma and for better characterization of liver lesions. Functional information from DWI and use of liver-specific MR imaging contrast agents are an added advantage in follow-up of liver metastases after systemic and locoregional treatment. New radiotracers will improve the utility of PET/MR imaging in staging and follow-up of tumors, which may not be [18F]-2-fluoro-2-deoxy-d-glucose avid, such as hepatocellular carcinoma and neuroendocrine tumors. PET/MR imaging also has application in selected cases of cholangiocarcinoma, gallbladder cancer, and pancreatic carcinoma for initial staging and follow-up assessment. Answer: Yes, the novel PET/CT imaging modality does impact the treatment of patients with metastatic colorectal cancer of the liver. PET/CT has been shown to have a significant clinical impact on patient management in various stages of the disease, particularly in patients with suspected recurrent disease and those with liver metastases who might be eligible for surgery. 
It can be more beneficial than conventional imaging in these scenarios and can also be a useful modality to monitor treatment response (PUBMED:26099671). PET/CT is superior to contrast-enhanced CT (ceCT) for the detection of recurrent intrahepatic tumors after hepatectomy, extrahepatic metastases, and local recurrence at the site of the initial colorectal surgery. New findings in PET/CT resulted in a change in the therapeutic strategy in 21% of the patients, indicating its significant impact on treatment decisions (PUBMED:15570208). However, it is important to note that false positive findings from PET/CT imaging should be considered, as they can mislead physicians into improper therapeutic actions, such as in the case of a granuloma mimicking local recurrence (PUBMED:27625903). Moreover, the combination of PET/CT with respiratory gated PET/CT (rgPET/CT) has been shown to improve sensitivity significantly for colorectal liver metastases (CRLM), although high patient compliance is required for optimal performance (PUBMED:26044293). Additionally, simultaneous PET/MRI has demonstrated promising benefits in oncologic imaging due to the superior soft-tissue contrast of MRI, which can be advantageous for the characterization of liver lesions and the detection of additional hepatic metastases and extrahepatic disease (PUBMED:31368660). In summary, PET/CT and related hybrid imaging techniques like PET/MRI have a substantial impact on the treatment of patients with metastatic colorectal cancer of the liver by improving the accuracy of staging, detecting recurrences, and guiding therapeutic strategies.
Instruction: CD44 and its v6 spliced variant in lung tumors: a role in histogenesis? Abstracts: abstract_id: PUBMED:9210706 CD44 and its v6 spliced variant in lung tumors: a role in histogenesis? Background: CD44 is a polymorphic family of cell surface glycoproteins with a variety of functions including participation in cell adhesion and migration as well as modulation of cell-matrix interactions. Expression of the standard form of CD44 (CD44s) and its variant isoforms has been shown in both normal and neoplastic tissue and holds promise as a prognostic indicator. Methods: The authors investigated the expression of CD44s and its v6 isoform (CD44v6) immunohistochemically in 7 fetal lungs (gestational age between 11-36 weeks) and in 80 lung tumors of various histologic types, degrees of differentiation, and clinical stages. Results: In the fetal lung, CD44v6 was expressed as membranous and luminal staining of epithelial cells throughout gestation and basal staining of bronchial epithelium late in gestation. Nonneoplastic adult lung showed CD44v6 expression that was restricted to epithelial cells with membranous staining of basal bronchial cells and squamous metaplasia as well as basolateral membranous staining of type 2 pneumocytes. CD44s showed similar but less intense staining and was in addition present on lymphocytes and macrophages. Tumorlets and neuroepithelial bodies were CD44v6 negative. Nearly all squamous cell carcinomas (97%) were positive for CD44v6 with patterns similar to squamous metaplasia and with more intense staining at the periphery of tumor nests. Most adenocarcinomas (90%) were CD44v6 negative whereas most bronchioloalveolar cell carcinomas (71%) were CD44v6 positive with patterns similar to that in type 2 pneumocytes. Most large cell carcinomas (71%), carcinoid tumors (67%), and all small cell carcinomas were CD44v6 negative. CD44v6 expression did not correlate with clinical stage. CD44v6 expression in lymph node metastases was identical to that of the primary tumor. Conclusions: The results of the current study show that CD44v6 is localized differently in fetal and adult lung, suggesting a difference in function. In the fetal lung, it may modulate growth factors important in morphogenesis and maturation. In the adult nonneoplastic lung, CD44v6 is associated with stem cells, namely basal cells and type 2 pneumocytes, and may act to anchor these cells to the matrix and be important in migration during repair or neoplasia. In addition, CD44v6 expression is maintained throughout tumorigenesis in squamous cell carcinoma and bronchioloalveolar cell carcinoma, suggesting a histogenetic relationship between the stem cells and the respective tumors. Conversely, most neuroendocrine tumors and the cells of the dispersive neuroendocrine system do not express CD44v6, implying a separate histogenetic lineage in these tumors. abstract_id: PUBMED:12399124 Reduced expression of CD44 v3 and v6 is related to invasion in lung adenocarcinoma. We have previously reported that the histological pattern of invasion is correlated with the prognosis of surgically treated patients of lung adenocarcinoma. On the other hand, several clinicopathologic studies have shown that CD44 variant isoforms are associated with invasion and metastasis in human malignant tumors. The expression of CD44 variant isoforms v3 and v6 was analyzed in 93 Japanese lung adenocarcinoma patients by immunostaining to study the relationship between their expression and the invasion in lung adenocarcinoma. 
The specimens were histologically categorized into three groups. Both the invasive lesion and the noninvasive lesion were observed in 49 out of 93 cases (group I). Twenty cases were noninvasive carcinoma growing mainly in a lepidic pattern (group II). Twenty-three cases were invasive carcinoma which showed no frankly noninvasive lesion growing in a lepidic pattern (group III). Significantly reduced expression of CD44 v3 and v6 was observed in the invasive lesion compared with the noninvasive lesion in adenocarcinoma of group I (P < 0.05). Although reduced expression of CD44 v3 and v6 was observed in the invasive carcinoma of group III compared with the noninvasive carcinoma of group II, it was not significant (P = 0.0693 for v3, P = 0.0827 for v6). The pattern of expression of CD44 v3 was significantly concordant with that of CD44 v6 (P < 0.0001). Our results suggest that reduced expression of CD44 v3 and v6 is associated with invasion in lung adenocarcinoma. abstract_id: PUBMED:24716905 Correlation between expression of cell adhesion molecules CD44 v6 and E-cadherin and lymphatic metastasis in non-small cell lung cancer. Objective: To explore the relationship between expressions of cell adhesion molecules CD44 v6 and E-cadherin (E-cad) and lymphatic metastasis in non-small cell lung cancer (NSCLC). Materials And Methods: Eighty-seven tissue samples obtained from patients with primary NSCLC were collected in our hospital from Dec., 2007 to Dec., 2012, and the expressions of CD44 v6 and E-cad gene proteins in these samples were detected by an immunohistochemical method. Results: In the tissue without lymphatic metastasis, the positive expression rate of CD44 v6 was significantly lower, whereas the normal expression rate of E-cad was notably higher than that with lymphatic metastasis (55.6% vs. 78.4%, 47.2% vs. 21.6%), and both differences had statistical significance (P < 0.05). Besides, CD44 v6 and E-cad expressions had a significant correlation in the NSCLC tissue with lymphatic metastasis (P < 0.05). Conclusions: The positive expression of CD44 v6 and abnormal expression of E-cad may play a very important role in promoting lymphatic metastasis of NSCLC, with synergistic effect. Hence, detection of CD44 v6 and E-cad expressions is conducive to judging the lymphatic metastasis in NSCLC. abstract_id: PUBMED:25120740 Prognostic value of CD44 expression in non-small cell lung cancer: a systematic review. Background: CD44 is a potentially interesting prognostic marker and therapeutic target in non-small cell lung cancer (NSCLC). Although the expression of CD44 has been reported to correlate with poor prognosis of NSCLC in most studies, some controversies still exist. Given the limited patient numbers within independent studies, we performed a meta-analysis to clarify the correlations between CD44 expression and prognosis and clinicopathological features in NSCLC. Methods: Relevant studies were identified using PubMed, EMBASE and CNKI (China National Knowledge Infrastructure) databases (up to February 2014). Data from eligible studies were extracted and included into meta-analysis using a random effects model. Studies were pooled. Summary hazard ratios (HR) and clinical parameters were calculated. Results: We performed a final analysis of 1772 patients from 23 evaluable studies for prognostic value and 2167 patients from 28 evaluable studies for clinicopathological features.
Our study shows that the pooled hazard ratio (HR) of CD44-V6 overexpression for overall survival in NSCLC was 1.63 [95% confidence interval (CI): 1.20-2.21] by univariate analysis and 1.29 (95% CI: 0.71-2.37) by multivariate analysis. The pooled HR of overexpression of panCD44 for overall survival in NSCLC was 1.53 (95% CI: 0.58-4.04) by univariate analysis and 3.00 (95% CI: 1.53-5.87) by multivariate analysis. Overexpression of CD44-V6 is associated with tumor differentiation (poor differentiation, OR = 1.66, 95% CI: 1.12-2.45), tumor histological type [squamous cell carcinomas (SCC), OR = 2.6, 95% CI: 1.63-5.02], clinical TNM stage (TNM stage III, OR = 2.22, 95% CI: 1.44-3.43) and lymph node metastasis (N1-3, OR = 3.52, 95% CI: 2.08-5.93) in patients with NSCLC. However, there was no significant association between CD44-V6 and tumor size [T category, OR = 1.42, 95% CI: 0.73-2.78]. Conclusion: Our meta-analysis showed that CD44-V6 is an efficient prognostic factor for NSCLC. Overexpression of CD44-V6 was significantly associated with tumor differentiation, tumor histological type, clinical TNM stage and lymph node metastasis. However, there was no significant association between CD44-V6 and tumor size. Large prospective studies are now needed to confirm the clinical utility of CD44 as an independent prognostic marker. abstract_id: PUBMED:11263866 CD44 and its v6 spliced variant in lung carcinomas: relation to NCAM, CEA, EMA and UP1 and prognostic significance. CD44 is a polymorphic family of cell surface glycoproteins that was recently reported to have an important role in cell adhesion and migration as well as modulation of cell-matrix interactions. Thus, expression of CD44 has been proposed to be associated with malignant behavior of tumors like invasive growth and formation of metastasis. The expression of CD44s and its v6 isoform (CD44v6) was determined immunohistochemically in 106 lung tumors of various histophenotypes, degrees of differentiation, and clinical stages. The results were compared with the expression of NCAM, CEA, EMA and UP1 and with clinicopathological parameters including patients' survival. CD44s was expressed in all histophenotypes of non-small cell lung carcinomas (NSCLC) with tendency being squamous cell lung carcinoma (SqCC) > bronchioloalveolar adenocarcinoma (BAC) > conventional adenocarcinoma (ConAC) (91, 66.7 and 38.9%, respectively). CD44v6 revealed an almost identical distribution of positivity in all three subgroups of NSCLC mentioned above (91, 66.7 and 36.1%, respectively). In the subgroup of neuroendocrine tumors, CD44s and CD44v6 were restrictedly expressed in small cell lung carcinomas (2/14 tumors), while all 3 typical carcinoids were strongly positive for these markers. Expression of NCAM and CEA was significantly higher in the adenocarcinoma subgroup than in the SqCC subgroup (45.7 and 75% vs. 14.8 and 39%, respectively). NCAM expression was also significantly different in BACs and in ConACs (69.2 vs. 36.4%, p < 0.05). The expression of CD44 was related to the differentiation of SqCC. The carcinomas with keratinization were CD44 positive. Adenocarcinomas producing mucin were CD44 negative. The expression of CD44, NCAM, CEA, EMA and UP1 did not correlate with lymph node metastasis and disease stage. CD44v6 was the only marker whose expression was closely related to patients' survival. The absence of CD44v6, but not CD44s, in the NSCLC group was associated with significantly longer survival of patients compared to patients with CD44v6 positive tumors.
This difference was even higher in tumors negative for CD44v6 and simultaneously NCAM and/or CEA positive. The data of this study suggest that CD44v6 might be an independent prognostic factor in NSCLC. Moreover, our data provide further evidence of the diverse role of CD44 in the differentiation and progression of non-small cell lung carcinomas and neuroendocrine carcinomas of the lung. abstract_id: PUBMED:9493443 Expression of CD44 standard and CD44 variant 6 in human lung cancer. We immunohistochemically examined the expression of CD44 standard (CD44 st) and CD44 variant 6 (CD44 v6) in 112 cases of primary lung cancer, and their relationship to the clinical milieu, including the clinical stage. In 46 cases of squamous cell carcinoma, expression of CD44 st was observed in 45.7% of the cases, and expression of CD44 v6 was observed in 60.9%. In 43 cases of adenocarcinoma, positive staining of CD44 st and CD44 v6 was seen in 2.3% and 4.7% of the cases, respectively. None of 21 small cell carcinomas was positive for CD44 st or CD44 v6. In squamous cell carcinomas, the expression of CD44 st and CD44 v6 was observed at a rate significantly higher than in other histologic types. Most specimens positive for CD44 st stained positively for CD44 v6. Therefore, it seems likely that the CD44 expression observed in squamous cell carcinoma of the lung was a variant CD44 containing the domain encoded by variant exon 6. The expression of CD44 v6 was not related to the clinical stage. Significant association between CD44 v6 and differentiation of squamous cell carcinoma was seen; 2/7 (28.6%) for poorly differentiated, 19/31 (61.3%) for moderately differentiated, and 7/8 (87.5%) for well differentiated squamous cell carcinomas (p = 0.02 by trend test). It was previously reported that CD44 st and CD44 v6 were expressed in both normal bronchial epithelium and squamous cell metaplasia. These results suggest that the expression of CD44 v6 in squamous cell carcinoma of the lung may reflect the immunohistochemical characteristics of the tissue from which such carcinomas emerge. abstract_id: PUBMED:8649806 Expression of CD44 splice variants in normal respiratory epithelium and bronchial carcinomas: no evidence for altered CD44 splicing in metastasis. Expression of alternatively spliced CD44 adhesion molecules has been implicated in metastatic spread of various rodent and human tumors. To determine whether specific CD44 splice variants contribute to metastatic spread of bronchial cancers, we compared the expression of CD44 splice variants in normal bronchial epithelium and bronchial cancers, including tumors which already spread to regional lymph nodes or distant organs. Variant CD44 expression was analysed by immunohistochemistry using variant exon-specific monoclonal antibodies. The precise composition of CD44 transcripts was delineated by exon-specific RT-PCR. The concurring data obtained by both methods revealed that high levels of standard CD44 and variants v5 and v6 as well as low levels of variants v7 and v10 are expressed both in normal bronchial epithelium and squamous cell lung cancers. No CD44 expression was observed in the highly metastatic small cell lung cancers and adenocarcinomas with the exception of bronchioalveolar cancers showing weak expression of standard CD44. These data suggest that expression of alternatively spliced CD44 molecules in the bronchial tract is related to the distinct differentiation of the respiratory epithelium.
No correlation between expression of specific CD44 splice variants and metastasis of bronchial cancers was observed. abstract_id: PUBMED:7630011 Expression of CD44 alternative splicing variants in lung cancer Expression of isoforms of the CD44 is generated by alternative splicing of CD44 gene; CD44H: lacks all 10 alternative exons, CD44R: the alternative exons v8 to v10, CD44V: other group of variants which contains the alternative exon v6. In some tumors such as colorectal cancer, breast cancer, non-Hodgkin lymphoma, and melanoma, over-expressed CD44 isoform which contains such alternative-spliced variant exons may play a causative role in tumor metastasis. In lung cancer, however, the role of CD44 variants in tumor progression and metastasis is uncertain. In our study and reported literature, no definite correlation was observed between the expression of specific CD44 isoform and tumor progression or metastasis of lung cancer. abstract_id: PUBMED:7515025 Expression of alternatively spliced forms of the CD44 extracellular-matrix receptor on human lung carcinomas. Expression of isoforms of the CD44 hyaluronan receptor/lymph-node endothelial receptor by human tumour cells is thought to play a role in tumour growth and metastasis. These isoforms which vary in the length of the extracellular domain are generated by differential RNA splicing that involves the 10 alternative exons (v1 to v10) encoding the membrane proximal region of the molecule. Several tumours have been shown to over-express CD44 containing the v6 exon, and this, together with other evidence, has led to the suggestion that v6 may play a causative role in tumour metastasis. In this report we have compared the expression of CD44 isoforms between different lung tumour lines, including SCLC, squamous-cell carcinoma, adenocarcinoma and mesothelioma, using both RT-PCR and fluorescent antibody staining with a panel of CD44 exon-specific monoclonal antibodies (MAbs). Our results show large differences in vCD44 expression between individual tumour lines. Little or no vCD44 containing the metastasis-associated v6 exon was detected in most tumours, including the highly metastatic SCLC lines. Indeed, the SCLC lines and some squamous-cell carcinomas contained only very low levels of either vCD44 or CD44H, indicating that CD44 expression may not always correlate with tumour development or dissemination. One of the squamous-cell carcinomas studied (HOTZ) was found to express a complex mixture of CD44 splice variants similar to the immortalized normal bronchial epithelial line BEAS-2B. Cloning and sequencing of vCD44 from the HOTZ cell line yielded several splice variants that have also been identified on leukaemic cells, normal keratinocytes and activated peripheral-blood lymphocytes. abstract_id: PUBMED:9792156 CD44v3 and v6 variant isoform expression correlates with poor prognosis in early-stage vulvar cancer. Expression of alternatively spliced CD44 isoforms has been reported to correlate with poor prognosis in human squamous cell cancers, i.e. squamous cell cancer of the lung and cervix. The aim of this study was to evaluate whether CD44 isoform expression is a prognostic factor in early-stage squamous cell cancer of the vulva. Seventy cases of squamous cell carcinoma of the vulva International Federation of Gynaecology and Obstetrics (FIGO) stage I were examined immunohistochemically for expression of CD44 isoforms. 
We used four different variant exon sequence-specific murine monoclonal antibodies to epitopes encoded by exons v3, v5, v6 and v7-8 of human variant CD44. The correlation of CD44 expression with histological grade and disease-free and overall survival was investigated. CD44 isoforms CD44v3, CD44v5, CD44v6 and CD44v7-8 were detected in 28% (20/70), 47% (33/70), 33% (23/70) and 17% (12/70) of the tumour samples respectively. Patients suffering from tumours expressing CD44v6 had a poorer relapse-free (log-rank test, P = 0.02) and overall survival (log-rank test, P = 0.03). Likewise, patients suffering from tumours expressing CD44v3 had a poorer relapse-free (log-rank test, P = 0.04) and overall survival (log-rank test, P = 0.01). Expression of CD44v5 and CD44v7-8 did not compromise the patients' outcome. Histological grade did not correlate with CD44 isoform expression. Immunohistochemically detected expression of CD44 isoforms containing variant exon v6 or v3 is correlated with a poor relapse-free and overall survival in FIGO stage I vulvar cancer patients. Answer: CD44 and its v6 spliced variant appear to play a significant role in the histogenesis of lung tumors. CD44 is a cell surface glycoprotein involved in cell adhesion, migration, and modulation of cell-matrix interactions. The expression of CD44 and its variant isoforms, such as CD44v6, has been observed in both normal and neoplastic tissues and is considered a potential prognostic indicator (PUBMED:9210706). In fetal lung, CD44v6 expression is seen in epithelial cells throughout gestation, suggesting a role in morphogenesis and maturation. In adult nonneoplastic lung, CD44v6 is associated with stem cells, including basal cells and type 2 pneumocytes, and may be important in cell anchoring and migration during repair or neoplasia (PUBMED:9210706). The expression patterns of CD44v6 in lung tumors vary with histologic types. Nearly all squamous cell carcinomas are positive for CD44v6, with patterns similar to squamous metaplasia, while most adenocarcinomas are negative. Bronchioloalveolar cell carcinomas, on the other hand, show a high positivity for CD44v6, resembling the pattern in type 2 pneumocytes. This suggests a histogenetic relationship between the stem cells and the respective tumors. In contrast, neuroendocrine tumors and cells of the dispersive neuroendocrine system generally do not express CD44v6, implying a separate histogenetic lineage (PUBMED:9210706). Reduced expression of CD44v6 has been associated with invasion in lung adenocarcinoma, indicating its potential role in tumor progression (PUBMED:12399124). Furthermore, the expression of CD44v6 has been correlated with lymphatic metastasis in non-small cell lung cancer (NSCLC), suggesting its involvement in the metastatic process (PUBMED:24716905). A systematic review has also highlighted the prognostic value of CD44 expression in NSCLC, with overexpression of CD44v6 being associated with poor differentiation, squamous cell carcinoma histological type, advanced clinical TNM stage, and lymph node metastasis (PUBMED:25120740). Overall, the differential expression of CD44 and its v6 variant in lung tumors supports the notion that they may have a role in the histogenesis and progression of lung cancers. 
The expression of CD44v6 is maintained throughout tumorigenesis in certain types of lung cancers, such as squamous cell carcinoma and bronchioloalveolar cell carcinoma, which further supports the idea of a histogenetic link between these cancers and their cells of origin (PUBMED:9210706). The expression of CD44v6 has also been linked to patient survival in NSCLC, with the absence of CD44v6 expression being associated with significantly longer survival. This suggests that CD44v6 might serve as an independent prognostic factor in NSCLC (PUBMED:11263866). In squamous cell carcinomas of the lung, the expression of CD44v6 is significantly higher than in other histologic types and is associated with the degree of differentiation, indicating that CD44v6 expression may reflect the immunohistochemical characteristics of the tissue from which the carcinoma originates (PUBMED:9493443). However, it is important to note that the expression of CD44 splice variants does not always correlate with metastasis. For example, in bronchial cancers, no evidence was found for altered CD44 splicing in metastasis, suggesting that the role of CD44 variants in lung cancer may be more complex and not solely related to metastatic spread (PUBMED:8649806). In summary, CD44 and its v6 spliced variant are implicated in the histogenesis of lung tumors, with their expression patterns providing insights into the origins and progression of different lung cancer types. CD44v6, in particular, has been associated with certain lung cancer subtypes and may have prognostic significance, although its role in metastasis is not entirely clear and may vary among different lung cancer histologies.
Instruction: Does a Directive to an Internet Site Enhance the Doctor-Patient Interaction? Abstracts: abstract_id: PUBMED:26135078 Does a Directive to an Internet Site Enhance the Doctor-Patient Interaction? A Prospective Randomized Study for Patients with Carpal Tunnel Syndrome. Background: Sixty-two percent of patients would like their doctor to recommend a specific web site to find health information, but only 3% of patients receive such recommendations. We investigated whether providing patients with an Internet web-site link recommended by their physician would improve patient knowledge and satisfaction. Our hypothesis was that directing patients to a reliable web site would improve both. Methods: Sixty patients with a new diagnosis of carpal tunnel syndrome were prospectively randomized into two groups. Twenty-three patients in the control group had a traditional physician office visit and received standard care for carpal tunnel syndrome. Thirty-seven patients in the treatment group received a handout that directed them to the American Society for Surgery of the Hand (ASSH) web page on carpal tunnel syndrome in addition to the standard care provided in the office visit. Patients later completed a ten-question true-or-false knowledge questionnaire and a six-item satisfaction survey. Differences in scores were analyzed using two-sample t tests. Results: Less than half (48%) of the patients who were given the Internet directive reported that they had visited the recommended web site. The mean scores on the knowledge assessment (6.84 of 10 for the treatment group and 6.96 of 10 for the control group) and the satisfaction survey (4.49 of 5 for the treatment group and 4.43 of 5 for the control group) were similar for both groups. The mean score for knowledge was similar for the patients who had used the ASSH web site and for those who had not (6.89 and 6.97 respectively). Moreover, compared with patients who had not used the Internet at all to learn about carpal tunnel syndrome, patients who used the Internet scored 6.6% better (mean score, 7.14 for those who used the Internet compared with 6.70 for those who had not; p > 0.05). Regardless of Internet usage, most patients scored well on the knowledge assessment and reported a high level of satisfaction. Conclusions: Whether the patient was given a handout or had visited the ASSH or other Internet web sites, the knowledge and satisfaction scores for all patients were similar. Since the physician was the common denominator in both groups, the results indicate that the patient-physician relationship may be more valuable than the Internet in providing patient education. Clinical Relevance: Effective communication between patients and practitioners is the cornerstone of delivering excellent care and building trusting relationships. This study examines whether reliable Internet information should be embraced as a tool to enhance patient-surgeon communication in a clinical context. abstract_id: PUBMED:25515386 Doctor, know that your patient has Googled: Internet and reason for medical consultation. The internet is increasingly used as a source of medical information, not only by healthcare professionals but by patients as well. We describe the case of Patient A, a 32-year-old male with symptoms of sinusitis. He found a treatment on the internet and was informed when to consult a general practitioner. We also describe Patient B, a 31-year-old female with symptoms of fatigue after the delivery of her second child.
Her search on the internet led to several referrals to a medical specialist due to her conviction that her symptoms were caused by a hormonal imbalance. We conclude that medical information on the internet can support both the doctor and the patient, or it can present an obstacle to proper communication during a consultation. Awareness by medical professionals of the Googling behaviour of their patients may be helpful in detecting the underlying question and worries of the patient. abstract_id: PUBMED:32417891 Internet Disruptions in the Doctor-Patient Relationship. The ubiquitous access by patients to online information about health issues is disrupting the traditional doctor-patient relationship in fundamental ways. The knowledge imbalance has shifted and the last nails are being hammered into the coffin of medical paternalism. Ready access to Dr Google has many positive aspects but the risk of undiscerning acceptance by patients of unscientific, out-of-date or biased information for their decision-making remains. In turn this may feed into the content of the legal duty of care for doctors and contribute to a need for them to inquire sensitively into the sources of information that may be generating surprising or apparently illogical patient treatment choices. In addition, patients, those related to patients, and others have the potential to publish on the Internet incorrect and harmful information about doctors. A number of influential decisions by courts have now established the legitimacy of medical practitioners taking legal proceedings for defamation and injunctive relief to stop vituperative and vindictive online publications that are harming them personally, reputationally and commercially. Furthermore, disciplinary accountability has been imposed on doctors for intemperate, disrespectful online postings. All of these factors are contributing to a disruptive recalibration of the dynamics between doctors and their patients. abstract_id: PUBMED:31552851 Internet and doctor-patient relationship: Cross-sectional study of patients' perceptions and practices. Background: With the rapid rolling out of the information highway, an increasing number of patients are accessing the Internet for medical information. Against this background, the present study was undertaken. Objectives: To ascertain patients' use and opinion on impact of Internet on doctor-patient relationship. Methods: A cross-sectional study was done. A total of 709 patients was interviewed, 307 from urban and 402 from rural field practice areas. Institutional ethical approval was obtained before data collection. Categorical data were summarized by percentages with 95% confidence intervals (CIs). Quantitative data were summarized by mean and standard deviation. Associations were explored using odds ratio (OR) with 95% CI for categorical data and two sample t-test for quantitative data. Results: Internet for medical information was used by 50.35% of the patients (95% CI = 46.68, 54.02). More urban patients, i.e., 79.48% used Internet compared to rural patients, i.e., 28.11%. This difference was significant, OR = 9.9 (95% CI = 6.9, 14.0; P < 0.0001). Users of Internet had about 4 years more schooling than nonusers. This was significant, P < 0.0001. More users believed that this trend will improve the doctor-patient relations (51.26%), compared to nonusers (17.05%). This difference was significant, OR = 5.11, 95% CI = 3.61, 7.22, P < 0.0001.
Conclusions: A large proportion of patients used the Internet to get medical information, significantly more urban patients compared to rural patients. The implication of this is that doctors in times to come will be dealing with patients empowered by online health information. abstract_id: PUBMED:20391071 Doctor-patient communication about cancer-related internet information. This article explores the effect of doctor-patient communication about cancer-related Internet information on self-reported outcomes. Two hundred and thirty cancer patients and caregivers completed an online survey regarding their experiences searching for and discussing with their doctors cancer-related Internet information. Participants who assertively introduced the Internet information in a consultation were more likely to have their doctor agree with the information. When doctors showed interest and involvement and took the information seriously, participants were less likely to report a desire to change the doctor's response. Taking the information seriously was also associated with greater satisfaction. This preliminary evidence that the doctor's response is associated with patient outcomes indicates the potential for improving patient-centered communication. In an effort to maximize patient-centered communication, doctors should be encouraged to take their patients and the information they present seriously, as well as show their patients that they are interested and involved. abstract_id: PUBMED:19670004 Physicians' perception of the effects of Internet health information on the doctor-patient relationship. The objective of the study was to determine physicians' perception of the effects of health information on the internet on the doctor-patient relationship. An online questionnaire with 25 items was sent to the Korean physicians' e-mail, and 493 replied. Eighty-nine percent of the Korean physicians reported they had experiences of patients discussing the Internet health information. They perceived that Internet health information may enhance the patients' knowledge about their health. However, they perceived that Internet health information may have a variety of negative effects, such as heightening the cost of health care through inappropriate health service utilisation (56.2%); making the patients over-concerned about their health (74.5%); damaging the time efficiency of the visit (60.9%). The physicians deemed that such information was not relevant to the patients' health condition (42.7%), or even not correct (39.0%). Physicians' perception of the Internet health information is both positive and negative, and they perceive the overall effects on the doctor-patient relationship as neutral. More physicians think the discussion could be a hindrance to efficient time management during their visits. However, more physicians have a positive perception of the effects on the quality of care and patient outcomes, which is promising. abstract_id: PUBMED:18067435 An Internet method to assess cancer patient information needs and enhance doctor-patient communication: a pilot study. Background: We previously reported that doctor-patient communication in the cancer context may be suboptimal. We therefore developed measures to assess patient communication preferences and established feasibility of an Internet-based intervention to improve communication. Methods: Cancer patients completed an Internet-based survey about communication preferences, with a summary provided to the physician before the consultation.
Patients completed a follow-up survey to assess consultation content and satisfaction. Results: Study procedures were feasible, measures exhibited strong internal consistency, and patients expressed satisfaction with the intervention. Conclusion: The Internet offers an opportunity to assess patient preferences and prompt physicians about individual patient informational needs prior to the clinical encounter. abstract_id: PUBMED:28243527 Patient satisfaction with doctor-patient interaction and its association with modifiable cardiovascular risk factors among moderately-high risk patients in primary healthcare. Background: The outcomes of the physician-patient discussion influence the satisfaction of cardiovascular disease risk patients. Adherence to treatment, provision of continuous care, clinical management of the illness and patients' adjustment are influenced by satisfaction with physician-patient interaction. This study aims to determine patient satisfaction with the doctor-patient interaction and, over the six months following prevention counselling, its associations with modifiable cardiovascular risk factors amongst moderately-high risk patients in a primary healthcare clinic in Kelantan, Malaysia. Methods: A prospective survey was conducted amongst patients with moderately-high cardiovascular risk. A total of 104 moderately-high risk patients were recruited and underwent structured prevention counselling based on the World Health Organization guideline, and their satisfaction with the doctor-patient interaction was assessed using 'Skala Kepuasan Interaksi Perubatan-11,' the Malay version of the Medical Interview Satisfaction Scale-21. Systolic blood pressure, total cholesterol and high-density lipoprotein cholesterol were measured at baseline and at a follow-up visit at six months. Descriptive analysis, paired t test and linear regression analyses were performed. Results: A total of 102 patients responded, giving a response rate of 98.1%. At baseline, 76.5% of the respondents were satisfied with the relation with their doctor, with the favourable domain of distress relief (85.3%) and rapport/confidence (91.2%). The unfavourable domain was interaction outcome, with satisfaction in only 67.6% of the respondents. Between the two visits, changes had occurred in total cholesterol (P = 0.022) and in systolic blood pressure (P < 0.001). Six months after the initial visits, no relationship existed between patient satisfaction scores and changes in modifiable cardiovascular risks. Discussion: The 'Skala Kepuasan Interaksi Perubatan-11', which represents a component of the interpersonal doctor-patient relationship, can be used to assess improvements in medical skills and in medical training to enhance the quality of therapeutic communication. abstract_id: PUBMED:36334494 Online health communities and the patient-doctor relationship: An institutional logics perspective. Taking an institutional logics perspective, this study investigates how "internet-informed" patients manage tensions between the logic of personal choice and the logic of medical professionalism as they navigate treatment decisions and the patient-doctor relationship. Based on 44 semi-structured interviews with members of an online health community for people with diabetes, this study finds that patients exercise a great deal of agency in evaluating healthcare options not only by activating the logic of personal choice but also by appropriating the logic of medical professionalism.
Furthermore, patients are strategic in deciding what community advice to share with their doctor or nurse depending on the healthcare professionals' reaction to the logic of personal choice. In contrast to many previous studies that emphasise patient consumerism fuelled by information on the Internet, this study provides a more nuanced picture of patient-doctor relationship engendered by patients' participation in online health communities. abstract_id: PUBMED:21598810 The effect of the Internet on the patient-doctor relationship from the patient's perspective: a survey from primary care. Background: Internet use by patients as a source of information on health and disease is expanding rapidly with obvious effects on the doctor-patient relationship. Many of these effects remain undocumented or are poorly understood. Objectives: To assess the use of the internet for health information by patients in primary care in Israel and their perception of the effects of internet use on their relationship with their doctor. Methods: A cross-sectional survey was conducted among a convenience sample of patients visiting 10 primary care clinics in central Israel using a questionnaire developed for this survey. The survey examined attitudes to using the internet for health-related information and attitudes to sharing this information with doctors. Associations between demographic variables, internet use and patient satisfaction with the doctor's response were tested using the chi-square statistic and t-tests. Results: Completed questionnaires were received from 138 patients; the response rate was 69%. Patients in the study sample had a high rate of internet access (87%), with many using the internet as a source of health information (41%) although most patients using the internet never share this information with their doctor (81%). Among those who share information with the doctor, most felt that this has a positive effect on the relationship (87%). Few patients reported being referred to websites by the doctor (28%). Conclusions: Internet use is prevalent in this population, though physicians may be unaware of this. Future study could examine the effects of doctors who ask patients actively about their internet use and inform them of relevant health information sources online. Answer: Based on the provided abstracts, the answer to whether a directive to an internet site enhances the doctor-patient interaction is not straightforward and seems to depend on various factors. The study described in PUBMED:26135078 found that providing patients with a link to a reliable website (the American Society for Surgery of the Hand web page on carpal tunnel syndrome) did not significantly improve patient knowledge or satisfaction compared to standard care. The results suggested that the patient-physician relationship might be more valuable than the internet in providing patient education.
PUBMED:20391071 highlights that when doctors take the internet information presented by patients seriously and show interest, it can lead to greater patient satisfaction and less desire to change the doctor's response. PUBMED:19670004 reflects physicians' mixed perceptions of internet health information, noting both positive and negative effects on the doctor-patient relationship, with some concern about the efficiency of visits. PUBMED:18067435 discusses an internet-based intervention to improve communication by assessing patient preferences and providing this information to the physician before consultations, which was found to be feasible and satisfactory to patients. PUBMED:28243527 found no relationship between patient satisfaction with doctor-patient interaction and changes in modifiable cardiovascular risks over six months, suggesting that satisfaction with interaction may not directly influence health outcomes. PUBMED:36334494 explores how patients from an online health community for diabetes manage tensions between personal choice and medical professionalism in their treatment decisions and interactions with healthcare professionals. Lastly, PUBMED:21598810 reports that while many patients use the internet for health information, most do not share this information with their doctor, and few are directed to websites by their doctor. In summary, while the internet can be a valuable tool for patient education and empowerment, its impact on enhancing the doctor-patient interaction is complex and multifaceted, with studies showing varying results. It appears that the effectiveness of directing patients to internet sites may depend on how this information is integrated into the consultation and the responsiveness of the healthcare provider to the patient's use of online information.
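The survey study summarised in this entry (PUBMED:21598810) tested associations between internet use, information sharing, and satisfaction with chi-square statistics and t-tests. As a rough, hypothetical illustration of that kind of analysis, the sketch below runs a chi-square test of independence on an invented 2x2 table of "shared online information with the doctor" against "satisfied with the doctor's response"; none of the counts come from the study.

```python
# Hypothetical sketch of the chi-square association test mentioned in the survey
# abstract above; the 2x2 counts are invented, not taken from the study.
from scipy.stats import chi2_contingency

#         satisfied, not satisfied
table = [[20, 6],    # shared internet information with the doctor (hypothetical)
         [55, 57]]   # did not share it (hypothetical)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```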
Instruction: Effect of psychiatric illness and labour market status on suicide: a healthy worker effect? Abstracts: abstract_id: PUBMED:15965145 Effect of psychiatric illness and labour market status on suicide: a healthy worker effect? Study Objective: To describe the association between labour market status and death by suicide with focus on admission with a psychiatric disorder. Design: Nested case-control study. Data from routine registers. Setting: Entire Danish population. Participants: 9011 people aged 25-60 years who committed suicide during 1982-1997 and 180 220 matched controls. Main Results: In the general population, not being fully employed is associated with a twofold to threefold increased relative risk of death by suicide, compared with being fully employed. In contrast, fully employed people who have been first admitted to a psychiatric hospital within the past year are at increased suicide risk. Patients who are unemployed, social benefits recipients, disability pensioners, or otherwise marginalised on the labour market have a suicide risk of 0.60 (95% CI: 0.46 to 0.78), 0.41 (0.23 to 0.74), 0.70 (0.45 to 1.08), and 0.86 (0.53 to 1.41), respectively. Although a similar risk decrease is found in women, men, people younger than 30 years, people older than 45 years, and in people who become unemployed, the reversed effect attenuates with time since admission, and little association is seen when a marginal structural model is applied. Conclusions: Although the results show an increased suicide mortality associated with unemployment and labour market marginalisation in the general population, the results suggest little or an inverse association between unemployment and suicide in people with psychiatric illness. The associations seen suggest the need to consider healthy worker selection effects when studying the causal pathway from unemployment and psychiatric illness to suicide. abstract_id: PUBMED:28220213 Labour market marginalisation subsequent to suicide attempt in young migrants and native Swedes. Purpose: This study aimed to compare young individuals who differed in terms of birth region and history of suicide attempt regarding socio-demographic and healthcare factors, and with regard to their risks of subsequent unemployment, sickness absence and disability pension. Methods: Prospective cohort study based on register linkage of 2,801,558 Swedish residents, aged 16-40 years in 2004, without disability pension and with known birth country, followed up 2005-2011. Suicide attempters treated in inpatient care during 2002-2004 (N = 9149) were compared to the general population of the same age without attempt 1987-2011 (N = 2,792,409). Hazard ratios (HR) and 95% confidence intervals (CIs) for long-term unemployment (>180 days), sickness absence (>90 days), and disability pension were calculated with Cox regression, adjusted for several risk markers. Results: Compared to Swedish natives with suicide attempt, migrants of non-Western origin with attempt received less specialised mental healthcare. Distinct differences between native Swedes and migrants were present for the three labour market outcomes, but differences between migrant subgroups were inconsistent. As compared to native Swedes without attempts, non-European migrants with suicide attempt had adjusted HRs and CIs for subsequent unemployment 2.8 (2.5-3.1), sickness absence 2.0 (1.7-2.3) and disability pension 2.2 (1.8-2.6).
Respective estimates for natives with suicide attempt were 2.0 (1.9-2.1), 2.7 (2.6-2.9) and 3.4 (3.2-3.6). Conclusions: Migrant suicide attempters receive less specialised mental health care before their attempt than native Swedes, and their marginalisation patterns are different. Healthcare and policy makers need to take the differential risk profile for migrant and native populations into account. abstract_id: PUBMED:25102855 Future risk of labour market marginalization in young suicide attempters--a population-based prospective cohort study. Background: Research on future labour market marginalization following suicide attempt at young age is scarce. We investigated the effects of suicide attempts on three labour market outcomes: unemployment, sickness absence and disability pension. Methods: We conducted a prospective cohort study based on register linkage of 1,613,816 individuals who in 1994 were 16-30 years old and lived in Sweden. Suicide attempters treated in inpatient care during the 3 years preceding study entry, i.e. 1992-94 (N=5649) were compared with the general population of the same age without suicide attempt between 1973 and 2010 (n=1,608,167). Hazard ratios (HRs) for long-term unemployment (>180 days), sickness absence (>90 days) and disability pension in 1995-2010 were calculated by Cox regression models, adjusted for a number of parental and individual risk markers, and stratified for previous psychiatric inpatient care not due to suicide attempt. Results: The risks for unemployment [HR 1.58; 95% confidence interval (CI) 1.52-1.64], sickness absence (HR 2.16; 2.08-2.24) and disability pension (HR 4.57; 4.34-4.81) were considerably increased among suicide attempters. There was a dose-response relationship between number of suicide attempts and the risk of disability pension, for individuals both with or without previous psychiatric hospitalizations not due to suicide attempts. No such relationship was present with regard to unemployment. Conclusions: This study highlights the strong association of suicide attempts with future marginalization from the labour market, particularly for outcomes that are based on a medical assessment. Studies that focus only on unemployment may largely underestimate the true detrimental impact of suicide attempt on labour market marginalization. abstract_id: PUBMED:26784886 Medical and Social Determinants of Subsequent Labour Market Marginalization in Young Hospitalized Suicide Attempters. Background: Individuals with a history of suicide attempt have a high risk for subsequent labour market marginalization. This study aimed at assessing the effect of individual and parental factors on different measures of marginalization. Methods: Prospective cohort study based on register linkage of 5 649 individuals who in 1994 were 16-30 years old, lived in Sweden and were treated in inpatient care for suicide attempt during 1992-1994. Hazard ratios (HRs) for labour market marginalization defined as long-term unemployment (>180 days), sickness absence (>90 days), or disability pension in 1995-2010 were calculated with Cox regression. Results: Medical risk factors, particularly any earlier diagnosed specific mental disorders (e.g., schizophrenia: HR 5.4 (95% CI: 4.2, 7.0), personality disorders: HR 3.9, 95% CI: 3.1, 4.9), repetitive suicide attempts (HR 1.6, 95% CI: 1.4, 1.9) were associated with a higher relative risk of disability pension.
Individual medical factors were of smaller importance for long-term sickness absence, and of only marginal relevance to long-term unemployment. Country of birth outside Europe had an opposite effect on disability pension (HR 0.6, 95% CI: 0.4, 0.8) and long-term unemployment (HR 1.5, 95% CI: 1.3, 1.8). Female sex was positively correlated with long-term sickness absence (HR 1.6, 95% CI: 1.4, 1.7), and negatively associated with long-term unemployment (HR: 0.8, 95% CI: 0.7, 0.9). Conclusions: As compared to disability pension, long-term sickness absence and unemployment were more strongly related to socio-economic variables. Marginalization pathways seemed to vary with migration status and sex. These findings may contribute to the development of intervention strategies which take the individual risk for marginalization into account. abstract_id: PUBMED:25516610 Suicide among first-generation and second-generation immigrants in Sweden: association with labour market marginalisation and morbidity. Background: Previous research suggests that first-generation immigrants have a lower suicide risk than those both born in Sweden and with both parents born in Sweden (natives), while the suicide risk in the second generation seems higher. The aim of this study was to investigate to what extent suicide risk in first-generation and second-generation (both parents born abroad) and intermediate-generation (only one parent born abroad) immigrants compared with natives is associated with sociodemographic factors, labour market marginalisation and morbidity. Methods: A prospective population-based cohort study of 4 034 728 individuals aged 16-50 years was followed from 2005 to 2010. HRs for suicide were calculated for first-generation, intermediate-generation and second-generation immigrants compared with natives. Analyses were controlled for sociodemographic factors, morbidity and labour market marginalisation. Results: The HR of suicide was significantly lower in first-generation immigrants (HR 0.83 CI 0.76 to 0.91), and higher in second-generation (HR 1.32, CI 1.15 to 1.52) and intermediate-generation immigrants (HR 1.20, CI 1.08 to 1.33) in comparison to natives. The excess risk was explained by differences in sociodemographics, morbidity and labour market marginalisation. In the fully adjusted models, a higher HR remained only for the Nordic second generation (HR 1.29, CI 1.09 to 1.52). There were no sex differences in HRs. Conclusions: The risk of suicide was shown to be lower in the first generation and higher in the second generation compared with natives. The higher HR in the Nordic second generation was not explained by differences in sociodemographics, labour market marginalisation and morbidity. Further research is warranted to investigate factors underlying this excess risk. abstract_id: PUBMED:29036335 Period effects in the risk of subsequent labour market marginalisation in young suicide attempters. Background: Suicide attempt in young age is associated with subsequent labour market marginalisation, but little is known about how marginalisation is affected by changes in suicide attempt rates and social insurance legislation and by age differences. Methods: Prospective cohort study based on register linkage of > 2.4 million Swedish residents per birth cohort, aged 19-40 years in 1999, 2004 and 2009, respectively, and followed up for 4 years.
Suicide attempters treated in inpatient care in the three years preceding study entry (n > 7000 per cohort) were compared with the general population of the same age without attempt (1987 to end of follow-up). Hazard ratios (HR) and 95% confidence intervals for long-term unemployment (>180 days), sickness absence (>90 days) and disability pension were calculated with Cox regression, adjusted for several risk markers. Additional analyses were stratified by age (below/above 30 years). Results: Across all cohorts, suicide attempt was associated with subsequent labour market marginalisation. Estimates were generally highest for disability pension [e.g. 2009 cohort: adjusted (a) HR = 2.7], followed by sickness absence (2009 cohort: aHR = 2.3) and unemployment (2009 cohort: aHR = 1.5). aHRs were higher in the 2004 and 2009 cohorts compared with the 1999 cohort. For disability pension, for example, aHRs were 2.39, 3.90 and 2.68 for the 1999, 2004 and 2009 cohorts, respectively. Stratification revealed marginal age differences. Conclusion: It seems to have become more difficult for suicide attempters to establish themselves on the labour market in later cohorts, which might result from changes in social insurance regulations. There were no considerable age differences. abstract_id: PUBMED:28360588 The Effect of Violence on the Diagnoses and the Course of Illness Among Female Psychiatric Inpatients. Introduction: The aim of the study was to determine the rate of exposure to domestic violence among female inpatients at any period of their lives; to investigate the effect of different forms of violence on the diagnoses and the course of the illness. Method: The study was conducted on 102 female inpatients treated at Bakirkoy Research and Training Hospital for Psychiatry, Neurology and Neurosurgery. The Structured Clinical Interview for DSM-IV Axis I Disorders (SCID-I) was administered and socio-demographic and clinical data was collected. A form designed for the assessment of violence was used to evaluate domestic violence. Results: Ninety patients reported that they had been subjected to some kind of violence at some period of their lives. The parents or husbands were the most frequently reported persecutors. Seventy-three patients reported that they had been subjected to violence before the onset of their illness. Seventy-one had been subjected to physical, 79 to verbal, 42 to sexual, 52 to economic violence, and 49 to constraints on social relationship formation. Comorbid diagnosis of post traumatic stress disorder (PTSD) was related to all types of violence. The rate of suicide attempt was found to be significantly related to verbal-emotional violence. Only 12 patients had previously reported being subjected to domestic violence to their psychiatrist. Conclusion: Domestic violence, an often overlooked phenomenon, is prevalent among women with psychiatric disorders. Subjection to domestic violence is found to be correlated with PTSD and suicidal attempt. abstract_id: PUBMED:26800990 Crisis, suicide and labour productivity losses in Spain. Objectives: Suicide became the first cause of death between the ages of 15 and 44 in Spain in the year 2013. Moreover, the suicide rate in Spain went up by more than 9 % with respect to the previous year. This increase could be related to the serious economic recession that Spain has been experiencing in recent years.
In this sense, there is a lack of evidence to help assess to what extent these suicides have a social cost in terms of losses in human capital. Firstly, this article examines the relationship between the variables related to the economic cycle and the suicide rates in the 17 Spanish regions. Secondly, an estimate is made of the losses in labour productivity owing to these suicides. Methodology: In this article, panel data models are used to consider different variables related to the economic cycle. Demographic variables and the suicide rates for regions across Spain from 2002 to 2013 also come into play. The present and future production costs owing to premature death from suicide are calculated using a human capital model. These costs are valued from the gross salary that an individual no longer receives in the future at the very moment he or she leaves the labour market. Results: The results provide a strong indication that a decrease in economic growth and an increase in unemployment negatively affect suicide rates. Due to suicide, 38,038 potential years of working life were lost in 2013. This has an estimated cost of over 565 million euros. Conclusions: The economic crisis endured by Spain in recent years has played a role in the higher suicide rates one can observe from the data in official statistics. From a social perspective, suicide is a public health problem with far-reaching consequences. abstract_id: PUBMED:36748828 Hospital inpatient suicides: A retrospective comparison between psychiatric and non-psychiatric inpatients in Milan healthcare facilities. Introduction: Inpatient suicide in hospitals is a worrying phenomenon that has received little attention. This study retrospectively explored the socio-demographic, clinical, and suicide-related characteristics of hospital inpatient suicides in Milan, Italy, which were collected at the Institute of Forensic Medicine during a twenty-eight-year period (1993-2020). In particular, this study compared the features of hospital inpatient suicides in patients with and without psychiatric diagnoses. Methods: Data were collected through the historical archive, annual registers, and autopsy reports, in certified copies of the originals deposited with the prosecutors of the courts. Results: Considering the global sample, inpatients were mainly men (N = 128; 64.6%), with a mean age of 56.7 years (SD ± 19.8), of Italian nationality (N = 176; 88.9%), admitted to non-psychiatric wards (N = 132; 66.7%), with a single illness (N = 111; 56.1%), treated with psychotropic medications (N = 101; 51%), who used violent suicide methods (N = 177; 89.4%), died of organic injuries (N = 156; 78.8%), and outside the buildings (N = 114; 72.7%). Comparing psychiatric and non-psychiatric inpatients, suicide cases with a non-psychiatric diagnosis were predominantly men (N = 48; 76.2%), hospitalized in non-psychiatric wards (N = 62; 98.4%), assuming non-psychotropic drugs (N = 37; 58.7%), and died in outside hospital spaces (N = 54; 85.7%). Conclusions: A fuller characterization of suicide among hospitalized inpatients requires systematic and computerized data gathering that provides for specific information. Indeed, this could be valuable for inpatient suicide prevention strategies as well as institutional policies. abstract_id: PUBMED:12823086 The suicide risk of discharged psychiatric patients. 
Background: The suicide risk of psychiatric patients fluctuated along the course of their illness and was found to be high in the immediate post-discharge period in some settings. The epidemiology and psychiatric services for the suicide population in Hong Kong have differed from those of the West (i.e., low youth suicide rate, high elderly suicide rate, high female/male ratio, and heavily government-subsidized psychiatric service). This study examined the suicide rates within a year of discharge from psychiatric inpatient care in Hong Kong. Method: Discharges from all psychiatric hospitals or psychiatric wards in general hospitals in Hong Kong from 1997 through 1999 were followed up for suicides (ICD-9, E950-E959) and "undetermined" causes of deaths (E980-E989) by record linkage with the Coroner's Court until their deaths or Dec. 31, 2000. The suicide rates (/1000 person-years at risk) and standardized mortality ratios (SMRs; assigning a value of 1 to the same age- and sex-specific suicide rates in the general population) were calculated. Results: 21,921 patients (aged over 15 years) were discharged from psychiatric hospitals from 1997 through 1999. Two hundred eighty patients committed suicide within 1 year of discharge; 85 suicides (30%) occurred within 28 days after discharge. The SMRs for suicide in the first 28 days after discharge were 178 (95% CI = 132 to 235) for females and 113 (95% CI = 86 to 147) for males. These rates were 4.0 (95% CI = 2.7 to 5.6) times higher for females and 4.6 (95% CI = 3.2 to 6.3) times higher for males than the rate in the rest of the year. Young adults had higher SMRs than the elderly. No specific diagnoses had higher suicidal risk than others. Calculations including undetermined causes of deaths (N = 53) gave similar results. Conclusion: The immediate post-discharge period carries a high risk of suicide for psychiatric patients. The high-risk groups are young adults and females. No diagnosis appears to carry a particularly high risk. Answer: The relationship between psychiatric illness, labour market status, and suicide risk has been explored in various studies, and the findings suggest a complex interaction between these factors. In the general population, not being fully employed is associated with a two to threefold increased relative risk of death by suicide compared to being fully employed. However, this association appears to be different for individuals with psychiatric illness. Fully employed people who have been admitted to a psychiatric hospital within the past year are at an increased risk of suicide. In contrast, patients with psychiatric illness who are unemployed, social benefits recipients, disability pensioners, or otherwise marginalized on the labour market have a lower suicide risk compared to fully employed psychiatric patients (PUBMED:15965145). This inverse association between unemployment and suicide in people with psychiatric illness suggests the need to consider healthy worker selection effects when studying the causal pathway from unemployment and psychiatric illness to suicide. The healthy worker effect refers to the observation that employed populations tend to have lower mortality rates than the general population because seriously ill or disabled individuals are less likely to be employed. Furthermore, research indicates that individuals with a history of suicide attempt face significant labour market marginalization, with increased risks for unemployment, sickness absence, and disability pension. 
This marginalization is particularly pronounced for outcomes based on a medical assessment, such as disability pension (PUBMED:25102855). Medical risk factors, especially diagnosed specific mental disorders and repetitive suicide attempts, are associated with a higher relative risk of disability pension, while socio-economic variables are more strongly related to long-term sickness absence and unemployment (PUBMED:26784886). Additionally, migrant suicide attempters, particularly non-Western migrants, receive less specialized mental health care before their attempt than native Swedes and face different patterns of marginalization in the labour market (PUBMED:28220213). The risk of suicide is lower in first-generation immigrants and higher in second-generation immigrants compared with natives, with the excess risk in the second generation not fully explained by differences in sociodemographics, labour market marginalization, and morbidity (PUBMED:25516610). Overall, these findings highlight the need for tailored intervention strategies that take into account the individual risk for marginalization and the complex interplay between psychiatric illness, employment status, and suicide risk.
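Several abstracts in this entry express risk as a standardized mortality ratio (SMR): observed suicides divided by the number expected from age- and sex-specific rates in the general population, as in the discharged-patients study (PUBMED:12823086). The sketch below shows that arithmetic on invented person-years, reference rates, and counts (none of the numbers come from the studies); the confidence interval uses Byar's approximation for a Poisson count.

```python
# Minimal sketch of a standardized mortality ratio (SMR): observed suicides divided
# by the number expected from reference rates. All numbers are illustrative.
import math

# (person-years at risk, reference suicide rate per person-year) for each age band
strata = [(1200.0, 0.0004), (900.0, 0.0007), (400.0, 0.0012)]
observed = 9

expected = sum(py * rate for py, rate in strata)
smr = observed / expected

# Approximate 95% CI via Byar's approximation for the Poisson count
low = observed * (1 - 1 / (9 * observed) - 1.96 / (3 * math.sqrt(observed))) ** 3 / expected
o1 = observed + 1
high = o1 * (1 - 1 / (9 * o1) + 1.96 / (3 * math.sqrt(o1))) ** 3 / expected
print(f"SMR = {smr:.1f} (95% CI {low:.1f} to {high:.1f})")
```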
Instruction: Are patients with asthma at increased risk of coronary heart disease? Abstracts: abstract_id: PUBMED:33371008 Coronary heart disease and heart failure in asthma, COPD and asthma-COPD overlap. Introduction: We investigated risk of coronary heart disease and heart failure in phenotypes of obstructive airway disease. Methods: Among 91 692 participants in the Copenhagen General Population Study, 42 058 individuals were classified with no respiratory disease, and 11 988 individuals had different phenotypes of obstructive airways disease: asthma with early onset or late-onset, chronic obstructive pulmonary disease (COPD) with forced expiratory volume in one second (FEV1) above or below 50% of predicted value (%p) or asthma-COPD overlap (ACO). Results: During a mean follow-up of 5.7 years we registered 3584 admissions for coronary heart disease and 1590 admissions for heart failure. Multivariable Cox regression analyses of time to first admission were used with a two-sided p value of 0.05 as significance level. Compared with no respiratory disease the highest risks of coronary heart disease and heart failure were observed in ACO with late-onset asthma and FEV1 <50% p, HR=2.2 (95% CI 1.6 to 3.0), and HR=2.9 (95% CI 2.0 to 4.3), respectively. In COPD with FEV1 above 50% p the HRs were 1.3 (95% CI 1.2 to 1.5) for coronary heart disease and 1.9 (95% CI 1.6 to 2.3) for heart failure. Asthma associated with increased risks of coronary heart disease and heart failure, however, in asthma without allergy the HR was 1.1 (95% CI 0.7 to 1.6) for coronary heart disease while individuals with allergy had an HR of 1.4 (95% CI 1.1 to 1.6). Conclusions: Risks of coronary heart disease and heart failure were increased in asthma, COPD and ACO. In asthma, the risk of coronary heart disease depended on presence of allergy. We suggest that cardiovascular risk factors should be assessed systematically in individuals with obstructive airway disease with the potential to facilitate targeted treatments. abstract_id: PUBMED:15131088 Are patients with asthma at increased risk of coronary heart disease? Background: Inflammation plays a role in the pathogenesis of athero-thrombosis. Because of the chronic, inflammatory nature of asthma, we hypothesized a possible link between asthma and prospective risk of coronary heart disease (CHD). Methods: We performed a cohort study among 70 047 men and 81 573 women, 18-85 years old, enrolled in a large managed care organization in Northern California. Asthma was ascertained by self-report at baseline in 1964-1973 and/or interim hospitalization for asthma during follow-up. The primary endpoint was combined non-fatal or fatal CHD. Results: After a median follow-up time of 27 years, and adjusting for age, race/ethnicity, education level, smoking status, alcohol consumption, body mass index, serum total cholesterol, white blood cell count, hypertension, diabetes, and history of occupational exposures, asthma was associated with a 1.22-fold (95% CI: 1.14, 1.31) increased hazard of CHD among women. This association was seen both in never and in ever smoking women, and in younger and older women. By contrast, asthma was not associated with CHD among men (multivariate-adjusted hazard ratio = 0.99; 95% CI: 0.93, 1.05). Conclusions: Asthma was independently associated with a modest but statistically significant increased hazard of CHD among women. Further studies are warranted to confirm or refute these preliminary epidemiological findings.
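The cohort studies quoted in this entry report multivariable-adjusted hazard ratios from Cox proportional hazards regression. The sketch below shows how such an adjusted hazard ratio could be estimated on synthetic data; it assumes the third-party lifelines package is installed and is not the authors' actual analysis code.

```python
# Minimal sketch of an adjusted hazard ratio (e.g. asthma -> CHD) from a Cox model.
# Synthetic data; assumes the `lifelines` package is available.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 2000
asthma = rng.integers(0, 2, n)
age = rng.normal(55, 10, n)
smoker = rng.integers(0, 2, n)

# Simulate follow-up with a higher event rate for asthma (true HR ~ exp(0.34) ~ 1.4)
hazard = 0.01 * np.exp(0.34 * asthma + 0.03 * (age - 55) + 0.5 * smoker)
time = rng.exponential(1 / hazard)
event = (time < 15).astype(int)          # administrative censoring at 15 years
time = np.minimum(time, 15)

df = pd.DataFrame({"time": time, "event": event,
                   "asthma": asthma, "age": age, "smoker": smoker})
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print(np.exp(cph.params_))               # hazard ratios adjusted for age and smoking
```

The exponentiated coefficient for the exposure term is the adjusted hazard ratio reported in such studies.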
abstract_id: PUBMED:23139248 Adult asthma and risk of coronary heart disease, cerebrovascular disease, and heart failure: a prospective study of 2 matched cohorts. Asthma has been associated with increased cardiovascular disease (CVD) risk. The authors ascertained the association of asthma with CVD and the roles that sex, concurrent allergy, and asthma medications may play in this association. They assembled a cohort of 203,595 Northern California adults with asthma and a parallel asthma-free referent cohort (matched 1:1 on age, sex, and race/ethnicity); both cohorts were followed for incident nonfatal or fatal CVD and all-cause mortality from January 1, 1996, through December 31, 2008. Each cohort was 66% female and 47% white. After adjustment for age, sex, race/ethnicity, cardiac risk factors, and comorbid allergy, asthma was associated with a 1.40-fold (95% confidence interval (CI): 1.35, 1.45) increased hazard of coronary heart disease, a 1.20-fold (95% CI: 1.15, 1.25) hazard of cerebrovascular disease, a 2.14-fold (95% CI: 2.06, 2.22) hazard of heart failure, and a 3.28-fold (95% CI: 3.15, 3.41) hazard of all-cause mortality. Stronger associations were noted among women. Comorbid allergy predicted CVD but did not synergistically increase the CVD risk associated with asthma. Only asthma patients using asthma medications (particularly those on oral corticosteroids alone or in combination) were at enhanced risk of CVD. In conclusion, asthma was prospectively associated with increased risk of major CVD. Modifying effects were noted for sex and asthma medication use but not for comorbid allergy. abstract_id: PUBMED:21272803 Increased risk for coronary heart disease, asthma, and connective tissue diseases in inflammatory bowel disease. Background And Aims: Patients with inflammatory bowel diseases (IBD) show increased risk for other immune-mediated diseases such as arthritis, ankylosing spondylitis, and some pulmonary diseases. Less is known about the prevalence of other chronic diseases in IBD, and the impact of comorbidity on health-related quality of life (HRQoL). Methods: The study population comprised 2831 IBD patients recruited from the National Health Insurance register and from a patient-association register. Study subjects completed generic 15D and disease-specific IBDQ questionnaires. The Social Insurance Institution of Finland provided data on other chronic diseases entitling patients to reimbursed medication. For each study subject, two controls, matched for age, sex, and hospital district, were chosen. Results: A significant increase existed in prevalence of connective tissue diseases, pernicious anemia and asthma. Furthermore, coronary heart disease (CHD) occurred significantly more frequently in IBD patients than in their peers (p=0.004). The difference was, however, more clearly seen in females (p=0.014 versus 0.046 in males). Active and long-lasting IBD were risk factors. Concomitant other chronic diseases appeared to impair HRQoL. Asthma, hypertension and psychological disorders had an especially strong negative impact on HRQoL, as observed with both the generic and disease-specific HRQoL tools. Conclusions: In addition to many immune-mediated diseases, CHD appeared to be more common in IBD than in control patients, especially in females. The reason is unknown, but chronic inflammation may predispose to atherosclerosis. This finding should encourage more efficacious management of underlying cardiovascular risk factors, and probably also inflammatory activity in IBD. 
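The matched-cohort designs above (and the IBD study, which chose two controls per patient matched on age, sex, and hospital district) rely on a control-selection step. A minimal, hypothetical sketch of that step with pandas, using invented column names and synthetic data, is:

```python
# Minimal sketch of picking 2 controls per case, matched on age band and sex, in the
# spirit of the register-based designs above. Data and column names are synthetic.
import pandas as pd

cases = pd.DataFrame({"id": [1, 2], "age_band": ["40-49", "60-69"], "sex": ["F", "M"]})
pool = pd.DataFrame({"id": list(range(100, 110)),
                     "age_band": ["40-49"] * 5 + ["60-69"] * 5,
                     "sex": ["F", "M"] * 5})

matched, used = [], set()
for _, case in cases.iterrows():
    eligible = pool[(pool["age_band"] == case["age_band"])
                    & (pool["sex"] == case["sex"])
                    & (~pool["id"].isin(used))]
    picked = eligible.head(2)                    # first two eligible controls
    used.update(picked["id"])
    matched.append(picked.assign(case_id=case["id"]))

print(pd.concat(matched))
```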
abstract_id: PUBMED:28433577 Asthma and risk of coronary heart disease: A meta-analysis of cohort studies. Background: Few studies have investigated the incidence of coronary heart disease (CHD) in patients with asthma, and their results remain inconclusive. Objective: To conduct a meta-analysis to determine whether asthma increases the risk of CHD. Methods: A systematic literature search of the PubMed and Embase databases from inception to August 2016, complemented with references screening of relevant articles and reviews, was performed to identify eligible studies. Only longitudinal cohort studies were included in our meta-analysis. Results: The retrieval process yielded 7 studies (12 asthma cohorts) with 495,024 patients. Data pooling across the cohorts revealed that asthma was associated with an increased risk of CHD (hazard ratio [HR], 1.42; 95% confidence interval [CI], 1.30-1.57; P < .001), without significant heterogeneity across the studies (I2 = 26%, P = .19). This epidemiologic association was more pronounced in female than in male patients (female: HR, 1.50; 95% CI, 1.41-1.59; male: HR, 1.31; 95% CI, 1.16-1.47; P for interaction = .046). In addition, subgroup and sensitivity analyses supported the positive correlation between asthma and incident CHD. Conclusion: Asthma is related to an increased incidence of CHD, particularly in women. Clinicians should be aware of this association when faced with a patient with asthma. Further investigations are required to examine how this excess risk should be managed in routine practice. abstract_id: PUBMED:22152514 Undiagnosed airflow limitation in patients at cardiovascular risk. Background: Chronic obstructive pulmonary disease (COPD) and cardiovascular diseases (CVD) share risk factors and impair each other's prognosis. Aims: To assess the prevalence of airflow limitation (AL) compatible with COPD in a population at cardiovascular risk and to identify determinants of AL. Methods: All consecutive patients referred to the cardiovascular prevention unit of a university hospital in 2009 were studied in a cross-sectional analysis. Patients answered questionnaires on socioeconomic status, medical history and lifestyle, and underwent extensive physical examinations, biological measures and spirometry testing. AL was defined as FEV1/FVC<0.70, without any history of asthma. Determinants of AL were assessed using logistic regression. Results: The sample comprised 493 participants (mean age 57.4±11.1 years); 60% were men, 18% were current smokers, 42% were ex-smokers and 10% of patients had a history of CVD. Ten-year risk of coronary heart disease (CHD) according to the Framingham equation was intermediate (10-20%) for 25% of patients and high (>20%) for 10%. Prevalence of AL was 5.9% (95% confidence interval [CI] 4.0-8.3%) in the whole population and 4.3% (2.6-6.6%) among subjects in primary cardiovascular prevention. AL was independently associated with CVD (adjusted odds ratio 4.18, 95% CI 1.72-10.15; P=0.002) but not with Framingham CHD risk. More than 80% of patients screened with AL had not been diagnosed previously and more than one in two patients was asymptomatic. Conclusion: Patients with CVD are at increased risk of AL and thus should benefit from AL screening as they are frequently asymptomatic. abstract_id: PUBMED:16061703 Asthma and incident cardiovascular disease: the Atherosclerosis Risk in Communities Study.
Background: A possible association between asthma and cardiovascular disease has been described in several exploratory studies. Methods: The association of self-reported, doctor diagnosed asthma and incident cardiovascular disease was examined in a biracial cohort of 45-64 year old adults (N = 13501) followed over 14 years. Results: Compared with never having asthma, the multivariate adjusted hazard ratio (HR) of stroke (n = 438) was 1.50 (95% CI 1.04 to 2.15) for a baseline report of ever having asthma (prevalence 5.2%) and 1.55 (95% CI 0.95 to 2.52) for current asthma (prevalence 2.7%). The relative risk of stroke was 1.43 (95% CI 1.03 to 1.98) using a time dependent analysis incorporating follow up reports of asthma. Participants reporting wheeze attacks with shortness of breath also had greater risk for stroke (HR = 1.56, 95% CI 1.18 to 2.06) than participants without these symptoms. The multivariate adjusted relative risk of coronary heart disease (n = 1349) was 0.87 (95% CI 0.66 to 1.14) for ever having asthma, 0.69 (95% CI 0.46 to 1.05) for current asthma at baseline, and 0.88 (95% CI 0.69 to 1.11) using the time dependent analysis. Conclusions: Asthma may be an independent risk factor for incident stroke but not coronary heart disease in middle aged adults. This finding warrants replication and may motivate a search for possible mechanisms that link asthma and stroke. abstract_id: PUBMED:2789974 Type A behaviour pattern: specific coronary risk factor or general disease-prone condition? While the association between Type A behaviour pattern and coronary heart disease (CHD) has been abundantly investigated, the question of the specificity of this association remains virtually unexplored. The present study addressed this question by examining, in a sample of 1949 male and female adults, the relationship between JAS Type A measurement and self-reported diseases (i.e. CHD, scarlatina, rheumatoid arthritis, asthma, diseases of the liver, diseases of the gall-bladder, thyroid troubles, tuberculosis, peptic ulcer, renal disease, hypertension and diabetes). Type A subjects were found to report not only more CHD, but also more peptic ulcers, thyroid problems, asthma and rheumatoid arthritis. Globally, more Type A than Type B subjects reported having been ill, and the average number of reported diseases per person was higher among Type As than among Type Bs. These results were obtained in spite of the fact that Type A subjects in this study were markedly younger than Type Bs, and in spite of the empirically based reputation of the former to be symptom deniers rather than symptom reporters. Overall, the data supported the view that Type A behaviour pattern is a general disease-prone condition rather than merely a specific coronary risk factor. abstract_id: PUBMED:37580807 Causal relationship between PCSK9 inhibitor and autoimmune diseases: a drug target Mendelian randomization study. Background: In addition to decreasing the level of cholesterol, proprotein convertase subtilis kexin 9 (PCSK9) inhibitor has pleiotropic effects, including immune regulation. However, the impact of PCSK9 on autoimmune diseases is controversial. Therefore, we used drug target Mendelian randomization (MR) analysis to investigate the effect of PCSK9 inhibitor on different autoimmune diseases. 
Methods: We collected single nucleotide polymorphisms (SNPs) of PCSK9 from published genome-wide association studies statistics and conducted drug target MR analysis to detect the causal relationship between PCSK9 inhibitor and the risk of autoimmune diseases. 3-Hydroxy-3-methylglutaryl-coenzyme A reductase (HMGCR) inhibitor, the drug target of statin, was used to compare the effect with that of PCSK9 inhibitor. With the risk of coronary heart disease as a positive control, primary outcomes included the risk of systemic lupus erythematosus (SLE), rheumatoid arthritis (RA), myasthenia gravis (MG), multiple sclerosis (MS), asthma, Crohn's disease (CD), ulcerative colitis (UC), and type 1 diabetes (T1D). Results: PCSK9 inhibitor significantly reduced the risk of SLE (OR [95%CI] = 0.47 [0.30 to 0.76], p = 1.74 × 10^-3) but increased the risk of asthma (OR [95%CI] = 1.15 [1.03 to 1.29], p = 1.68 × 10^-2) and CD (OR [95%CI] = 1.38 [1.05 to 1.83], p = 2.28 × 10^-2). In contrast, HMGCR inhibitor increased the risk of RA (OR [95%CI] = 1.58 [1.19 to 2.11], p = 1.67 × 10^-3), asthma (OR [95%CI] = 1.21 [1.04 to 1.40], p = 1.17 × 10^-2), and CD (OR [95%CI] = 1.60 [1.08 to 2.39], p = 2.04 × 10^-2). Conclusions: PCSK9 inhibitor significantly reduced the risk of SLE but increased the risk of asthma and CD. In contrast, HMGCR inhibitor may be a risk factor for RA, asthma, and CD. abstract_id: PUBMED:30420114 Adult outcomes of being born late preterm or early term - What do we know? The literature on adult outcomes of people born late preterm (LPT, 34-36 completed weeks) or early term (ET, 37-38 weeks) was reviewed. In PubMed, 9547 articles were identified; 53 were eligible. Of these, 12 were based on clinical cohorts, 32 on medical birth register linkages, and nine on historical birth cohorts; 48 out of 53 on Nordic countries; 50 out of 53 reported on LPT and eight out of 53 reported on ET. LPT plus ET have increased early (<45 years) adult all-cause mortality. Despite increased cardiometabolic risk factors and slightly lower cardiorespiratory fitness in LPT, no studies showed increased risk for coronary heart disease, some showed increased risk for stroke, and all showed increased risk for type 2 diabetes. Most show increased risk for asthma and decreased allergic rhinitis. LPT have slightly lower cognitive abilities and higher rates of several mental disorders; ET have intermediate values. LPT and ET adults have slightly lower education, occupational status, and income. We recommend that authors report findings of LPT/ET separately from those born more preterm. Answer: Yes, patients with asthma are at increased risk of coronary heart disease (CHD). Several studies have investigated this association and have found a positive correlation between asthma and the incidence of CHD. For instance, a cohort study among men and women enrolled in a large managed care organization in Northern California found that asthma was associated with a 1.22-fold increased hazard of CHD among women, although this association was not seen among men (PUBMED:15131088). Another prospective study of matched cohorts in Northern California reported that asthma was associated with a 1.40-fold increased hazard of coronary heart disease after adjustment for various factors (PUBMED:23139248).
Additionally, a meta-analysis of cohort studies concluded that asthma is related to an increased incidence of CHD, particularly in women, with a hazard ratio of 1.42 for the overall asthma population, and the association being more pronounced in female patients (PUBMED:28433577). Furthermore, a study in the Copenhagen General Population Study found that compared with no respiratory disease, the highest risks of coronary heart disease and heart failure were observed in individuals with asthma-COPD overlap (ACO) with late-onset asthma and FEV1 <50% predicted value, with a hazard ratio of 2.2 for CHD (PUBMED:33371008). Another study on inflammatory bowel disease patients, who also have a higher prevalence of asthma, showed that coronary heart disease occurred significantly more frequently in these patients compared to controls, especially in females (PUBMED:21272803). These findings suggest that there is a significant association between asthma and an increased risk of coronary heart disease, and this risk is particularly evident in women with asthma. Clinicians should be aware of this association and consider cardiovascular risk factors when treating patients with asthma (PUBMED:28433577; PUBMED:23139248; PUBMED:15131088; PUBMED:33371008; PUBMED:21272803).
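The pooled hazard ratio of 1.42 cited above comes from an inverse-variance meta-analysis of cohort-specific estimates. A minimal sketch of that pooling (fixed-effect weights, plus the I2 heterogeneity statistic mentioned in the abstract) is shown below; the per-study HRs and CIs are illustrative stand-ins, not the actual extracted data.

```python
# Minimal sketch of inverse-variance pooling of study hazard ratios and I^2.
# The study-level HRs/CIs below are illustrative, not the meta-analysis data.
import math

studies = [(1.22, 1.14, 1.31), (1.40, 1.35, 1.45), (1.50, 1.41, 1.59), (1.31, 1.16, 1.47)]

weights, logs = [], []
for hr, lo, hi in studies:
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)   # SE recovered from the 95% CI
    weights.append(1 / se ** 2)
    logs.append(math.log(hr))

pooled = sum(w * b for w, b in zip(weights, logs)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
q = sum(w * (b - pooled) ** 2 for w, b in zip(weights, logs))
i2 = max(0.0, (q - (len(studies) - 1)) / q) * 100 if q > 0 else 0.0

print(f"pooled HR = {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(pooled - 1.96 * pooled_se):.2f} to "
      f"{math.exp(pooled + 1.96 * pooled_se):.2f}), I2 = {i2:.0f}%")
```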
Instruction: Do proxies reflect patients' health concerns about urinary incontinence and gait problems? Abstracts: abstract_id: PUBMED:16305748 Do proxies reflect patients' health concerns about urinary incontinence and gait problems? Background: While falls and urinary incontinence are prevalent among older patients, who sometimes rely on proxies to provide their health information, the validity of proxy reports of concern about falls and urinary incontinence remains unknown. Methods: Telephone interviews with 43 consecutive patients with falls or fear of falling and/or bothersome urinary incontinence and their proxies chosen by patients as most knowledgeable about their health. The questionnaire included items derived from the Medical Outcomes Study Short Form 12 (SF-12), a scale assessing concerns about urinary incontinence (UI), and a measure of fear of falling, the Falls Efficacy Scale (FES). Scores were estimated using items asking the proxy perspective (6 items from the SF-12, 10 items from a UI scale, and all 10 FES items). Proxy and patient scores were compared using intraclass correlation coefficients (ICC, one-way model). Variables associated with absolute agreement between patients and proxies were explored. Results: Patients had a mean age of 81 years (range 75-93) and 67% were female while proxies had a mean age of 70 (range 42-87) and 49% were female. ICCs were 0.63 for the SF-12, 0.52 for the UI scale, and 0.29 for the FES. Proxies tended to understate patients' general health and incontinence concern, but overstate patients' concern about falling. Proxies who lived with patients and those who more often see patients more closely reflected patient FES scores compared to those who lived apart or those who saw patients less often. Internal consistency reliability of proxy responses was 0.62 for the SF-12, 0.86 for the I-QOL, and 0.93 for the FES. In addition, construct validity of the proxy FES scale was supported by greater proxy-perceived fear of falling for patients who received medical care after a fall during the past 12 months (p < .05). Conclusion: Caution should be exercised when using proxies as a source of information about older patients' health perceptions. Questions asking about proxies' views yield suboptimal agreement with patient responses. However, proxy scales of UI and fall concern are internally consistent and may provide valid independent information. abstract_id: PUBMED:35831723 Gait, falls, cognitive function, and health-related quality of life after shunt-treated idiopathic normal pressure hydrocephalus-a single-center study. Background: Normal pressure hydrocephalus (NPH) is a neurological disorder, characterized by gait- and balance disturbance, cognitive deterioration, and urinary incontinence, combined with ventricular enlargement. Gait ability, falls, cognitive status, and health-related quality of life pre and post surgery have not previously been studied at Karolinska University Hospital. Methods: One hundred and eighteen patients with iNPH that underwent shunt surgery at Karolinska University Hospital during the years from 2016 to 2018 were included. Results of walking tests, test for cognitive function, and self-estimated health-related quality of life, before and 3 months after surgery, were collected retrospectively as a single-center study. Results: Walking ability, cognitive function, and health-related quality of life significantly increased 3 months after shunt surgery.
A positive significant correlation was seen between a higher self-estimated quality of life and walking ability. Conclusions: Patients with suspected iNPH treated with shunt surgery at Karolinska University Hospital improved their walking ability and cognitive functioning 3 months after shunt surgery. A positive significant correlation was seen between a higher self-estimated quality of life and walking ability but not with increased cognitive function. We then concluded that the selection of patients for shunting maintained a high standard. abstract_id: PUBMED:31129388 Are we supererestimating gait assessments of patients with idiopathic normal-pressure hydrocephalus? Introduction: Idiopathic normal pressure hydrocephalus (iNPH) is a syndrome characterized by a triad composed of cognitive alteration, urinary incontinence, and gait impairment associated with ventricular enlargement and normal cerebrospinal fluid pressure. Gait impairment is among the earliest symptoms; however, the reliability of the evaluation is not well-established and no consensus has been reaching regarding variables that should be analyzed and which parameters should be considered to accurately assess post-intervention improvement. Research Question: Are the degree of repeatability, standard error of measurement, and minimum detectable change considered to detect changes in gait variables in iNPH patients? Methods: A total of 84 iNPH patients with a mean age of 77.1 (±6.4) years were analyzed. Gait deviation index (GDI), speed, cadence, cycle time, stride length, single support, and first and second double support were chosen as the variables to be analyzed. Statistical analysis was performed by an independent evaluator, with gait repeatability assessed by the intraclass correlation coefficient (ICC) and the standard error of measure (SEM). Results: ICC values were 0.76-0.85 with excellent repeatability, while SEM demonstrated that the variables with best repeatability were the GDI (mean, 4.94; 95% confidence interval (CI), 4.63-5.43), representing a 7.65% mean relative error of the measurement (mean, 0.05 m; 95% CI, 0.05-0.06), and stride length (mean 0.05 m; 95% CI, 0.05-0.06), with a 7.69% mean relative error. Significance: We concluded that GDI and stride length were the variables with the best repeatability and lower variability in the gait of iNPH patients. abstract_id: PUBMED:26775149 Quantitative evaluation of changes in gait after extended cerebrospinal fluid drainage for normal pressure hydrocephalus. Idiopathic normal pressure hydrocephalus (iNPH) is characterized by gait instability, urinary incontinence and cognitive dysfunction. These symptoms can be relieved by cerebrospinal fluid (CSF) drainage, but the time course and nature of the improvements are poorly characterized. Attempts to prospectively identify iNPH patients responsive to CSF drainage by evaluating presenting gait quality or via extended lumbar cerebrospinal fluid drainage (eLCD) trials are common, but the reliability of such approaches is unclear. Here we combine eLCD trials with computerized quantitative gait measurements to predict shunt responsiveness in patients undergoing evaluation for possible iNPH. In this prospective cohort study, 50 patients presenting with enlarged cerebral ventricles and gait, urinary, and/or cognitive difficulties were evaluated for iNPH using a computerized gait analysis system during a 3day trial of eLCD. Gait speed, stride length, cadence, and the Timed Up and Go test were quantified before and during eLCD. 
Qualitative assessments of incontinence and cognition were obtained throughout the eLCD trial. Patients who improved after eLCD underwent ventriculoperitoneal shunt placement, and symptoms were reassessed serially over the next 3 to 15 months. There was no significant difference in presenting gait characteristics between patients who improved after drainage and those who did not. Gait improvement was not observed until 2 or more days of continuous drainage in most cases. Symptoms improved after eLCD in 60% of patients, and all patients who improved after eLCD also improved after shunt placement. The degree of improvement after eLCD correlated closely with that observed after shunt placement. abstract_id: PUBMED:35437107 Can gait outcomes be predicted early after a stroke? Purpose: To determine the ability of clinical measures collected within 72 hours of neurological insult to predict independent gait 6 and 12 months after a stroke. Methods: Patients with a confirmed stroke diagnosis were eligible for inclusion in this prospective cohort study. Sitting balance, National Institutes of Health Stroke Scale (NIHSS) motor leg, NIHSS motor arm, and Motricity Index (MI) were measured within 72 hours post-stroke. Follow-up assessments were conducted at 6 and 12 months post-stroke to measure gait recovery. Results: A total of 78 patients were included at baseline for analysis. At 6 and 12 months, 38% (n = 38) and 35% (n = 42) of patients used a gait aid, and 80% and 87% were independently ambulant, respectively. Sitting balance, NIHSS motor leg, and NIHSS motor arm were not significantly associated with ambulation at 6 or 12 months or with the use of a gait aid. Thrombolysis was significantly associated with independent outdoors ambulation at 6 months (p = .011). A worse MI score was significantly associated with a higher number of falls at 6 months (p < .010) but not with the need for a gait aid. The number of falls at 6 months was independently predicted by urinary incontinence post-stroke (p < .001), NIHSS leg score (p < .005), and depression and anxiety while in acute care (p < .005). Conclusions: Clinical bedside assessments may be less important in predicting safe, independent gait than previously thought. Urinary incontinence and poor mental health should be addressed in the hospital. Increased utilization of reperfusion techniques may alter functional recovery patterns. abstract_id: PUBMED:20429324 Gait apraxia. Gait apraxia is most commonly a part of the Hakimov triad (gait apraxia, urinary incontinence, dementia) in normotensive hydrocephalus (NPH), although it may be a symptom of some other conditions. In our case the patient was a long term Parkinson's disease sufferer who developed normotensive hydrocephalus and consequently gait apraxia. Only after a third successive evacuation of the CSF did his gait apraxia improve (Fig. 1, Ref. 15). abstract_id: PUBMED:38364278 A Wearable Gait-Analysis Device for Idiopathic Normal-Pressure Hydrocephalus (INPH) Monitoring. Idiopathic Normal Pressure Hydrocephalus (iNPH) is a progressive neurologic disorder (fluid build-up in the brain) that affects 0.2-5% of the UK population aged over 65. Mobility problems, dementia and urinary incontinence are symptoms of iNPH but often these are not properly evaluated, and patients receive the wrong diagnosis. Here, we describe the development and testing of a wearable device that records and analyses a patient's gait.
The movement patterns, expressed as quantitative data, allow clinicians to improve the non-invasive diagnosis of iNPH as well as monitor the management of patients undergoing treatment. The wearable sensor system comprises a miniature electronic unit that attaches to one ankle of the patient via a simple Velcro strap. The unit monitors acceleration along three axes with a sample rate of 60 Hz and transmits the data via a Bluetooth communication link to a tablet or smart phone running the Android and the iOS operating systems. The software package extracts statistics based on stride length, stride height, distance walked and speed. Analysis confirmed that the system achieved an average accuracy of at least 98% for gait tests conducted over distances 9 m. Using this device will improve the diagnostic process and management of iNPH and the treatment and management of this condition. abstract_id: PUBMED:22693146 The relationship between urinary bladder control and gait in women. Aims: Urinary incontinence and OAB are associated with increased falls risk in older people suggesting a potential relationship between bladder functioning and control of gait. To begin to understand the possible interaction between gait and bladder control this exploratory study aimed to examine the effects of controlling the bladder on gait parameters in healthy adult women. Methods: Thirty-six continent women (mean age 50.8 ± 15.8 years), participated in this observational cohort study. Subjects walked three times along an electronic walkway under three different bladder conditions; first desire to void (FDV), strong desire to void (SDV), and post void (PV). Spatial and temporal parameters of gait and continence status were recorded for each condition. Results: A significant reduction in gait velocity (P < 0.025) was found at the SDV compared with the PV condition. Stride length decreased significantly (P < 0.001) at the SDV compared with the FDV and PV conditions. No significant differences were found between FDV and PV conditions. In addition, the variability of gait increased significantly with respect to cadence (P < 0.05) and stride times (P < 0.05) at the SDV compared to the PV condition. This was not observed between the FDV and the PV conditions, nor the FDV and the SDV. Conclusion: In healthy continent women, speed and rhythmicity of gait are different when a strong desire to void is experienced. This suggests an interaction may exist between urinary bladder control and control of gait. Further investigation is necessary to understand this relationship and begin to explain the increased risk of falls associated with urinary bladder functioning. abstract_id: PUBMED:11915235 Gait analysis of idiopathic normal pressure hydrocephalus Normal pressure hydrocephalus (NPH) is a clinical syndrome associated with dementia, gait disturbance and urinary incontinence. Gait disturbance is usually the initial sign and most important symptom, but its objective evaluation has not been established. We analyzed the gait of an idiopathic NPH before and after ventricular shunting with the gait analysis system. Before shunting, the stride was short and irregular, and the truncal movement was unsteady. Three-dimensional patterns of angular relationships between 3 joints, namely the ankle, knee and hip were small and irregular. The vector profile of floor reaction force showed a monophasic pattern with absence of the peak at toe-off. After shunting, the step enlarged and the truncal movement was steady.
The three-dimensional patterns of angular relationships between the 3 joints were nearly normalized. The vector profile of floor reaction force showed an appearance of the peak at toe-off, which formed a biphasic pattern, similar to the pattern of a normal person. The gait analysis is a useful method to evaluate gait disturbance in idiopathic NPH. abstract_id: PUBMED:35391549 High Periventricular T1 Relaxation Times Predict Gait Improvement After Spinal Tap in Patients with Idiopathic Normal Pressure Hydrocephalus. Purpose: The diagnosis of idiopathic normal pressure hydrocephalus (iNPH) can be challenging. Aim of this study was to use a novel T1 mapping method to enrich the diagnostic work-up of patients with suspected iNPH. Methods: Using 3T magnetic resonance imaging (MRI) we prospectively evaluated rapid high-resolution T1 mapping at 0.5 mm resolution and 4 s acquisition time in 15 patients with suspected iNPH and 8 age-matched, healthy controls. T1 mapping in axial sections of the cerebrum, clinical and neuropsychological testing were performed prior to and after cerebrospinal fluid tap test (CSF-TT). T1 relaxation times were measured in 5 predefined periventricular regions. Results: All 15 patients with suspected iNPH showed gait impairment, 13 (86.6%) showed signs of cognitive impairment and 8 (53.3%) patients had urinary incontinence. Gait improvement was noted in 12 patients (80%) after CSF-TT. T1 relaxation times in all periventricular regions were elevated in patients with iNPH compared to controls with the most pronounced differences in the anterior (1006 ± 93 ms vs. 911 ± 77 ms; p = 0.023) and posterior horns (983 ± 103 ms vs. 893 ± 68 ms; p = 0.037) of the lateral ventricles. Montreal cognitive assessment (MoCA) scores at baseline were negatively correlated with T1 relaxation times (r &lt; -0.5, p &lt; 0.02). Higher T1 relaxation times were significantly correlated with an improvement of the 3‑m timed up and go test (r &gt; 0.6 and p &lt; 0.03) after CSF-TT. Conclusion: In iNPH-patients, periventricular T1 relaxation times are increased compared to age-matched controls and predict gait improvement after CSF-TT. T1 mapping might enrich iNPH work-up and might be useful to indicate permanent shunting. Answer: Proxies do not always accurately reflect patients' health concerns about urinary incontinence and gait problems. A study that involved telephone interviews with patients who had falls or fear of falling and/or bothersome urinary incontinence, along with their proxies, found that proxies tended to understate patients' general health and incontinence concerns but overstate patients' concerns about falling. The intraclass correlation coefficients (ICCs) indicated moderate agreement for general health (SF-12) and urinary incontinence concerns, but poor agreement for fear of falling (FES). Proxies who lived with patients or saw them more often were more likely to reflect patient concerns accurately, particularly regarding fear of falling. The study concluded that caution should be exercised when using proxies as a source of information about older patients' health perceptions, as proxy responses may not always align with patient responses (PUBMED:16305748).
Instruction: Do continuous infusions of omeprazole and ranitidine retain their effect with prolonged dosing? Abstracts: abstract_id: PUBMED:8276209 Do continuous infusions of omeprazole and ranitidine retain their effect with prolonged dosing? Background/aims: Prolonged infusions of H2-antagonists are commonly used in intensive care units, although little is known about their antisecretory efficacy beyond the initial 24 hours of dosing. The aim of this study was to assess the antisecretory effects of infusions of ranitidine and omeprazole for a period of 72 hours. Methods: Twelve healthy volunteers received individually titrated 72-hour intravenous infusions of omeprazole, ranitidine, or placebo in a double-blind, crossover study. Gastric pH and dosing requirements were compared. Results: The median percentage of time with pH > 4 (interquartile range) was 93% (88%-95%) on day 1 and 96% (94%-99%) on day 3 with omeprazole and 67% (56%-78%) and 43% (31%-51%), respectively, with ranitidine (both P < 0.001 vs. omeprazole). The mean doses (± SD) required on days 1 and 3 for omeprazole were 235.8 ± 44 mg and 134.0 ± 37 mg (P < 0.0001), and ranitidine doses were 502.5 ± 76 mg and 541.8 ± 25 mg, respectively (P = 0.05). Conclusions: Omeprazole infusions consistently maintained gastric pH above 4 over a period of 72 hours with progressively lower doses. Significant tolerance to the antisecretory effect of ranitidine infusion developed in 72 hours, which was not overcome despite individually titrated doses of more than 500 mg/24 hours. Consequently, application of pharmacodynamic results of single-day H2-blocker and proton-pump inhibitor studies to prolonged infusion trials for stress ulcer-related bleeding is inappropriate. abstract_id: PUBMED:7851185 Effect of repeated boluses of intravenous omeprazole and primed infusions of ranitidine on 24-hour intragastric pH in healthy human subjects. The aim of this study was to identify dosage regimens using intravenous omeprazole and ranitidine that would elevate and consistently maintain intragastric pH > 6 in the first 24 hr of therapy. In 19 healthy, fasting human subjects using continuous 24-hr gastric pH-metry, we studied two dosages of primed infusions of ranitidine (50 mg bolus followed by infusion of either 3 or 6 mg/kg body wt/24 hr) and six regimens of intravenous omeprazole (80-200 mg in 24 hr in two to five boluses). Only the two ranitidine infusions and high doses of omeprazole (≥ 160 mg/day as four or five boluses) raised the intragastric median pH above 5.4. There was no significant difference in the median intragastric pH after high dose ranitidine and high doses of omeprazole. Considerable interindividual variation in intragastric pH was observed after omeprazole therapy. The percentage of intragastric pH > 6.0 during the 24-hr study was lower after omeprazole (35-42%) than after high-dose ranitidine (58%). We conclude that it is possible to raise intragastric pH > 6.0 by use of either primed ranitidine infusion or by repeated boluses of omeprazole. However, maintenance of this high pH in the first 24 hr is difficult with both, more so with omeprazole. abstract_id: PUBMED:7895928 Effect of intravenous infusion of omeprazole and ranitidine on twenty-four-hour intragastric pH in patients with a history of duodenal ulcer.
The effect on intragastric pH of two different dose regimens of continuous intravenous infusion of omeprazole (4 or 8 mg/h after a bolus of 80 mg), and ranitidine (0.25 mg/kg/h after a bolus of 50 mg) was studied in 10 patients with duodenal ulcer disease in symptomatic remission. The pH was monitored over 24-hour periods during fasting in a cross-over, randomised design including a baseline period. With the high omeprazole dose it was possible to maintain a pH ≥ 4 in all patients but 1, and 6 of the patients also maintained a pH ≥ 6. The lower dose of omeprazole seemed to be somewhat less effective. Continuous infusion of ranitidine was as efficient as the higher omeprazole infusion, although with a tendency to decreased pH levels towards the end of the 24-hour period. Thus, in order to obtain consistently high pH levels of 4-6 over a prolonged period, a continuous infusion of omeprazole (an 80-mg bolus plus a continuous infusion of 8 mg/h) seems to be needed. abstract_id: PUBMED:10022628 Effect of repeated injection and continuous infusion of omeprazole and ranitidine on intragastric pH over 72 hours. Objective: In healthy subjects and patients with bleeding peptic ulcers, ranitidine and omeprazole, given parenterally, achieve high intragastric pH values on the first day of therapy. However, data on the antisecretory effect beyond the first 24 h is scanty. In addition, the superiority of either infusion or injection of omeprazole remains unproven. Thus, we have compared the antisecretory effect of high dose omeprazole and ranitidine infusion and injection over the critical first 72 h. Methods: A total of 34 healthy volunteers were randomized into a double-blind crossover 72 h intragastric pH-metry study (data compared: median pH, percentage of time with pH >4 and pH >6). Omeprazole infusion: initial bolus of 80 mg + 8 mg/h; omeprazole injection: initial bolus of 80 mg + 40 mg/6 h; ranitidine infusion: initial bolus of 50 mg + 0.25 mg/kg/h; ranitidine injection: 100 mg/6 h. Results: Omeprazole infusion versus ranitidine infusion: on day 1, median pH 6.1 vs 5.1 (p = 0.01) and 95% vs 70% of time with pH >4 (p < 0.01); on day 2, median pH 6.2 vs 3.2 (p < 0.01) and 100% vs 38% of time with pH >4 (p < 0.01); on day 3, median pH 6.3 vs 2.7 (p < 0.01) and 100% vs 26% of time with pH >4 (p < 0.01). Injections of both drugs were significantly less effective than the infusions on day 1. Thereafter, omeprazole injection was almost as effective as omeprazole infusion, whereas ranitidine injection and infusion were equally effective. Conclusion: Our study shows, for the first time, that omeprazole infusion was significantly superior to all other regimens by having a high median pH >6 on each day. The tolerance effect of ranitidine, however, led to a rapid loss of antisecretory activity on days 2 and 3, rendering it inappropriate for situations in which high intragastric pH levels appear to be essential. abstract_id: PUBMED:10383501 The effect of Helicobacter pylori eradication on intragastric pH during dosing with lansoprazole or ranitidine. Background: The antisecretory effect of omeprazole on intragastric pH is decreased in the absence of Helicobacter pylori. Aim: To investigate the effect of H. pylori eradication on intragastric pH during lansoprazole or ranitidine dosing in 41 asymptomatic H. pylori-positive subjects. Method: Two groups of healthy H. pylori-positive volunteers were investigated.
One group was dosed with lansoprazole 30 mg at 08.00 hours for at least 8 days, before and after 2 weeks of placebo-controlled double-blind eradication therapy using ranitidine bismuth citrate 400 mg b.d. and clarithromycin 500 mg b.d. The other group was dosed with ranitidine 300 mg at 23.00 hours for at least 8 days using the same trial design. An upper endoscopy was performed to establish H. pylori status by rapid urease test, culture and histology before both periods of dosing. Twenty-four hour intragastric pH recording was performed on the final day of all periods of dosing. Results: H. pylori eradication significantly decreased the intragastric pH reached during lansoprazole treatment throughout all periods of the day. Intragastric pH during ranitidine treatment was not affected by H. pylori eradication, except for the late-night period. Conclusion: H. pylori eradication has a more pronounced effect on the acid-inhibiting properties of lansoprazole than on those of ranitidine. abstract_id: PUBMED:8038351 Comparison of acid inhibition by either oral high-dose ranitidine or omeprazole. Background: High-dose once daily oral omeprazole dosing can inhibit acid secretion almost completely but several days elapse before maximum efficacy is established. The acid inhibitory effect obtained with high doses of a histamine H2-receptor antagonist is built up rapidly but has the tendency to fade--the term 'tolerance' has been applied to characterize this phenomenon. Methods: To obtain more information on the dynamics of acid inhibition during prolonged dosing, we compared the acid suppressory effects of oral high-dose omeprazole with high-dose ranitidine. Twenty-eight healthy volunteers were randomly assigned to a 2-week dosing with omeprazole or ranitidine in a double-blind, double-dummy, parallel-group study design. Omeprazole was given as 1 capsule of 40 mg mane and ranitidine as 2 tabs of 150 mg q.d.s. The median 24-h pH, daytime pH and night-time pH were measured by ambulatory continuous 24-h pH metry on days -8, -6, 1, 2, 7 and 14. Results: High reproducibility was observed for the two baseline acidity measurements. Ranitidine exerted its peak acid suppressant effect on day 1 of dosing; the degree of acid inhibition faded from day 2 to 7, with no significant change thereafter. The decline in antisecretory activity was more pronounced during the day than the night. In contrast, acid inhibition by omeprazole increased throughout the first week, and antisecretory activity was stable thereafter. Despite the considerable differences in median intragastric pH values at the end of the 14-day study, plasma gastrin levels were elevated to a similar degree with both medications. Conclusions: This study confirms the 'tolerance' phenomenon previously observed with high-dose histamine H2-receptor antagonist dosing. The dynamics with which it occurs exclude a typical exaggerated first-dose response. Prolonged high-dose histamine H2-receptor dosing compromises the feedback mechanism regulating gastrin release, whilst this is maintained during dosing with omeprazole. abstract_id: PUBMED:9155573 Efficacy of primed infusions with high dose ranitidine and omeprazole to maintain high intragastric pH in patients with peptic ulcer bleeding: a prospective randomised controlled study. Background: In healthy subjects, continuous infusions of high dose ranitidine and omeprazole produce high intragastric pH values. 
Aim: To test the hypothesis that both drugs also maintain high intragastric pH values in patients with bleeding ulcers. Patients And Methods: In two parallel studies, 20 patients with bleeding duodenal ulcers and 20 patients with bleeding gastric ulcers were randomly assigned to receive either ranitidine (0.25 mg/kg/hour after a bolus of 50 mg) or omeprazole (8 mg/hour after a bolus of 80 mg) for 24 hours. Intragastric pH was continuously recorded with a glass electrode placed 5 cm below the cardia. Results: Both drugs rapidly raised the intragastric pH above 6. During the second 12 hour period, however, the percentage of time spent below a pH of 6 was 0.15% with omeprazole and 20.1% with ranitidine (p = 0.0015) in patients with duodenal ulcer; in patients with gastric ulcer it was 0.1% with omeprazole and 46.1% with ranitidine (p = 0.002). Conclusions: Primed infusions of omeprazole after a bolus produced consistently high intragastric pH values in patients with bleeding peptic ulcers, whereas primed infusions with ranitidine were less effective during the second half of a 24 hour treatment course. This loss of effectiveness may be due to tolerance. abstract_id: PUBMED:2001826 Nocturnal intragastric acidity during and after a period of dosing with either ranitidine or omeprazole. The magnitude and duration of changes in nocturnal intragastric acidity caused by 25 days of dosing with the antisecretory drugs ranitidine and omeprazole were investigated in a double-blind study of 22 healthy subjects. Nocturnal intragastric acidity was studied before (twice), during (on day 25), and after (every 3 days for 21 days) dosing with either 300 mg ranitidine at night or 40 mg omeprazole every morning. Three and six days after withdrawal of dosing with ranitidine, median integrated nocturnal intragastric acidity was increased significantly (17% and 14%, P = 0.01 and P = 0.05, respectively) compared with before dosing. Three days after withdrawal of dosing with omeprazole, median integrated nocturnal intragastric acidity was decreased significantly (-23%, P = 0.003). Compared with before dosing, no significant differences were seen in the ranitidine group between days 9 and 21 or the omeprazole group between days 6 and 21 after cessation of dosing. Fasting plasma gastrin concentration was measured on the morning of each study; compared with before treatment, the only significant elevations occurred on the last day of dosing with omeprazole (before, 4 pmol/L; during, 7 pmol/L). It is concluded that rebound intragastric hyperacidity after dosing with 300 mg ranitidine at night or sustained hypoacidity after dosing with 40 mg omeprazole every morning reflect transient disturbances of gastric function that are unlikely to be of clinical importance. abstract_id: PUBMED:7557078 Turnover of the gastric H+,K(+)-adenosine triphosphatase alpha subunit and its effect on inhibition of rat gastric acid secretion. Background & Aims: The rate of turnover and the effect of inhibition of acid secretion on the turnover of gastric H+,K(+)-adenosine triphosphatase (ATPase) is unknown. The aim of this study was to determine the turnover of the alpha subunit of gastric H+,K(+)-ATPase in rats under control conditions and during inhibition of acid secretion by ranitidine or omeprazole. Methods: The turnover of the alpha subunit of the ATPase was determined by measuring the loss of incorporated 35S-methionine. This was compared with the rate of recovery of K(+)-stimulated ATPase activity in the omeprazole-treated animals. 
Results: The half-life of the alpha subunit was 54 hours. A 1-week treatment with omeprazole had no significant effect, but the half-life increased to 125 hours (P < 0.01) after continuous ranitidine infusion. After omeprazole treatment, K(+)-stimulated ATPase activity recovered with a half-time of 15 hours. Conclusions: The turnover of the gastric ATPase subunit was independent of omeprazole inhibition but was prolonged by ranitidine. The effect of ranitidine suggests that the resting pump in tubulovesicles may turn over more slowly than the stimulated pump in the secretory canaliculus. The rapid recovery of ATPase activity compared with turnover after omeprazole is caused by both H+,K(+)-ATPase synthesis and loss of covalently bound drug. abstract_id: PUBMED:28842999 Rethinking the laryngopharyngeal reflux treatment algorithm: Evaluating an alternate empiric dosing regimen and considering up-front, pH-impedance, and manometry testing to minimize cost in treating suspect laryngopharyngeal reflux disease. Objectives/hypothesis: Empiric proton pump inhibitor (PPI) trials for laryngopharyngeal reflux (LPR) are common. A majority of the patients respond to acid suppression. This work intends to evaluate once-daily, 40 mg omeprazole and once-nightly, 300 mg ranitidine (QD/QHS) dosing as an alternative regimen, and use this study's cohort to evaluate empiric regimens prescribed for LPR as compared to up-front testing with pH impedance multichannel intraluminal impedance (MII) with dual pH probes and high-resolution manometry (HRM) for potential cost minimization. Study Design: Retrospective cohort review and cost minimization study. Methods: A chart review identified patients diagnosed with LPR. All subjects were treated sequentially and outcomes recorded. Initial QD/QHS dosing increased after 3 months to BID if no improvement and ultimately prescribed MII and HRM if they failed BID dosing. Decision tree diagrams were constructed to determine costs of two empiric regimens and up-front MII and HRM. Results: Ninety-seven subjects met the criteria. Responders and nonresponders to empiric therapy were identified. Seventy-two subjects (74%) responded. Forty-eight (67% of responders and 49% of all) improved with QD/QHS dosing. Forty-nine (51%) subjects escalated to BID dosing. Twenty-four subjects (33% of responders and 25% of all) improved on BID therapy. Twenty-five subjects (26%) did not respond to acid suppression. Average weighted cost was $1,897.00 per patient for up-front testing, $3,033.00 for initial BID, and $3,366.00 for initial QD/QHS. Conclusions: An alternate QD/QHS regimen improved the majority who presented with presumed LPR. Cost estimates demonstrate that the QD/QHS regimen was more expensive than the initial BID high-dose PPI for 6 months. Overall per-patient cost appears less with up-front MII and HRM. Level Of Evidence: 4. Laryngoscope, 127:S1-S13, 2017. Answer: Continuous infusions of omeprazole and ranitidine do not retain their effect equally with prolonged dosing. Studies have shown that omeprazole infusions consistently maintain gastric pH above 4 over a period of 72 hours with progressively lower doses, indicating that its antisecretory effect is retained or even enhanced with prolonged dosing (PUBMED:8276209). In contrast, significant tolerance to the antisecretory effect of ranitidine infusion develops within 72 hours, which is not overcome despite individually titrated doses of more than 500 mg/24 hours (PUBMED:8276209).
This suggests that ranitidine's antisecretory effect diminishes with prolonged infusion, so its ability to maintain high intragastric pH declines over time (PUBMED:9155573). Furthermore, the superiority of omeprazole infusion over ranitidine, in terms of maintaining a high median pH, has been demonstrated over the critical first 72 hours (PUBMED:10022628). Omeprazole infusion was significantly superior to all other regimens, maintaining a high median pH >6 on each day, while the tolerance effect of ranitidine led to a rapid loss of antisecretory activity on days 2 and 3 (PUBMED:10022628). This indicates that omeprazole is more appropriate for situations where maintaining high intragastric pH levels is essential. In summary, while omeprazole infusions retain their antisecretory effect with prolonged dosing, ranitidine infusions do not, due to the development of tolerance.
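The studies in this entry all report the same two pharmacodynamic endpoints: the median intragastric pH and the percentage of recording time spent above a pH threshold (usually 4 or 6), derived from continuous 24- to 72-hour pH-metry. The short sketch below shows how these summary statistics can be computed from a sampled pH trace; the one-minute sampling interval and the synthetic traces are assumptions made purely for illustration and are not taken from any of the cited studies.

```python
import statistics

def ph_summary(ph_samples, threshold=4.0):
    """Summarize a continuously recorded intragastric pH trace.

    ph_samples: sequence of pH readings taken at a fixed interval
    threshold:  pH cut-off of interest (e.g. 4 or 6 in the studies above)

    Returns the median pH and the percentage of recording time spent
    above the threshold, the two endpoints quoted throughout this entry.
    """
    median_ph = statistics.median(ph_samples)
    pct_above = 100.0 * sum(1 for p in ph_samples if p > threshold) / len(ph_samples)
    return median_ph, pct_above

# Illustrative (synthetic) 24-hour traces sampled once per minute:
# a well-suppressed day versus a day on which tolerance has developed.
suppressed_day = [6.2] * 1380 + [3.5] * 60    # pH > 4 for about 96% of the day
tolerant_day   = [5.0] * 600  + [2.0] * 840   # pH > 4 for about 42% of the day

print(ph_summary(suppressed_day))  # -> (6.2, 95.83...)
print(ph_summary(tolerant_day))    # -> (2.0, 41.66...)
```

Tolerance to ranitidine shows up in exactly these terms: the percentage of time above the threshold collapses from day 1 to day 3 even when the dose is held constant or increased.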
Instruction: Are nurse injectors the new norm? Abstracts: abstract_id: PUBMED:30474470 Comparing Injecting Risk Behaviors of Long-Term Injectors with New Injectors in Tehran, Iran. Background: Global estimates suggest there are 15.6 million people who inject drugs (PWID), of whom 17.8% are living with HIV. Few studies have compared newly-onset injectors with long-term injectors or examined the association of injecting duration with injecting risk behaviors. Objectives: We examined the relationship between length of injection and risk behaviors among people who inject drugs (PWID) in Tehran, Iran. Methods: A cross-sectional study was conducted among PWID, from March to August 2016 in Tehran, Iran. PWID were recruited by convenience and snowball sampling from five Drop-in Centers (DIC) located in the south of Tehran. Our primary independent variable was length of injecting career, defined as the number of months since injecting initiation. New injectors were defined as those injecting for less than 18 months, and long-term injectors as those injecting drugs for more than 18 months. We reported the adjusted odds ratio (aOR) point estimate and 95% confidence interval (CI95%) as the effect measure. The level of significance used in the multiple logistic regression model was 0.05. We used STATA v. 11 for all analyses. Results: The analytical sample comprised 500 participants (100% male). Mean (±SD) age of PWID with a length of injection history was 31.2 ± 7.2 years. Overall, 270 (54%) (CI95%: 49.6%, 58.4%) of participants were long-term injectors. The average age of drug use initiation among the long-term injectors group was lower as compared to the new injectors group (31.2 vs. 29.4, p < 0.001). The odds of distributive syringe sharing among new injectors were two times higher than among long-term injectors (AOR = 2.1, 95% CI 1.4-4.7). The odds of receptive syringe sharing were lower among the new injectors group (AOR = 0.7, CI95% 0.2-0.87), compared to long-term injectors. New injectors had higher odds of reusing their own syringes (OR = 2.8, 95% CI: 1.4-5.7; p = 0.01). Conclusions: Improvements in harm reduction service provision can occur through targeted risk reduction education for new injectors focusing on reducing distributive syringe sharing among them. abstract_id: PUBMED:31619140 Comparing injecting and sexual risk behaviors of long-term injectors with new injectors: A meta-analysis. The present meta-analysis aimed to investigate the effect of injection duration on injection and sexual high-risk behaviors among people who inject drugs (PWID), in order to inform development of intensive HIV prevention services for selected PWID sub-populations. We searched the PubMed, Science Direct, Web of Science, and Cochrane electronic databases independently in December 2018. After reviewing for duplication, full texts of selected articles were assessed for eligibility using certain Population, Intervention, Comparator, Outcomes (PICO) criteria. We used fixed and random-effects meta-analysis models to estimate the pooled prevalence, pooled odds ratio (OR) and 95% confidence intervals (CI). Our results indicated significant associations between new-injector status and age of injection initiation > 17 years (OR = 0.93, 95%CI = 0.87-0.98), frequency of drug injection > 5 times/day (OR = 0.51, 95%CI = 0.29-0.73), injection by others (OR = 1.11, 95%CI = 1.05-1.17), having a sex partner (OR = 2.08, 95%CI = 1.02-3.14) and history of imprisonment (OR = 1.20, 95%CI = 1.03-1.37).
Our research found that new injectors were more likely to report injection by others, having a sex partner, and prison detention. Our findings are significant for policy makers and public health practitioners seeking to design and implement HIV prevention programs among PWID with shorter periods of injection. The findings of the present study extend our knowledge about new injection drug users, the significance of assured behaviors at IDUs' initial injection, and the educational importance of syringe exchange programs. abstract_id: PUBMED:24947473 Are nurse injectors the new norm? Purpose: As Botox®/filler use has increased in recent years, a growing number of nonaesthetic health professionals have emerged to perform these procedures. Since studies have shown that patients identify training as the most important factor in considering these procedures, this study seeks to summarize the perspective of plastic surgeons regarding these paradigm shifts. Methods: In the summer of 2013, an eight-question survey was sent to members of ISAPS, ASAPS, and ASPS (approximately 26,113 plastic surgeons globally). Two questions assessed practice location and membership affiliation and six questions assessed various healthcare practitioners' capability to administer Botox, fillers, and vaccines (control). Healthcare practitioners included plastic surgeons and dermatologists, gynecologists, dentists, nurses in plastic surgery and dermatology, or nurses in other fields. Results: On three e-mail notifications, 14,184 plastic surgeons opened the survey and 882 responded: 36.6 % from North America, 29.1 % from Europe, 12.9 % from South America, 10.1 % from Asia, 4.5 % from the Middle East, 3.4 % from Australia, 1.9 % from Africa, and 1.6 % from Central America. Seventy-seven percent believed nurses were not as capable as plastic surgeons in administering Botox; 81 % felt the same for fillers. Conversely, 84 % agreed that nurses were as capable as plastic surgeons in administering vaccines. Plastic surgeons ranked nurses in other fields (48 %) as most capable in administering vaccines, then plastic surgeons (42 %), nurses of plastic surgeons (9 %), gynecologists (1 %), and dentists (<1 %). When asked about Botox/fillers, responders ranked plastic surgeons (98 %) most capable, then nurses in plastic surgery (2 %), gynecologists (<1 %), dentists (<1 %), and nurses in other fields (<1 %). When asked to rank according to patient perception, the order remained the same. Conclusion: Based on responses from over 880 plastic surgeons from around the world, plastic surgeons consider themselves and dermatologists the most capable injectors. However, they still believe nurses in other fields to be the most capable of administering vaccines. This dichotomy may define the role of various practitioners in an increasingly more competitive injectable environment to improve patient satisfaction and outcomes. Given that the majority of growth in cosmetic injectables is being driven by providers other than plastic surgeons and dermatologists, further clarification on training requirements and practice guidelines may be necessary to ensure a consistent, reproducible experience for the patient. abstract_id: PUBMED:33180620 Setting up a successful nurse-led intravitreal injections service: pearls from Swindon. The demand for performing intravitreal injections has increased in recent years, prompting the need for more nurse training in their administration.
The Great Western Hospitals NHS Trust in Swindon has developed a structured nurse training programme and now has 8 nurse injectors trained to undertake injections independently; nurse practitioners now contribute upwards of 85% of the total number of injections. The authors have also demonstrated the financial benefits of using injection assistant devices and shown the positive impact such devices have on training. In September 2019, the authors organised the first course to offer nurses and doctors hands-on experience in administering injections, using the Swindon training model to provide participants with a structured approach to learning how to perform intravitreal injections safely. Nurses made up 96% of participants; the remainder were doctors and managers; 6% had never performed an intravitreal injection; of units where they had, disposable drapes and a speculum were used in 71% of these. The number of injections performed per session at participants' units at the time they attended the course was: 17 or more injections = 46%, 13-14 = 39%, and 11-12 = 15%. The course was rated 8.9/10 overall for content, with 85% very likely to recommend it to colleagues. All participants indicated that using the Swindon model made them feel confident to deliver injections safely. The authors demonstrated that using a structured training protocol and an intravitreal assistant device improves the quality of nurse training and increases confidence in administering intravitreal injections. abstract_id: PUBMED:25709157 Pressure injectors for radiologists: A review and what is new. Pressure injectors are used routinely in diagnostic and interventional radiology. Advances in medical science and technology have made it imperative for both diagnostic as well as interventional radiologists to have a thorough understanding of the various aspects of pressure injectors. Further, as many radiologists may not be fully conversant with injections into ports, central lines and PICCs, it is important to familiarize oneself with the same. It is also important to follow stringent operating protocols during the use of pressure injectors to prevent complications such as contrast extravasation, sepsis and air embolism. This article aims to update the existing knowledge base in this respect. abstract_id: PUBMED:25189296 Are nurse injectors the new norm? N/A abstract_id: PUBMED:25209529 Commentary on "are nurse injectors the new norm?". N/A abstract_id: PUBMED:7853064 Nurse practitioners, certified nurse midwives, and nurse anesthetists: changing care in acute care hospitals in New York City. To respond to the shrinking pool of primary care physicians and to demands from managed care programs for cost containment, hospitals in New York City have increased their use of nurse practitioners, certified nurse midwives, and nurse anesthetists, creating an increased demand for these personnel. We report here on a survey of hospitals and schools of nursing in New York City and present findings on (a) current use of, and projected demand for, nurse practitioners (NPs), certified nurse midwives (midwives) and nurse anesthetists (anesthetists) in hospitals in New York City; (b) the practice patterns of NPs, midwives, and anesthetists currently employed in hospitals; and (c) current and projected enrollment and curriculum in NP, midwifery, and anesthetist education programs in the New York metropolitan area. abstract_id: PUBMED:12172490 The triumph and continuing struggle of nurse practitioners in New South Wales, Australia.
Finally, nurse practitioners in New South Wales, Australia, are legally recognized after more than a decade of struggle. In this article the authors provide an update on the nurse practitioner movement, including the continuing conflict with the medical profession as nursing strives to achieve full implementation and acceptance of the nurse practitioner role in New South Wales. Discussion includes nurse practitioner role legislation and the authorization process. The issues and concerns expressed by the medical and nursing professions are outlined, and the authors make recommendations to strengthen and support the advanced professional role and status of nurse practitioners. abstract_id: PUBMED:9928732 Risk behavior and HIV infection among new drug injectors in the era of AIDS in New York City. Objective: To examine HIV risk behavior and HIV infection among new initiates into illicit drug injection in New York City. Design And Methods: Cross-sectional surveys of injecting drug users (IDUs) recruited from a large detoxification treatment program (n=2489) and a street store-front research site (n=2630) in New York City from 1990 through 1996. Interviews covering demographics, drug use history, and HIV risk behavior were administered; serum samples were collected for HIV testing. Subjects were categorized into two groups of newer injectors: very recent initiates (just began injecting through 3 years) and recent initiates (injecting 4-6 years); and long-term injectors (injecting ≥ 7 years). Results: 954 of 5119 (19%) study subjects were newer injectors, essentially all of whom had begun injecting after knowledge about AIDS was widespread among IDUs in the city. New injectors were more likely to be female and white than long-term injectors, and new injectors were more likely to have begun injecting at an older age (median age at first injection for very recent initiates, 27 years; median age at first injection for recent initiates, 25 years; compared with median age at first injection for long-term injectors, 17 years). The newer injectors generally matched the long-term injectors in frequencies of HIV risk behavior; no significant differences were found among these groups on four measures of injection risk behavior. HIV infection was substantial among the newer injectors: HIV prevalence was 11% among the very recent initiates and 18% among the recent initiates. Among the new injectors, African Americans, Hispanics, females, and men who engaged in male-male sex were more likely to be infected. Conclusions: The new injectors appear to have adopted the reduced risk injection practices of long-term injectors in the city. HIV infection among new injectors, however, must still be considered a considerable public health problem in New York City. Answer: The question of whether nurse injectors are the new norm is addressed in several of the provided abstracts. According to the abstract from PUBMED:24947473, there has been an increase in nonaesthetic health professionals, including nurses, performing Botox®/filler injections. The study surveyed plastic surgeons' perspectives on this trend and found that while plastic surgeons consider themselves and dermatologists the most capable injectors, they acknowledge that nurses are as capable as plastic surgeons in administering vaccines. This suggests that nurses are increasingly taking on roles traditionally held by doctors, at least in the context of administering certain types of injections.
The abstract from PUBMED:33180620 further supports the idea that nurse injectors are becoming more common, at least in the context of intravitreal injections. It describes a successful nurse-led intravitreal injections service where nurse practitioners contribute to the majority of injections. This indicates that with proper training, nurses can independently perform these procedures, which were traditionally done by doctors. The abstract from PUBMED:7853064 discusses the increased use of nurse practitioners, certified nurse midwives, and nurse anesthetists in hospitals in New York City, which is a response to the shortage of primary care physicians and the demands of managed care programs for cost containment. This suggests a broader trend of nurses taking on more advanced roles in healthcare settings. The abstract from PUBMED:12172490 discusses the legal recognition of nurse practitioners in New South Wales, Australia, after a long struggle, indicating a shift towards greater acceptance and integration of nurse practitioners into the healthcare system. In summary, the provided abstracts suggest that nurse injectors are increasingly becoming a norm in various healthcare settings, taking on roles that involve administering injections, which were traditionally performed by physicians. This shift seems to be driven by a combination of factors, including the need for cost containment, the shortage of primary care physicians, and the recognition of the capabilities of nurses with appropriate training (PUBMED:24947473, PUBMED:33180620, PUBMED:7853064, PUBMED:12172490).
Instruction: Can pharmaco-DSA of the kidney replace intraoperative rapid biopsy diagnosis? Abstracts: abstract_id: PUBMED:8679731 Can pharmaco-DSA of the kidney replace intraoperative rapid biopsy diagnosis? Question: Can primary nephrectomy be performed without preliminary sample excision of the tumor if pharmaco-angiography of the kidney has demonstrated the typical tumor vascularization? Material And Method: To clarify this question in 32 patients with "displacing mass" of the kidney, verified in sonography and computer-tomography, or hematuria of unknown origin, we prospectively performed and additional pharmaco-angiography of the respective kidney. Results: In 18 patients with tumor vascularization in the pharmaco-angiography, intraoperatively we found 15 malignant renal cell carcinomas, 1 patient with transitional cell carcinoma of the renal pelvis, 1 leiomyosarcoma, and 1 high-differentiated tumor of only 2 cm in diameter with unclear dignity, which was treated by enucleation. Conclusion: In case of an intrarenal lesion of more than 3 cm in diameter and additional tumor vascularization seen in selective pharmaco-angiography, the kidney undoubtedly can be removed by primary nephrectomy without a preliminary sample excision to confirm the diagnosis. For tumors with a diameter of less than 3 cm and additional tumor-vascularization, the option should be enucleation. If there is a "tumor" without typical malignant vascularization, the exploration by sample excision should be performed. Depending on the histological result the tumor should be removed by enucleation or nephrectomy. abstract_id: PUBMED:21187882 Role of scrape cytology in the intraoperative diagnosis of tumor. Background: Rapid diagnosis of surgically removed specimens has created many controversies and a single completely reliable method has not yet been developed. Histopathology of a paraffin section remains the ultimate gold standard in tissue diagnosis. Frozen section is routinely used by the surgical pathology laboratories for intraoperative diagnosis. The use of either frozen section or cytological examination alone has an acceptable rate (93-97%) of correct diagnosis, with regard to interpretation of benign versus malignant. Aim: To evaluate the utility of scrape cytology for the rapid diagnosis of surgically removed tumors and its utilisation for learning cytopathology. Materials And Methods: 75 surgically removed specimens from various organs and systems were studied. Scrapings were taken from each specimen before formalin fixation and stained by modified rapid Papanicolaou staining. Results: Of the 75 cases studied, 73 could be correctly differentiated into benign and malignant tumors, with an accuracy rate of 97.3%. Conclusions: Intraoperative scrape cytology is useful for intraoperative diagnosis of tumor, where facilities for frozen section are not available. The skill and expertise developed by routinely practicing intraoperative cytology can be applied to the interpretation of fine needle aspirate smears. Thus, apart from its diagnostic role, intraoperative cytology can become a very useful learning tool in the field of cytopathology. abstract_id: PUBMED:7046285 Intraoperative rapid frozen section diagnosis of brain biopsies (author's transl) Cryostat sections guarantee a correct diagnosis in 98.4% of our brain biopsy cases. 
In skilled hands, small pieces of tissue can yield excellent sections, and in our laboratory cryostat sections are made as a rapid routine method of diagnosis, with the advantage that the material can then be used for chemical, histochemical, cytophotometric or ultrastructural studies as well as for tissue culture. abstract_id: PUBMED:33866524 Biopsy findings after detection of de novo donor-specific antibodies in renal transplant recipients: a single center experience. Background: De novo donor-specific antibodies (DSA) are associated with an increased risk of antibody-mediated rejection and a substantial reduction of allograft survival. We hypothesized that detection of DSA should prompt a biopsy even in the absence of proteinuria and loss of estimated glomerular filtration rate (eGFR). However, data on a population without proteinuria or loss of kidney function are scant, and this is the main novelty of our study design. Methods: Single center retrospective analysis of biopsy findings after detection of de novo DSA. One-hundred-thirty-two kidney and pancreas-kidney transplant recipients were included. Eighty-four of these patients (63.6%) underwent allograft biopsy. At the time of biopsy, n = 50 (59.5%) had a protein/creatinine ratio (PCR) > 300 mg/g creatinine and/or a loss of eGFR ≥ 10 ml/min in the previous 12 months, whereas 40.5% did not. Diagnosis of rejection was performed according to Banff criteria. Results: Seventy-seven (91.7%) of the biopsies had signs of rejection (47.6% antibody-mediated rejection (ABMR), 13.1% cellular, 20.2% combined, 10.7% borderline). Among subjects without proteinuria or loss of eGFR ≥ 10 ml/min/a (n = 34), 29 patients (85.3%) showed signs of rejection (44.1% antibody-mediated (ABMR), 14.7% cellular, 11.8% combined, 14.7% borderline). Conclusion: The majority of subjects with de novo DSA have histological signs of rejection, even in the absence of proteinuria and deterioration of graft function. Thus, it appears reasonable to routinely perform an allograft biopsy after the detection of de novo DSA. abstract_id: PUBMED:19483425 Intraoperative consultation and smear cytology in the diagnosis of brain tumours. Background: Intraoperative smear cytology provides a rapid and reliable intraoperative diagnosis and guidance to the neurosurgeon during surgical resection and lesion targeting. It also helps the surgeon to monitor and modify the approach at surgery. Objectives: 1) To assess the utility of intraoperative smear cytology and correlate with the final histopathological diagnosis. 2) To describe the cytomorphological features of common brain tumours in smear preparation. Materials And Methods: The material for this study was obtained from 100 consecutive biopsies of central nervous system neoplasms sent for intraoperative consultation. Smears were prepared from the biopsy samples sent in isotonic saline for immediate processing. The smears were stained by the rapid Haematoxylin and Eosin method. The cytomorphological features were noted and correlated with paraffin section findings. Results: Of the total 100 cases, 86 showed accuracy when compared with histopathological diagnosis. This was comparable with other studies. Of the remaining, two cases were frank errors and 12 cases showed partial correlation, with five cases showing incomplete typing of the cell type and seven showing a discrepancy in grading of tumours. The error percentage was 14%. Correlation with clinical details and radiological findings was helpful in improving the accuracy rate.
Conclusions: Smear technique is a fairly accurate, relatively safe, rapid, simple, easily reproducible and cost effective tool to diagnose brain tumours. Smear cytology is of great value in intraoperative consultation of central nervous system pathology. abstract_id: PUBMED:36284616 Intraoperative application of a new-generation 3D IV-DSA technology in resection of a hemorrhagic cerebellar AVM. Although intravenous digital subtraction angiography (IV-DSA), cone-beam CT, and rotational angiography are well-established technologies, using them in a single system in the hybrid operating room to acquire high-quality noninvasive 3D images is a recent development. This video demonstrates microsurgical excision of a ruptured cerebellar arteriovenous malformation (AVM) in a 66-year-old male followed by intraoperative IV-DSA acquisition using a new-generation system (Artis Icono). IV-DSA confirmed in real time that no residual remained following excision without the need to reposition the patient. To the best of the authors' knowledge, this is the first surgical video to demonstrate the simplified workflow and application of this technology in neurovascular surgery. The video can be found here: https://youtu.be/bo5ya9DQQPw. abstract_id: PUBMED:3366455 Pancreatic acinar ectasia and intraoperative needle biopsy. Intraoperative needle biopsy of the pancreas showing pancreatic acinar ectasia can present a problem in differential diagnosis from pancreatic carcinoma. Although this event has previously been described as an incidental postmortem finding, with the increasing use of intraoperative pancreatic biopsy, it is probable that it will be encountered more frequently. The surgical pathologist must be able to distinguish this entity from well-differentiated primary pancreatic adenocarcinoma on frozen section. abstract_id: PUBMED:2796345 Intraoperative pancreatic biopsy--a diagnostic dilemma. The intraoperative diagnostic dilemma of pancreatic cancer vs. chronic pancreatitis often remains unresolved. In the literature, diagnosis based on intraoperative pancreatic biopsy is a matter of controversy. Our study comprised 70 patients with a suspected space occupying pancreatic process who were operated on with the primary goal of arriving at a speedy and precise diagnosis, according to which the appropriate surgery for the specific patient would be performed. Frozen section showed that 44 patients had malignancy of the pancreas; in three patients, there was a false-negative result. In four patients, the histological picture was consistent with chronic pancreatitis. In the remaining 19 patients, biopsy was not performed and the diagnosis was made on the basis of intraoperative inspection and palpation of pancreas. On reviewing the patients, we found that the surgical risk involved in biopsy is quite low and that a high price was paid for diagnosis by inspection and palpation alone. The risk vs. benefit aspect of intraoperative pancreatic biopsy is discussed. abstract_id: PUBMED:3523725 Intraoperative sonography to localize a kidney tumor The authors present the case of a 61-year old man with microhematuria, in whom the preoperative diagnostic work-up (urography, percutaneous sonography, CT, retrograde pyelogram) had led to the diagnosis of a right sided renal tumour. Intraoperatively a kidney tumour was neither seen nor palpated. However, the mass was clearly outlined by intraoperative sonography. The following nephrectomy revealed an intrarenal adenocarcinoma of the kidney of 2.5 cm diameter. 
abstract_id: PUBMED:16316946 Value of electron microscopy in kidney biopsy diagnosis. Kidney biopsy reports given during 2003 were collected from the authors' pathology database. A total of 111 biopsies were performed. Five tumor samples were not studied with electron microscopy (EM). Of the remaining 106 biopsies, 85 were studied with EM. EM was not performed in 10/24 transplant biopsies, or in 11/82 cases of suspected primary kidney disease. The role of EM was evaluated by grouping the samples in 3 categories: (1) EM was essential for diagnosis, (2) EM contributed to the interpretation and cleared uncertainties, and (3) EM had no influence on the diagnostic process. In transplant biopsies EM influenced the final diagnosis in 86% of cases (category 2). In biopsies performed for primary kidney disease EM was essential for diagnosis in 18.3% clearly contributed in 53.5%, and had no influence on the final diagnosis in 28.2% of cases. The study suggests that the importance of EM has not decreased during the last few years. Because only about 25% of the EM reports did not have any influence on the diagnostic process, it is recommended that kidney biopsy protocols should include EM in all biopsy cases, or at least tissue should be reserved for EM studies of all cases. Because of the influence of EM on the diagnostic process the need for EM in pathology training should be emphasized. Answer: Pharmaco-DSA (pharmaco-digital subtraction angiography) of the kidney has been evaluated as a potential alternative to intraoperative rapid biopsy diagnosis in certain cases. According to the study in abstract PUBMED:8679731, pharmaco-angiography of the kidney was able to demonstrate typical tumor vascularization in patients with a "displacing mass" of the kidney. The results indicated that in the presence of an intrarenal lesion larger than 3 cm in diameter with additional tumor vascularization seen in selective pharmaco-angiography, primary nephrectomy could be performed without preliminary sample excision to confirm the diagnosis. However, for tumors smaller than 3 cm with additional tumor vascularization, enucleation was suggested. If there was a "tumor" without typical malignant vascularization, exploration by sample excision was recommended. While pharmaco-DSA can provide valuable information regarding tumor vascularization, it is not a direct tissue diagnosis method like intraoperative rapid biopsy diagnosis. Techniques such as scrape cytology (PUBMED:21187882), rapid frozen section diagnosis (PUBMED:7046285), and smear cytology (PUBMED:19483425) have been shown to have high accuracy rates in differentiating between benign and malignant tumors and providing rapid intraoperative diagnoses. These methods are based on the examination of actual tissue samples, which can offer a definitive diagnosis that imaging techniques alone cannot provide. Therefore, while pharmaco-DSA can be a useful tool in the preoperative assessment of renal masses and may allow for primary nephrectomy in certain cases, it does not completely replace the need for intraoperative rapid biopsy diagnosis, which remains the gold standard for tissue diagnosis (PUBMED:21187882, PUBMED:7046285, PUBMED:19483425). It is important to consider the size and vascularization characteristics of the lesion, as well as the availability and reliability of rapid diagnostic techniques, when deciding on the appropriate approach for each individual case.
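The answer above essentially describes a two-variable decision rule from PUBMED:8679731: lesion size and the presence of typical tumor vascularization on pharmaco-angiography. The sketch below is a schematic restatement of that rule for illustration only; the function name and the handling of the exact 3 cm boundary are assumptions of ours, and real surgical decisions depend on far more than these two inputs.

```python
def suggested_approach(lesion_diameter_cm: float, tumor_vascularization: bool) -> str:
    """Schematic restatement of the decision rule described above
    (PUBMED:8679731); illustrative only, not a clinical algorithm.
    """
    if not tumor_vascularization:
        # No typical malignant vascularization on pharmaco-angiography:
        # explore with a sample excision (biopsy) first.
        return "sample excision, then enucleation or nephrectomy per histology"
    if lesion_diameter_cm > 3.0:
        # Vascularized lesion larger than 3 cm: primary nephrectomy
        # without preliminary sample excision.
        return "primary nephrectomy"
    # Vascularized lesion of 3 cm or less: enucleation is the option given.
    # (The source does not state how a lesion of exactly 3 cm is handled;
    #  the boundary here is an arbitrary choice for the sketch.)
    return "enucleation"

print(suggested_approach(4.5, True))   # -> primary nephrectomy
print(suggested_approach(2.0, True))   # -> enucleation
print(suggested_approach(5.0, False))  # -> sample excision, then ...
```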
Instruction: Is glycoprotein IIb/IIIa antagonism as effective in women as in men following percutaneous coronary intervention? Abstracts: abstract_id: PUBMED:12354432 Is glycoprotein IIb/IIIa antagonism as effective in women as in men following percutaneous coronary intervention?. Lessons from the ESPRIT study. Objective: The study was done to determine whether eptifibatide, a platelet glycoprotein (GP) IIb/IIIa antagonist, prevents ischemic complications following percutaneous coronary interventions (PCIs) in women as well as in men. Background: Eptifibatide reduces ischemic complications after nonurgent coronary stent interventions. Methods: We compared outcomes in women (n = 562) and men (n = 1,502) enrolled in the Enhanced Suppression of the Platelet GP IIb/IIIa Receptor with Integrilin Therapy (ESPRIT) trial of double-bolus eptifibatide during PCI. Results: Women in the ESPRIT trial were older, and more frequently had hypertension, diabetes mellitus, or acute coronary syndromes, but were less likely to have prior PCI or coronary artery bypass graft surgery. The primary end point, a composite at 48 h of death, myocardial infarction (MI), urgent target vessel revascularization (TVR), and unplanned GP IIb/IIIa use, occurred in 10.5% of women and 7.9% of men (p = 0.082). The composite of death, MI, or TVR after one year occurred in 24.5% of women compared with 18% of men (p = 0.0008). At 48 h, eptifibatide reduced the composite of death, MI, and TVR from 14.5% to 6.0% in women versus 9.0% to 6.8% in men. At one year, these differences persisted: 28.9% versus 20.0% for women and 19.5% versus 16.6% for men. No statistical interaction existed between treatment and gender at either 48 h (p = 0.063) or one year (p = 0.2). Bleeding occurred more commonly in women (5.5% vs. 2.6%, p = 0.002), and was more common in eptifibatide-treated women. After adjustment for age, weight, and hypertension, no interaction between treatment and gender was present. Conclusion: Eptifibatide is effective to prevent ischemic complications of PCI in women and may eliminate gender-related differences in PCI outcomes. abstract_id: PUBMED:32933875 Sex Differences in Acute Bleeding and Vascular Complications Following Percutaneous Coronary Intervention Between 2003 and 2016: Trends From the Dartmouth Dynamic Registry. Background: Women undergoing percutaneous coronary intervention (PCI) are at higher risk for bleeding and vascular complications than men. Multiple approaches have been utilized to reduce bleeding in the modern era of PCI, including radial access, reduced GP IIb/IIIa inhibitor use, increased vascular closure device use, smaller sheath size and novel antithrombotic regimens. Nevertheless, few studies have assessed the impact of these techniques on the gap between men and women for such complications following PCI. We sought to quantify bleeding and vascular complications over time between men and women. Methods: We queried The Dartmouth Dynamic Registry for consecutive PCI's performed between January 2003 and June 2016. Demographic information, procedural characteristics, and in-hospital outcomes were collected and compared between men and women over the years. Results: We reviewed 15,284 PCI cases, of which 4384 (29%) were performed in women. Radial access increased from none in 2003 to nearly 40% in 2016. Use of GP IIb/IIIa and femoral access decreased substantially over the same time. 
Bleeding and vascular complication rates decreased significantly in women (13.2% to 3%; 6.5% to 0.8%, respectively) and men (3.5% to 0.7%, 3.4% to 0.7%, respectively). The overall bleeding and vascular complication rates decreased more for women than men, narrowing the gender gap. Conclusions: The incidence of bleeding and vascular complications fell between 2003 and 2016 in both men and women. Vascular complications have become less common over time, and based on our analysis, there was no longer any difference between the sexes for this outcome. Bleeding following PCI has decreased in both sexes over time; however, women continue to bleed more than men. abstract_id: PUBMED:22622120 Does percutaneous coronary intervention in women provide the same results as in men? Ischemic heart disease shows gender differences, both in terms of clinical characteristics and pathophysiological mechanisms. It is still debated whether these characteristics influence the diagnostic and therapeutic approach and the outcomes in female patients treated with percutaneous coronary intervention. Percutaneous coronary intervention in women has been shown to be feasible, safe and effective as it is in men throughout the whole clinical spectrum of ischemic syndromes. There is a solid scientific evidence of a different diagnostic and therapeutic approach to women suffering from ischemic heart disease compared to men, with a tendency to undertreat female patients, despite the worst risk profile at presentation. Women, in fact, less frequently undergo coronary angiography and receive antiplatelet, antithrombotic or anti-ischemic drugs. They experience more bleedings than men after administration of glycoprotein IIb/IIIa inhibitors. Gender differences, therefore, affect more the clinical than the interventional approach. At least in part, this is due to the fact that current guidelines are based on a male model of diagnostics. It would be desirable to analyze cohorts of patients in whom the percentage of individuals of both sexes will be equally represented, or rather, exclusively female cohorts in order to formulate more targeted diagnostic and therapeutic indications. abstract_id: PUBMED:23916503 Bivalirudin in acute coronary syndromes and percutaneous coronary intervention: should we use it? Major bleeding remains a major risk factor for percutaneous coronary intervention of acute coronary syndromes and is associated with higher morbidity, mortality, prolonged hospital stay and costs. With the recognition that bleeding is an important factor in patient outcomes, the prevention of bleeding has become as important a goal as the prevention of ischaemia. The direct thrombin inhibitor bivalirudin has been shown to reduce ischaemia and importantly, is associated with less bleeding. In this article we review the evidence base that supports the use of bivalirudin across all spectrums of coronary syndromes and percutaneous coronary intervention. An algorithm for the use of bivalirudin in high risk subgroups and coronary syndromes is suggested. abstract_id: PUBMED:28582206 Antiplatelet Therapy in Percutaneous Coronary Intervention. Platelets play a key role in mediating stent thrombosis, which is the major cause of ischemic events immediately after percutaneous coronary intervention (PCI). Antiplatelet therapy is therefore the cornerstone of antithrombotic therapy after PCI. However, the use of antiplatelet agents increases bleeding risk, with more potent antiplatelet agents further increasing bleeding risk. 
In the past 5 years, potent and fast-acting P2Y12 inhibitors have augmented the antiplatelet armamentarium available to interventional cardiologists. This article reviews the preclinical and clinical data surrounding these new agents, and discusses the significant questions and controversies that still exist regarding the optimal antiplatelet strategy. abstract_id: PUBMED:28582207 Antithrombotic Therapy in Percutaneous Coronary Intervention. Numerous agents are available for anticoagulation during percutaneous coronary intervention (PCI). These agents have been evaluated in a variety of clinical settings, including elective, urgent, and emergent PCI. Although unfractionated heparin remains a frequent choice, accumulating data support the use of newer agents to mitigate bleeding risk, especially in the setting of femoral access and concomitant use of glycoprotein IIb/IIa receptor inhibition. With several antithrombotic agents available, an assessment must be made regarding the ischemic and bleeding risks. This article summarizes existing data examining the benefits and limitations of the various anticoagulants and guidelines for their use. abstract_id: PUBMED:27886821 Cangrelor: Pharmacology, Clinical Data, and Role in Percutaneous Coronary Intervention. In clinical trials that assessed the safety and efficacy of cangrelor during percutaneous coronary intervention (PCI), cangrelor was administered as a 30-μg/kg bolus followed by a 4-μg/kg/min infusion for at least 2 hours or the duration of the PCI, whichever was longer. Cangrelor is currently indicated as an adjunct to PCI to reduce the risk of myocardial infarction, repeat coronary revascularization, and stent thrombosis in patients who have not been treated with a P2Y12 platelet inhibitor and are not being given a glycoprotein IIb/IIIa inhibitor. abstract_id: PUBMED:29339168 Selection of P2Y12 Inhibitor in Percutaneous Coronary Intervention and/or Acute Coronary Syndrome. The P2Y12 receptor plays a critical role in the amplification of platelet aggregation in response to various agonists and stable thrombus generation at the site of vascular injury leading to deleterious ischemic complications. Therefore, treatment with a P2Y12 receptor blocker is a major effective strategy to prevent ischemic complications in high-risk patients with acute coronary syndrome (ACS) and patients undergoing percutaneous coronary intervention (PCI). The determination of optimal platelet inhibition is based on maximizing antithrombotic properties while minimizing bleeding risk and is critically dependent on individual patient's propensity for thrombotic and bleeding risks. Immediately after ACS and during PCI, where highly elevated thrombotic activity is present, a loading dose administration with a potent P2Y12 receptor blocker such as ticagrelor or prasugrel is preferred. In stable coronary artery disease patients undergoing PCI, clopidogrel is widely used. In addition, in patients with ST-segment elevation myocardial infraction who cannot take oral medications, a fast acting intravenous glycoprotein IIb/IIIa inhibitor or P2Y12 receptor blocker, cangrelor, may add clinical benefits. During long term therapy, a strategy that prevents ischemic risk while avoiding excessive bleeding risk is similarly desired. Although up to one year dual antiplatelet therapy (DAPT) is recommended in patients undergoing elective stenting, the available data support the anti-ischemic benefit of prolonged DAPT (more than1 year) in patients with prior MI. 
In addition to the DAPT risk calculator tool, future risk assessment methods that analyze intrinsic thrombogenicity and atherosclerotic coronary burden may further identify the optimal candidate for prolonged DAPT to improve net clinical outcomes. abstract_id: PUBMED:26049380 Re-infarction after primary percutaneous coronary intervention. Purpose Of Review: Thrombus formation, usually on a ruptured atherosclerotic plaque, is pivotal in the pathogenesis of ST segment elevation myocardial infarction (STEMI). This thrombus formation provides the milieu for re-occlusion of the infarct-related artery, the main location of re-infarction post-STEMI. Although rates of re-infarction are lower after reperfusion by primary percutaneous coronary intervention (PCI) than after fibrinolytic therapy, re-infarction remains a major cause of morbidity and mortality. Recent Findings: The predominant cause of re-infarction after primary PCI is stent thrombosis. Two recent trials [A Prospective, Randomized Trial of Ambulance Initiation of Bivalirudin vs. Heparin ± Glycoprotein IIb/IIIa Inhibitors in Patients with STEMI Undergoing Primary PCI (EUROMAX) and Unfractionated heparin versus bivalirudin in primary percutaneous coronary intervention (HEAT-PPCI)] have each reported higher rates of stent thrombosis in the first 24 h after primary PCI in patients assigned to receive bivalirudin, which affects the balance of risks and benefit of bivalirudin post-STEMI. Also, in a subanalysis of the Platelet Inhibition And Patient Outcomes trial, ticagrelor reduces re-infarction compared with clopidogrel in patients with STEMI after primary PCI. Other nonpharmacological or mechanical interventions during primary PCI, with the exception of newer-generation drug-eluting stents in the Swedish Coronary Angiography and Angioplasty Registry, have not affected rates of re-infarction. Summary: Re-infarction remains a major cause of morbidity and mortality. Re-infarction rates are altered by pharmacological strategy and stent selection in primary PCI. The design of future trials to detect possible treatment differences in relatively low event rates will provide challenges, and may require more novel strategies such as administrative data collection for patient characteristics and key outcomes. abstract_id: PUBMED:27886817 Antithrombotic Therapy to Reduce Ischemic Events in Acute Coronary Syndromes Patients Undergoing Percutaneous Coronary Intervention. Antithrombotic therapy is essential in the prevention of periprocedural death and myocardial infarction during and after percutaneous coronary intervention. In the pathogenesis of acute coronary syndromes (ACS), both platelets and the coagulation cascade play an important role. Therefore, periprocedural antithrombotic therapy is even more important in ACS than in elective PCI. The most used agents are aspirin, platelet P2Y12 blockers, platelet glycoprotein IIb/IIIa blockers, and parenteral anticoagulants. The P2Y12 blockers must be continued at least 12 months. High-risk patients should be treated with glycoprotein IIb/IIIa receptor antagonists, especially those undergoing primary angioplasty for ST-elevation acute coronary syndrome. Answer: Glycoprotein IIb/IIIa antagonism appears to be effective in women following percutaneous coronary intervention (PCI), similar to its effectiveness in men. The ESPRIT study showed that eptifibatide, a platelet glycoprotein IIb/IIIa antagonist, reduced ischemic complications after nonurgent coronary stent interventions in both women and men. 
Specifically, at 48 hours, eptifibatide reduced the composite of death, myocardial infarction (MI), and urgent target vessel revascularization (TVR) from 14.5% to 6.0% in women and from 9.0% to 6.8% in men. At one year, these benefits persisted, with a reduction from 28.9% to 20.0% in women and from 19.5% to 16.6% in men. There was no statistical interaction between treatment and gender at either 48 hours (p = 0.063) or one year (p = 0.2), indicating that the treatment effect was consistent across genders (PUBMED:12354432). However, it is important to note that women undergoing PCI are at higher risk for bleeding and vascular complications than men (PUBMED:32933875). Women experience more bleedings than men after administration of glycoprotein IIb/IIIa inhibitors (PUBMED:22622120). Despite this increased risk, the use of glycoprotein IIb/IIIa antagonists in women has been shown to be feasible, safe, and effective, and may help eliminate gender-related differences in PCI outcomes (PUBMED:12354432). Overall, while women may have a higher risk of bleeding complications, glycoprotein IIb/IIIa antagonism is effective in preventing ischemic complications following PCI in women, similar to its effectiveness in men.
Instruction: Does documentation in nursing records of nutritional screening on admission to hospital reflect the use of evidence-based practice guidelines for malnutrition? Abstracts: abstract_id: PUBMED:24215547 Does documentation in nursing records of nutritional screening on admission to hospital reflect the use of evidence-based practice guidelines for malnutrition? Purpose: To describe the documentation of nutrition-related data and ward referrals to dieticians in a Belgian university hospital. Method: Retrospective analysis of 506 nursing records. Findings: Body weight and height are documented in 22%. "Feeding assistance" and "usual food intake pattern" are documented in 68% of all cases, and in 71% it is marked whether the patient is on a diet. Eight percent of the patients are referred to a dietician, but the indications for these referrals are not clear. Conclusion: Given the poor documentation, most likely these patients are not adequately screened for malnutrition as recommended. Implications For Nursing Practice: Nurses' documentation of nutrition-related data should be improved to facilitate treatment of malnutrition with tailored multidisciplinary interventions. abstract_id: PUBMED:18510576 Assessment and documentation of patients' nutritional status: perceptions of registered nurses and their chief nurses. Aims: To study, within municipal care and county council care, (1) chief nurses' and registered nurses' perceptions of patient nutritional status assessment and nutritional assessment/screening tools, (2) registered nurses' perceptions of documentation in relation to nutrition and advantages and disadvantages of a documentation model. Background: Chief nurses and registered nurses have a responsibility to identify malnourished patients and those at risk of malnutrition. Design And Methods: In this descriptive study, 15 chief nurses in municipal care and 27 chief nurses in county council care were interviewed by telephone via a semi-structured interview guide. One hundred and thirty-one registered nurses (response rate 72%) from 14 municipalities and 28 hospital wards responded to the questionnaire, all in one county. Results: According to the majority of chief nurses and registered nurses, only certain patients were assessed, on admission and/or during the stay. Nutritional assessment/screening tools and nutritional guidelines were seldom used. Most of the registered nurses documented nausea/vomiting, ability to eat and drink, diarrhoea and difficulties in chewing and swallowing, while energy intake and body mass index were rarely documented. However, the majority documented their judgement about the patient's nutritional condition. The registered nurses perceived the VIPS model (a Swedish nursing documentation model) as a guideline as well as a model obstructing the information exchange. Differences were found between nurses (chief nurses/registered nurses) in municipal care and county council care, but not between registered nurses and their chief nurses. Conclusions: Not all patients are nutritionally assessed, and important nutritional parameters are not documented. Nutritionally compromised patients may remain unidentified and not properly cared for. Relevance To Clinical Practice: Assessment and documentation of the patients' nutritional status should be routinely performed in a more structured way in both municipal care and county council care. There is a need for increased nutritional nursing knowledge.
abstract_id: PUBMED:12192752 Nutritional care for adults in hospital. Despite the fact that up to 40% of hospital patients may be malnourished, many nutritional referrals are inconsistent or inappropriate. Recent research has raised awareness of nutrition, but wide variations remain in the assessment and referral procedures used by hospital trusts. The final best practice statement in our series emerged from the responses of nursing and dietetic staff across Scotland. The statement focuses on five aspects of nutritional care: admission to hospital; nursing management of nutritional care; nutritional screening and documentation; criteria for nutritional referrals; and education and training. abstract_id: PUBMED:28815783 Stuck in tradition-A qualitative study on barriers for implementation of evidence-based nutritional care perceived by nursing staff. Aims And Objectives: To explore the barriers for nutritional care as perceived by nursing staff at an acute orthopaedic ward, aiming to implement evidence-based nutritional care. Background: Previous studies indicate that nurses recognise nutritional care as important, but interventions are often lacking. These studies show that a range of barriers influence the attempt to optimise nutritional care. Before the implementation of evidence-based nutritional care, we examined barriers for nutritional care among the nursing staff. Design: Qualitative study. Methods: Four focus groups with thirteen members of the nursing staff were interviewed between October 2013-June 2014. The interview guide was designed according to the Theoretical Domains Framework. The interviews were analysed using qualitative content analysis. Results: Three main categories emerged: lacking common practice, failing to initiate treatment and struggling with existing resources. The nursing staff was lacking both knowledge and common practice regarding nutritional care. They felt they protected patient autonomy by accepting a patient's reluctance to eat or to get a feeding tube. The lack of nutritional focus from doctors decreased the nursing staff's focus, leading to nonoptimal nutritional treatment. Competing priorities, physical setting and limited nutritional supplements were believed to hinder nutritional care. Conclusion: The results suggest that nutritional care is in a transitional state from experience- to evidence-based practice. Barriers for nutritional care are grounded in lack of knowledge among nursing staff and insufficient collaboration between nursing staff and the doctors. There is a need for nutritional education for the nursing staff and better support from the organisation to help nursing staff provide evidence-based nutritional care. Relevance To Clinical Practice: This study contributes valuable knowledge before the implementation of evidence-based nutritional care. The study provides an understanding of barriers for nutritional care and presents explanations as to why nutritional care has failed to become an integrated part of the daily treatment and care.
Documentation of two groups of 200 acute stroke patients admitted to medical and care of the elderly wards of an acute NHS Trust in South London was surveyed using a checklist before and after implementation of 24 guidelines for nutritional screening, assessment and support. Guidelines were based on systematic literature review and developed by consensus in a nurse-led multiprofessional group; implemented via a context-specific, multifaceted strategy including opinion leaders and educational programmes linked to audit and feedback. Staff Outcomes: Compliance with guidelines by doctors, nurses, therapists. Patient Outcomes: Changes in Barthel Index scores and Body Mass Index in hospital, infective complications, length of stay, discharge destination. Results: Statistically significant improvements in compliance with 15 guidelines occurred in the post-test group. Infective episodes showed a significant reduction in the post-test group but other patient outcomes were unaffected. Conclusions: Implementation of evidence-based guidelines for nutritional support following acute stroke using a multifaceted strategy was associated with improvements in documented practice and selected patient outcomes. abstract_id: PUBMED:25713793 Comparison of nursing records and the Catholic Medical Center Nutritional Risk Screening as a nutrition screening tool for intensive care unit patients. In the present study, we aimed to compare the results from nutritional risk screening based on nursing records with those using the Catholic Medical Center Nutritional Risk Screening (CMCNRS) tool. A cross-sectional study was performed involving 91 patients aged ≥ 18 years from an intensive care unit. We collected general characteristics of the patients, and nutrition screening was conducted for each patient by using a computerized hospital program for the nursing records as well as the CMCNRS conducted by clinical dietitians. The subjects were aged 64.0 ± 17.5 years, and 52 (57.1%) patients had an NPO (nothing by mouth) status. Neurological disease was the most common diagnosis (25.3%). Compared with the CMCNRS results from the clinical dietitians, the results for the nursing records had a sensitivity of 40.5% (95% CI 32.0-40.5) and a specificity of 100.0% (95% CI 92.8-100.0). The agreement was fair between the CMCNRS results obtained by clinical dietitians and the nursing records (k = 0.423). Analysis of the errors from the screening using the nursing records revealed significant differences for all subjective indicators (p < 0.001), compared with the CMCNRS by the clinical dietitians. Thus, after assessing the methods used for nutrition screening and the differences in the search results regarding malnourished status, we noted that the nursing records had a lower sensitivity than the screening by the CMCNRS.
The protocol was approved by the hospital committee of protocols and procedures and disseminated through the intranet. NRS 2002 was included in the diet prescription software to be implemented by the nursing staff of the hospital wards and as a direct communication system with the nutrition unit. Three phases were designed: pilot phase, implementation phase, and consolidation phase. Results: In the pilot phase, NRS 2002 was implemented in 2 hospital units to monitor the software. The implementation phase was carried out in the same units, and all action protocols related to it were verified. The consolidation phase consisted of sequential extension of the protocol to the other hospital units. Conclusions: Implementation of nutritional screening at hospital admission is a long and complex process that requires involvement of many stakeholders. Computer software has allowed for a rapid, simple, and automatic process, so that the results of the screening are immediately available to the nursing staff of the nutrition unit and activate the nutritional protocols when required. abstract_id: PUBMED:23173662 Nutritional screening among patients with cancer in an acute care hospital: a best practice implementation project. Aim: This project sought to improve the nutritional screening practice among registered nurses in caring for adult patients with cancer. Methods: This project used the pre- and post-implementation audit strategy using the Joanna Briggs Institute Practical Application of Clinical Evidence System and Getting Research into Practice (JBI-PACES) module. The audit, feedback and re-audit sequence was the strategy used to improve clinical practice. This project ran over three phases during a 5-month period from July to November 2011. Results: This project utilised three criteria from the JBI-PACES. The criteria are: (i) a validated screening tool is used to identify patients at risk for malnutrition; (ii) patients are screened upon admission using a validated screening tool; and (iii) appropriate action plans are initiated when patients at risk for malnutrition are identified. According to the pre-implementation audit, only Criterion 1 showed 100% compliance. Criteria 2 and 3 showed 96% and 46% compliance, respectively.
Methods: Between January 2011-February 2012, focus groups were held using a semi-structured discussion guide with nine groups of health professionals (n = 80) from one hospital: four with nurses, three with doctors and two with dietitians. Discussions were audio-recorded, transcribed and coded into themes and sub-themes, which were then depicted in a thematic map and illustrated with verbatim quotes. Findings: Three strategies for sustaining effective nutritional practice emerged: establishing routines to ensure screening was undertaken; re-organizing aspects of care to promote good practice; developing innovative approaches. Issues to be addressed were the perceived disconnection between mandatory screening and the delivery of effective care, a requirement for nutrition education, organizational constraints of a large university hospital and the complexities of multidisciplinary working. Conclusion: Professionals seeking to improve nutritional care in hospitals need to understand the interaction of system and person to facilitate change. Nursing staff need to be able to exercise autonomy and the hospital system must offer enough flexibility to allow wards to organize nutritional screening and care in a way that meets the needs of individual patients. abstract_id: PUBMED:9544080 Audit of nutritional practice and knowledge. An audit of nutritional practice was carried out in one NHS trust. Findings demonstrated areas for improvement, particularly in the documentation of nutritional status and in criteria for referral for specialist help. An assessment sheet and a nursing care plan were developed to assess patients' risk of being malnourished and to provide a recognised place for recording nutrition-related information. Answer: The documentation in nursing records of nutritional screening on admission to hospital does not consistently reflect the use of evidence-based practice guidelines for malnutrition. Several studies have highlighted issues with the documentation and implementation of nutritional screening in hospital settings. A retrospective analysis of 506 nursing records in a Belgian university hospital found that body weight and height were documented in only 22% of cases, and only 8% of the patients were referred to a dietician, with unclear indications for these referrals. This suggests that patients are not adequately screened for malnutrition as recommended (PUBMED:24215547). Similarly, a study on the perceptions of registered nurses and their chief nurses revealed that nutritional assessment/screening tools and nutritional guidelines were seldom used, and important nutritional parameters were not documented. This indicates that nutritionally compromised patients may remain unidentified and not properly cared for (PUBMED:18510576). Research across Scotland focusing on nutritional care for adults in hospital also highlighted inconsistencies and inappropriate referrals, with wide variations in assessment and referral procedures used by hospital trusts (PUBMED:12192752). A qualitative study on barriers for implementation of evidence-based nutritional care perceived by nursing staff identified a lack of knowledge and common practice regarding nutritional care, insufficient collaboration between nursing staff and doctors, and competing priorities as factors hindering optimal nutritional treatment (PUBMED:28815783). 
Moreover, a comparison of nursing records and the Catholic Medical Center Nutritional Risk Screening (CMCNRS) tool for intensive care unit patients showed that nursing records had a lower sensitivity than the screening by the CMCNRS, indicating that nursing records may not be as effective in identifying patients at nutritional risk (PUBMED:25713793). In conclusion, the documentation in nursing records often does not reflect the use of evidence-based practice guidelines for malnutrition, and there is a need for improved nutritional screening and documentation practices in hospitals to ensure that malnourished patients are identified and receive appropriate care.
Instruction: Does comorbid posttraumatic stress disorder affect the severity and course of psychotic major depressive disorder? Abstracts: abstract_id: PUBMED:20021993 Does comorbid posttraumatic stress disorder affect the severity and course of psychotic major depressive disorder? Background: Major depressive disorder (MDD) and posttraumatic stress disorder (PTSD) are commonly comorbid conditions that result in greater severity, chronicity, and impairment compared with either disorder alone. However, previous research has not systematically explored the potential effects of the psychotic subtyping of MDD and comorbid PTSD. Method: The sample in this retrospective case-control study conducted from December 1995 to August 2006 consisted of psychiatric outpatients with DSM-IV-diagnosed psychotic MDD with PTSD, psychotic MDD without PTSD, or nonpsychotic MDD with PTSD presenting for clinic intake. Clinical indices of severity, impairment, and history of illness were assessed by trained diagnosticians using the Structured Clinical Interview for DSM-IV Axis I Disorders supplemented by items from the Schedule for Affective Disorders and Schizophrenia. Results: In terms of current severity and impairment, the psychotic MDD with PTSD (n = 34) and psychotic MDD only (n = 26) groups were similar to each other, and both tended to be more severe than the nonpsychotic MDD with PTSD group (n = 263). In terms of history of illness, the psychotic MDD with PTSD group tended to show greater severity and impairment relative to either the psychotic MDD only or nonpsychotic MDD with PTSD groups. Furthermore, the psychotic MDD with PTSD patients had an earlier time to depression onset than patients with either psychotic MDD alone or nonpsychotic MDD with PTSD, which appeared to contribute to the poorer history of illness demonstrated in the former group. Conclusions: Future research should explore the possibility of a subtype of psychotic depression that is associated with PTSD, resulting in a poorer course of illness. The current findings highlight the need for pharmacologic and psychotherapeutic approaches that can be better tailored to psychotic MDD patients with PTSD comorbidity. abstract_id: PUBMED:10202572 Psychotic features and illness severity in combat veterans with chronic posttraumatic stress disorder. Background: Psychotic symptoms may be present in up to 40% of patients with combat-related posttraumatic stress disorder (PTSD). In this study, we hypothesized that severity of psychotic symptoms would also reflect severity of PTSD symptoms in patients with well-defined psychotic features. Methods: Forty-five Vietnam combat veterans with PTSD but without a primary psychotic disorder diagnosis underwent a Structured Clinical Interview for DSM-III-R with Psychotic Screen, and the Clinician Administered PTSD Scale (CAPS). Patients identified as having psychotic features (PTSD-P) (n = 22) also received the Positive and Negative Syndrome Scale (PANSS) and the Hamilton Depression Rating Scale (HDRS). Results: There was a significant positive correlation between the CAPS and PANSS global ratings (p < .001) and the HDRS and PANSS (p < .03) in the PTSD-P patients. Many CAPS and PANSS subscales also demonstrated significant intercorrelations; however, the CAPS-B subscale (reexperiencing) and the PANSS positive symptom scale were not correlated, suggesting that psychotic features may not necessarily be influenced or accounted for by more severe reexperiencing symptoms.
Fifteen (68%) of the PTSD-P patients had major depression (MDD). Both CAPS and PANSS ratings were significantly higher in the PTSD-P patients with comorbid MDD. Conclusions: As postulated, patients with more severe psychosis ratings are likely to have a more severe PTSD disease burden if psychotic features are present. This study further documents the occurrence of psychotic features in PTSD that are not necessarily due to a primary psychotic disorder, suggesting that this may be a distinct subtype; however, a significant interaction likely exists between PTSD, depression, and psychotic features. abstract_id: PUBMED:10074874 Psychotic symptoms in combat-related posttraumatic stress disorder. Background: Posttraumatic stress disorder (PTSD) is known often to be comorbid with other anxiety, mood, and substance use disorders. Psychotic symptoms have also been noted in PTSD and have been reported to be more common in Hispanic veterans. However, the occurrence of psychotic symptoms, including the degree to which they are accounted for by comorbid disorders, has received limited systematic investigation. Our study objectives were to assess psychotic symptoms according to DSM-III-R criteria in patients with a primary diagnosis of combat-related PTSD and determine the associations of those symptoms with psychiatric comorbidity and ethnicity. Method: Fifty-three male combat veterans consecutively admitted to a PTSD rehabilitation unit were assessed for psychotic symptoms and Axis I disorders. Ninety-one percent were Vietnam veterans; 72% were white, 17% were Hispanic, and 11% were black. Associations between psychotic symptoms and comorbid depression, substance use disorders, and minority status were compared by chi-square analyses; associations between psychotic symptoms and both PTSD and dissociative symptom severity were compared by t test analysis. Results: Forty percent of patients reported a psychotic symptom or symptoms in the preceding 6 months. These symptoms featured auditory hallucinations in all but 1 case. The psychotic symptoms typically reflected combat themes and guilt, were nonbizarre, and were not usually associated with formal thought disorder or flat or inappropriate affect. Psychotic symptoms were significantly associated with current major depression (p < .02), but not with alcohol or drug abuse or with self-rated PTSD and dissociation severity. Psychotic symptoms and current major depression were more common in minority (black and Hispanic) than white veterans (p < .002). Conclusion: Psychotic symptoms can be a feature of combat-related PTSD and appear to be associated with major depression. The association with minority status may be a function of comorbidity.
Thirty-six percent (n = 9) endorsed psychotic symptoms with associated comorbidity including major depressive episode, bipolar disorder, alcohol or polysubstance abuse, panic disorder, and phobias. All but one of the patients with psychotic features also met criteria for major depressive episode. None had a primary psychotic disorder diagnosis. There were no significant differences in total CAPS scores between patients with or without psychotic features (82.6 +/- 17.6 versus 75.3 +/- 22.4, p ns), nor for the different symptom cluster subscales. There were also no differences in the IES scores between groups (34.8 +/- 10 versus 32.6 +/- 10, p ns). This suggests that these psychotic features may not necessarily reflect severity of PTSD symptoms. PTSD may share a common diathesis with mood disorders including psychotic depression. Further study is needed of these phenomena. abstract_id: PUBMED:10853048 Psychotic symptoms and comorbid psychiatric disorders in Croatian combat-related posttraumatic stress disorder patients. Aim: To investigate the prevalence rate of psychiatric disorders comorbid with post-traumatic stress disorder (PTSD) and to explore psychotic symptoms in patients with combat-related current PTSD. Method: The sample included Croatian war veterans (N=41) who were hospitalized at the University Department of Psychiatry of the Vrapče Psychiatric Hospital during the 1995-1996 period and fulfilled the DSM-IV criteria for current and chronic PTSD. The Schedule for Affective Disorder and Schizophrenia (SADS-L) was applied for the assessment of current and lifetime psychiatric disorders. Only three subjects had a prewar Axis I psychiatric disorder. One third of the patients met the criteria for personality disorder. Results: After severe combat trauma, the majority of PTSD patients (33/41) had at least one comorbid psychiatric diagnosis on Axis I. In those with personality disorders the most frequent was alcohol dependence, whereas in those without personality disorders it was major depressive disorder. Psychotic symptoms occurred in 8 out of 41 PTSD patients. None of them had a primary psychotic disorder or a personality disorder. In all the patients, psychotic symptoms were different from flashbacks. They were symbolically related to the trauma and resistant to antipsychotic treatment. Psychotic symptoms were associated with depression in 5 out of 8 patients with psychotic symptoms. Conclusion: Severe and prolonged combat trauma may be followed by the co-occurrence of PTSD and psychotic symptoms, forming the atypical clinical picture of PTSD. abstract_id: PUBMED:30781888 A Review of Epigenetics of PTSD in Comorbid Psychiatric Conditions. Post-traumatic stress disorder (PTSD) is an acquired psychiatric disorder with functionally impairing physiological and psychological symptoms following a traumatic exposure. Genetic, epigenetic, and environmental factors act together to determine both an individual's susceptibility to PTSD and its clinical phenotype. In this literature review, we briefly review the candidate genes that have been implicated in the development and severity of the PTSD phenotype. We discuss the importance of the epigenetic regulation of these candidate genes. We review the general epigenetic mechanisms that are currently understood, with examples of each in the PTSD phenotype. Our focus then turns to studies that have examined PTSD in the context of comorbid psychiatric disorders or associated social and behavioral stressors.
We examine the epigenetic variation in cases or models of PTSD with comorbid depressive disorders, anxiety disorders, psychotic disorders, and substance use disorders. We reviewed the literature that has explored epigenetic regulation in PTSD in adverse childhood experiences and suicide phenotypes. Finally, we review some of the information available from studies of the transgenerational transmission of epigenetic variation in maternal cases of PTSD. We discuss areas pertinent for future study to further elucidate the complex interactions between epigenetic modifications and this complex psychiatric disorder. abstract_id: PUBMED:19764926 The relationship between childhood trauma history and the psychotic subtype of major depression. Objective: Increasing evidence exists linking childhood trauma and primary psychotic disorders, but there is little research on patients with primary affective disorders with psychotic features. Method: The sample consisted of adult out-patients diagnosed with major depressive disorder (MDD) at clinic intake using a structured clinical interview. Patients with MDD with (n = 32) vs. without psychotic features (n = 591) were compared as to their rates of different types of childhood trauma. Results: Psychotic MDD patients were significantly more likely to report histories of physical (OR = 2.81) or sexual abuse (OR = 2.75) compared with non-psychotic MDD patients. These relationships remained after controlling for baseline differences. Within the subsample with comorbid post-traumatic stress disorder, patients with psychotic MDD were significantly more likely to report childhood physical abuse (OR = 3.20). Conclusion: Results support and extend previous research by demonstrating that the relationship between childhood trauma and psychosis is found across diagnostic groups. abstract_id: PUBMED:21899984 Understanding the relationship between co-occurring PTSD and MDD: symptom severity and affect. How to best understand theoretically the nature of the relationship between co-occurring PTSD and MDD (PTSD+MDD) is unclear. In a sample of 173 individuals with chronic PTSD, we examined whether the data were more consistent with current co-occurring MDD as a separate construct or as a marker of posttraumatic stress severity, and whether the relationship between PTSD and MDD is a function of shared symptom clusters and affect components. Results showed that the more severe depressive symptoms found in PTSD+MDD as compared to PTSD remained after controlling for PTSD symptom severity. Additionally, depressive symptom severity significantly predicted co-occurring MDD even when controlling for PTSD severity. In comparison to PTSD, PTSD+MDD had elevated dysphoria and re-experiencing - but not avoidance and hyperarousal - PTSD symptom cluster scores, higher levels of negative affect, and lower levels of positive affect. These findings provide support for PTSD and MDD as two distinct constructs with overlapping distress components. abstract_id: PUBMED:10362439 Psychotic subtyping of major depressive disorder and posttraumatic stress disorder. Background: Many studies have established that a large percentage of patients with posttraumatic stress disorder (PTSD) have comorbid major depressive disorder. Other studies have found that patients with PTSD or a history of childhood trauma have an increased rate of psychotic symptoms. 
In the present report from the Rhode Island Methods to Improve Diagnosis and Services project, we examine whether an association exists between psychotic subtyping of major depressive disorder and PTSD. Method: Five hundred psychiatric outpatients were interviewed with the Structured Clinical Interview for DSM-IV. Results: Almost half of the 500 patients had nonbipolar major depressive disorder (N = 235, 47.0%), 45 (19.1%) of whom had PTSD. Nineteen patients had psychotic depression, 216 had nonpsychotic depression. Compared with patients with nonpsychotic depression, the patients with psychotic depression were nearly 4 times more likely to have PTSD (57.9% vs. 15.7%, Fisher exact test, p = .0001). Conclusion: The results of the present study suggest that the presence of psychosis in psychiatric outpatients with major depressive disorder is associated with concurrent PTSD. It is hypothesized that the poorer longitudinal course of psychotic versus nonpsychotic depression may be due to the underrecognition of PTSD in psychotically depressed patients. abstract_id: PUBMED:9707300 Plasma dopamine beta-hydroxylase activity in psychotic and non-psychotic post-traumatic stress disorder. Recognition and treatment of comorbid chronic psychotic symptoms in post-traumatic stress disorder (PTSD) has become of increasing clinical interest. Altered dopamine beta-hydroxylase (DBH) activity has been reported in mood disorders. Plasma DBH is reduced in major depression with psychosis and elevated in bipolar disorder with psychosis compared with their respective non-psychotic diagnostic groups. DBH is likely a trait marker with interindividual variations secondary to genetic polymorphism. We therefore evaluated DBH activity in PTSD patients with and without psychotic features and compared these groups with age- and gender-matched control subjects. Vietnam combat veterans with PTSD (n = 19) (including patients with and without psychotic features) and normal control subjects (n = 22) had plasma DBH enzyme activity assayed photometrically. DBH was significantly higher in patients with PTSD with psychotic features than in patients without psychotic features (80.6 +/- 13.4 vs. 42.1 +/- 7.3 mM/min, P < 0.01) and was also higher than normal control subjects (46.12 +/- 4.93, P < 0.01). Plasma DBH activity may differentiate psychotic and non-psychotic subtypes of PTSD. The observed changes are, interestingly, opposite to those seen in psychotic depression but comparable to psychotic bipolar disorder. Since DBH is a genetic marker, this may reflect individual vulnerabilities to develop psychosis in the context of trauma. Answer: Yes, comorbid posttraumatic stress disorder (PTSD) does affect the severity and course of psychotic major depressive disorder (MDD). Research indicates that the presence of PTSD in patients with psychotic MDD tends to result in greater severity and impairment compared to those with either disorder alone. A retrospective case-control study found that patients with both psychotic MDD and PTSD showed greater severity and impairment in their history of illness than those with either psychotic MDD without PTSD or nonpsychotic MDD with PTSD. Additionally, the group with comorbid psychotic MDD and PTSD had an earlier onset of depression, which contributed to a poorer history of illness (PUBMED:20021993).
Studies have also shown that psychotic symptoms may be present in a significant proportion of patients with PTSD, and these symptoms are associated with a more severe overall PTSD disease burden. In combat veterans with PTSD, the severity of psychotic symptoms reflected the severity of PTSD symptoms, and patients with more severe psychosis ratings were likely to have more severe PTSD if psychotic features were present (PUBMED:10202572). Furthermore, psychotic symptoms in PTSD patients have been significantly associated with current major depression, and these symptoms are not necessarily due to a primary psychotic disorder, suggesting the possibility of a distinct subtype of PTSD with psychotic features (PUBMED:10074874). The occurrence of psychotic symptoms in PTSD that are not necessarily due to a primary psychotic disorder indicates that there may be a significant interaction between PTSD, depression, and psychotic features (PUBMED:9250439). In addition, severe and prolonged combat trauma may lead to the co-occurrence of PTSD and psychotic symptoms, forming an atypical clinical picture of PTSD. These psychotic symptoms are often resistant to antipsychotic treatment and are associated with depression in a majority of patients (PUBMED:10853048). The relationship between childhood trauma history and the psychotic subtype of major depression has also been explored, with findings indicating that psychotic MDD patients are more likely to report histories of physical or sexual abuse compared with non-psychotic MDD patients, which supports the relationship between childhood trauma and psychosis across diagnostic groups (PUBMED:19764926). Overall, the presence of comorbid PTSD in patients with psychotic MDD is associated with a more severe and complex clinical presentation, highlighting the need for tailored pharmacologic and psychotherapeutic approaches for these patients (PUBMED:20021993).
Instruction: Rendezvous technique for cholecystocholedochal lithiasis in octogenarians: is it as effective as in younger patients, or should endoscopic sphincterotomy followed by laparoscopic cholecystectomy be preferred? Abstracts: abstract_id: PUBMED:24229423 Rendezvous technique for cholecystocholedochal lithiasis in octogenarians: is it as effective as in younger patients, or should endoscopic sphincterotomy followed by laparoscopic cholecystectomy be preferred? Background: The treatment of cholecystocholedochal lithiasis (CCL) requires cholecystectomy and common bile duct (CBD) clearance, which can be achieved surgically or with a combination of surgery and endoscopy. The latter includes a two-stage approach, i.e., preoperative retrograde cholangiography (ERC) and sphincterotomy (ST) followed by delayed laparoscopic cholecystectomy (LC) or vice versa, or a one-stage approach, the rendezvous technique (RVT), where ERC, ST, and LC are performed during the same procedure. No data on the use of RVT in octogenarians have been reported in the literature so far. The study aims to show whether the RVT is as effective in the elderly as in younger patients. Moreover, results of RVT are compared with those of a two-stage sequential treatment (TSST) in octogenarians, to identify the best approach to such a population. Subjects And Methods: Prospectively collected data of 131 consecutive patients undergoing RVT for biliary tract stone disease were retrospectively analyzed. Two analyses were performed: (1) results of RVT (operative time, conversion rate, CBD clearance, morbidity/mortality, hospital stay, costs, and need for further endoscopy) were compared between octogenarians and younger patients, and (2) results of RVT in the elderly were compared with those of 27 octogenarians undergoing TSST for CCL. Results: Octogenarians undergoing RVT were in poorer general condition (P<.0001) and had a higher conversion rate (P<.0001) and a longer hospital stay (P<.007) than younger patients. No differences in the rates of CBD clearance, surgery-related morbidity, mortality, and costs were recorded. Although octogenarians undergoing RVT were in poorer general condition than those undergoing TSST, the results of the two approaches were similar. Conclusions: RVT in the elderly seems to be as cost-effective as in younger patients; nevertheless, it may lead to a higher conversion rate and longer hospital stay. In octogenarians, RVT is not inferior to TSST in the treatment of CCL even for patients in poor condition. abstract_id: PUBMED:29641848 Laparoscopic-endoscopic rendezvous versus preoperative endoscopic sphincterotomy in people undergoing laparoscopic cholecystectomy for stones in the gallbladder and bile duct. Background: The management of gallbladder stones (lithiasis) concomitant with bile duct stones is controversial. The more frequent approach is a two-stage procedure, with endoscopic sphincterotomy and stone removal from the bile duct followed by laparoscopic cholecystectomy. The laparoscopic-endoscopic rendezvous combines the two techniques in a single-stage operation. Objectives: To compare the benefits and harms of endoscopic sphincterotomy and stone removal followed by laparoscopic cholecystectomy (the single-stage rendezvous technique) versus preoperative endoscopic sphincterotomy followed by laparoscopic cholecystectomy (two stages) in people with gallbladder and common bile duct stones.
Search Methods: We searched The Cochrane Hepato-Biliary Group Controlled Trials Register, CENTRAL, MEDLINE Ovid, Embase Ovid, Science Citation Index Expanded (Web of Science), and two trials registers (February 2017). Selection Criteria: We included randomised clinical trials that enrolled people with concomitant gallbladder and common bile duct stones, regardless of clinical status or diagnostic work-up, and compared laparoscopic-endoscopic rendezvous versus preoperative endoscopic sphincterotomy procedures in people undergoing laparoscopic cholecystectomy. We excluded other endoscopic or surgical methods of intraoperative clearance of the bile duct, e.g. non-aided intraoperative endoscopic retrograde cholangiopancreatography or laparoscopic choledocholithotomy (surgical incision of the common bile duct for removal of bile duct stones). Data Collection And Analysis: We used standard methodological procedures recommended by Cochrane. Main Results: We included five randomised clinical trials with 517 participants (257 underwent a laparoscopic-endoscopic rendezvous technique versus 260 underwent a sequential approach), which fulfilled our inclusion criteria and provided data for analysis. Trial participants were scheduled for laparoscopic cholecystectomy because of suspected cholecysto-choledocholithiasis. Male/female ratio was 0.7; age of men and women ranged from 21 years to 87 years. The run-in and follow-up periods of the trials ranged from 32 months to 84 months. Overall, the five trials were judged at high risk of bias. Although all trials measured mortality, there was just one death reported in one trial, in the laparoscopic-endoscopic rendezvous group (low-quality evidence). The overall morbidity (surgical morbidity plus general morbidity) may be lower with laparoscopic rendezvous (RR 0.59, 95% CI 0.29 to 1.20; participants = 434, trials = 4; I² = 28%; low-quality evidence); the effect was a little more certain when a fixed-effect model was used (RR 0.56, 95% CI 0.32 to 0.99). There was insufficient evidence to determine the effects of the two approaches on the failure of primary clearance of the bile duct (RR 0.55, 95% CI 0.22 to 1.38; participants = 517; trials = 5; I² = 58%; very low-quality evidence). The effects of either approach on clinical post-operative pancreatitis were unclear (RR 0.29, 95% CI 0.07 to 1.12; participants = 517, trials = 5; I² = 24%; low-quality evidence). Hospital stay appeared to be lower in the laparoscopic-endoscopic rendezvous group by about three days (95% CI 3.51 to 2.50 days shorter; 515 participants in five trials; low-quality evidence). There was very low-quality evidence that suggested longer operative time with laparoscopic-endoscopic rendezvous (MD 34.07 minutes, 95% CI 11.41 to 56.74; participants = 313; trials = 3; I² = 93%). The Trial Sequential Analyses of operating time and the length of hospital stay indicated that all the trials crossed the conventional boundaries, suggesting that the sample sizes were adequate, with a low risk of random error. Authors' Conclusions: There was insufficient evidence to determine the effects of the laparoscopic-endoscopic rendezvous versus preoperative endoscopic sphincterotomy techniques in people undergoing laparoscopic cholecystectomy on mortality and morbidity. The laparoscopic-endoscopic rendezvous procedure may lead to longer operating times, but it may reduce the length of the hospital stay when compared with preoperative endoscopic sphincterotomy followed by laparoscopic cholecystectomy.
However, no firm conclusions could be drawn because the quality of evidence was low or very low. If confirmed by future trials, these data might re-design the scenario of treatment of this condition, albeit requiring greater organisational effort. Future trials should also address issues such as quality of life and cost analysis. abstract_id: PUBMED:29291778 Surgery in biliary lithiasis: from the traditional "open" approach to laparoscopy and the "rendezvous" technique. Background: According to the current literature, biliary lithiasis is a worldwide-diffused condition that affects almost 20% of the general population. The rate of common bile duct stones (CBDS) in patients with symptomatic cholelithiasis is estimated to be 10% to 33%, depending on the patient's age. Compared to stones in the gallbladder, the natural history of secondary CBDS is still not completely understood. It is not clear whether an asymptomatic choledocholithiasis requires treatment or not. For many years, open cholecystectomy with choledochotomy and/or surgical sphincterotomy and cleaning of the bile duct were the gold standard to treat both pathologies. Development of both endoscopic retrograde cholangiopancreatography (ERCP) and laparoscopic surgery, together with improvements in diagnostic procedures, influenced new approaches to the management of CBDS in association with gallstones. Data Sources: We decided to systematically review the literature in order to identify all the current therapeutic options for CBDS. A systematic literature search was performed independently by two authors using PubMed, EMBASE, Scopus and the Cochrane Library Central. Results: The therapeutic approach nowadays varies greatly according to the availability of experience and expertise in each center, and includes open or laparoscopic common bile duct exploration, various combinations of laparoscopic cholecystectomy and ERCP and combined laparoendoscopic rendezvous. Conclusions: Although ERCP followed by laparoscopic cholecystectomy is currently preferred in the majority of hospitals worldwide, the optimal treatment for concomitant gallstones and CBDS is still under debate, and greatly varies among different centers. abstract_id: PUBMED:8763563 "500 consecutive cases of laparoscopic cholecystectomy". Argument for the association with endoscopic sphincterotomy: analysis. Our series of 500 consecutive laparoscopic cholecystectomies has drawn attention to several factors. Results would favor endoscopic sphincterotomy in cases with associated treatment of gall stones in the main bile duct. History taking should search for a past history of laparoscopic surgery, especially in men with extensive pilosity; work-up should include ultrasonography, liver function tests and intravenous cholangiography (in all patients except cases of allergy); it is important to use identical equipment in a given hospital facility for identical procedures in order to avoid equipment-related conversions; an interesting alternative in emergency situations would be echo-guided transcutaneous transparietal cholecystotomy, which allows time for safe opacification; safety is of prime importance and rapid conversion should be made when there is any doubt, especially concerning the main duct; morbidity and mortality in this series were nearly identical to those of previously reported large series. Of the endoscopic sphincterotomies proposed as complementary therapy for cases with associated lithiasis in the main bile duct, 2/3 were performed peroperatively and 1/3 postoperatively.
Considering all sphincterotomies, 2/3 were positive with extraction of a stone and demonstration of an enlarged bile duct evidencing recent migration (no failure or iatrogenic event). The relationship between the different elements should allow rapid indications in emergency situations and identify complications immediately (mean hospitalization less than 48 hours) or later. Finally, first-line laparoscopic cholecystectomy can be proposed for patients with signs of biliary distress with lithiasis despite other, sometimes contradictory, conclusions (ANDEM, CPAM, consensus conference). First-line laparoscopic cholecystectomy should, in the future, eliminate most of the major biliary-pancreatic abdominal syndromes. abstract_id: PUBMED:35318553 Destiny for Rendezvous: Is Cholecysto/Choledocholithiasis Better Treated with Dual- or Single-Step Procedures? Biliary lithiasis is common worldwide, affecting almost 20% of the general population, though few experience symptoms. The frequency of choledocholithiasis in patients with symptomatic cholelithiasis is estimated to be 10-33%, depending on patients' age. Unlike gallbladder lithiasis, the medical and surgical treatment of common bile duct stones is uncertain, having changed over the last few years. The prior gold standard treatment for cholelithiasis and choledocholithiasis was open cholecystectomy with bile duct clearance, choledochotomy, and/or surgical sphincterotomy. In the last 10-15 years, new treatment approaches to the complex pathology of choledocholithiasis have emerged with the advent of endoscopic retrograde cholangiopancreatography (ERCP), laparoscopic surgery, and advanced diagnostic procedures. Although ERCP followed by laparoscopic cholecystectomy is the preferred mode of management, a single-step strategy (laparo-endoscopic rendezvous) has gained acceptance due to lesser morbidity and a lower risk of iatrogenic damage. Given the above, a tailored approach relying on careful evaluation of the disease is necessary in order to minimize complication risks and overall costs. Yet, the debate remains open, with no consensus on the superiority of laparo-endoscopic rendezvous to more conventional approaches. abstract_id: PUBMED:10449842 Management of common bile duct stones in a single operation combining laparoscopic cholecystectomy and perioperative endoscopic sphincterotomy. Background: Laparoscopic cholecystectomy (LC) has become the reference treatment for biliary lithiasis, but the management strategy for common bile duct stones (CBDS) remains a subject of controversy in the absence of an established consensus. While conventional surgery remains the reference treatment for CBDS, minimally invasive techniques are becoming more and more popular. These methods consist of the extraction of the common bile duct stones either exclusively by laparoscopy or by sequential treatment with endoscopic sphincterotomy (ES) followed by LC. The aim of this study was to evaluate the treatment of CBDS in a one-stage operation by laparoscopic cholecystectomy (LC) and perioperative endoscopic sphincterotomy. Patients And Methods: Between January 1994 and March 1998, 44 patients, 20 male and 24 female (sex ratio 1.2), with a median age of 57 years (range 28-84 years) were treated for suspected or confirmed CBDS. The CBDS were uncomplicated in 39 cases (88%) and associated with a complication in 5 cases (12%), namely, cholangitis (2 cases) or acute pancreatitis (3 cases).
The perioperative ES was performed immediately after the LC during the same operative time, with perioperative cholangiography being systematically performed (1 failure). In 6 cases, a transcystic drain was left in place (to ensure complete evacuation of the CBDS postoperatively) when there were more than three stones and/or when they were larger than 6 mm. The patient was positioned in the left lateral position in order to perform the ES. Results: Mean operative time for LC was 60 min, range 40-90 min. The general anesthesia was prolonged by 40 min in order to perform an ES (range 30-60 min). The perioperative ES was unsuccessful in one case (2%), due to the impossibility of catheterizing the papilla, the preoperative MR cholangiogram being normal. Immediate clearance of the CBD was achieved in 95% of the cases (42 patients). In 2 cases, a residual stone was found on the sixth day after cholangiography and was spontaneously evacuated as shown by 21st-day control. There were no deaths or postoperative complications. The duration of the postoperative hospitalization was 4.6 days (range 3-6). Conclusions: We believe that LC combined with perioperative ES is a quick, reliable, and safe technique for the treatment of CBDS during a single operative procedure, although this approach is limited by the proximity and availability of an endoscopic team. abstract_id: PUBMED:9588045 Treatment of common bile duct lithiasis: first-line endoscopic sphincterotomy and celioscopic cholecystectomy. The aim of this study was to assess retrograde cholangiogram findings and first-line endoscopic sphincterotomy followed by laparoscopic cholecystectomy for the treatment of main bile duct lithiasis. Clinical, biological and echographic criteria predictive of main bile duct lithiasis were observed in 125 patients (32 men, 93 women, mean age 44.2 years) who underwent retrograde cholangiography. Results suggested lithiasis of the main bile duct in 105 cases (87.5%) and were confirmed at endoscopic sphincterotomy in 99. There were no deaths; four complications occurred (3 moderate cases of pancreatitis, 1 cholecystitis). Conversion was required in 11.6%, usually because of difficulties in dissecting. No residual lithiasis was observed. Mean duration of hospitalization was 11.4 days. This sequential treatment scheme for main bile duct lithiasis appears to be effective, minimally invasive and safe. abstract_id: PUBMED:8161153 Treatment of lithiasis of the common bile duct by endoscopic sphincterotomy and laparoscopic cholecystectomy. This paper evaluates the treatment of common bile duct stones by endoscopic sphincterotomy (SE) and laparoscopic cholecystectomy (CL). 733 patients presenting with symptomatic cholelithiasis were operated on between March 1990 and April 1993; 131 (18%) of them had a preoperative suspicion of common bile duct stones (LVBP): jaundice for 41, biliary acute pancreatitis for 27 and altered liver function tests for 63. 131 retrograde cholangiographies (CPRE) were attempted with an associated SE (113 cases) in the presence of LVBP, biliary pancreatitis, enlargement of the common bile duct and appearance of forced papilla. CL was performed 24 to 48 hours later. CPRE +/- SE had no mortality; 1 patient presented a retroduodenal perforation of the CBD, requiring surgery. 58 cases (44.2%) of LVBP were diagnosed, without a statistically significant difference according to the clinical pattern. In the group with altered liver function tests, only alkaline phosphatase was significantly predictive of LVBP.
There was no mortality or morbidity related to CL; the conversion rate was 9.8%; 4 of 12 cases of conversion were related to persistence of stones in the common bile duct, without any possibility of laparoscopic extraction. Mean hospital stay was 7.4 days. Efficacy of this sequential method of treatment of LVBP was 91.3%: this method seems satisfactory, not dangerous and minimally invasive, and should be indicated for preoperatively suspected common bile duct stones. abstract_id: PUBMED:9854199 Laparoscopic cholecystectomy and lithiasis of the common bile duct: prospective study on the importance of preoperative endoscopic ultrasonography and endoscopic retrograde cholangiography Objectives: Laparoscopic cholecystectomy is the standard treatment of symptomatic gallstones. At present, no consensus has been reached on the diagnostic and therapeutic methods of concomitant common bile duct stones. Systematic preoperative endoscopic ultrasonography followed, if necessary, by endoscopic retrograde cholangiography and sphincterotomy during the same anesthetic procedure could be a diagnostic and therapeutic alternative for common bile duct stones, making possible a laparoscopic cholecystectomy without intraoperative investigation of the common bile duct. Methods: One hundred and twenty-five patients underwent a prospective endoscopic ultrasonographic evaluation prior to laparoscopic cholecystectomy for symptomatic gallstones. Forty-four patients (35%) had at least one predictive factor for common bile duct stones. Endoscopic ultrasonography and cholecystectomy were performed on the same day. Endoscopic ultrasonography was followed by endoscopic retrograde cholangiography and sphincterotomy by the same endoscopist in case of common bile duct stones on endoscopic ultrasonography. Patients were routinely followed up between 3 and 6 months and one year after cholecystectomy. Results: Endoscopic ultrasonography suggested common bile duct stones in 21 patients (17%). Endoscopic ultrasonography identified a stone in 17 of 44 patients (38.6%) with a predictor of common bile duct stones and in only 4 of 81 patients (4.9%) without a predictor of common bile duct stones. Among these 21 patients, one patient was not investigated with endoscopic retrograde cholangiography because of the high risk of sphincterotomy, 19 patients had a stone removed after sphincterotomy, and one patient had no visible stone either on endoscopic retrograde cholangiography or on exploration of the common bile duct after sphincterotomy. Endoscopic ultrasonography was normal in 104 patients (83%). However, two patients in this group were investigated with endoscopic retrograde cholangiography, because endoscopic ultrasonography was incomplete in one case and because, in the second case, endoscopic ultrasonography was normal but a stone in the left hepatic duct was detected by ultrasonography. A stone was removed after endoscopic sphincterotomy in these two patients. In the group of 102 patients without stones, 91 out of 92 continued to be asymptomatic during a median follow-up of 8.5 months. One patient with symptoms one month after cholecystectomy underwent endoscopic sphincterotomy but no stone was found. Conclusions: Systematic preoperative endoscopic ultrasonography followed, if necessary, by endoscopic retrograde cholangiography and sphincterotomy is a diagnostic and therapeutic alternative for common bile duct stones, making possible a laparoscopic cholecystectomy without intraoperative investigation of the common bile duct for all patients.
This alternative is only justifiable in patients with predictor of common bile duct stones. abstract_id: PUBMED:8525440 Current management of common bile duct stones in northern Italy. In order to obtain a complete picture of the current management of cholecystocholedocal lithiasis in northern Italy we've conducted the present survey. In the years 1992-1993, among 7,861 cholecystectomies, 665 patients with gallbladder and common bile duct stones were treated in 49 surgical departments. Some 271 (43%) were treated by traditional methods: open surgery or endoscopic sphincterotomy followed by laparotomic cholecystectomy; 313 patients (49%) were treated by endoscopic sphincterotomy followed by laparoscopic cholecystectomy and only 38 (6%) were treated by one-stage laparoscopy, either by a transcystic approach (27) or by laparoscopic choledochotomy (11). Morbidity and mortality were not significantly different in the different groups. We conclude that open surgery and sequential minimal invasive treatment are the standard approaches to cholecystocholedochal lithiasis in this first stage of the laparoscopic era. The laparoscopic management of common bile duct stones is at present confined to surgical departments specially devoted to laparoscopic surgery. Answer: The rendezvous technique (RVT) for cholecystocholedochal lithiasis in octogenarians appears to be as cost-effective as in younger patients, but it may lead to a higher conversion rate and longer hospital stay. Despite octogenarians undergoing RVT being in poorer general condition than those undergoing a two-stage sequential treatment (TSST), which includes preoperative endoscopic sphincterotomy followed by laparoscopic cholecystectomy, the results of the two approaches were similar in this age group. Therefore, RVT is not considered inferior to TSST for the treatment of cholecystocholedochal lithiasis in octogenarians, even for patients in poor condition (PUBMED:24229423). However, the management of gallbladder stones concomitant with bile duct stones is controversial. While the more frequent approach is a two-stage procedure with endoscopic sphincterotomy and stone removal from the bile duct followed by laparoscopic cholecystectomy, the laparoscopic-endoscopic rendezvous combines the two techniques in a single-stage operation. The laparoscopic-endoscopic rendezvous may lead to longer operating times but could reduce the length of the hospital stay when compared with preoperative endoscopic sphincterotomy followed by laparoscopic cholecystectomy. Nevertheless, the quality of evidence is low or very low, and no firm conclusions could be drawn (PUBMED:29641848). In summary, for octogenarians with cholecystocholedochal lithiasis, the RVT can be as effective as in younger patients and is not inferior to TSST. The choice between RVT and endoscopic sphincterotomy followed by laparoscopic cholecystectomy may depend on the specific circumstances of the patient, including their general condition and the expertise available at the treating center.
Instruction: Diabetic foot ulcers in Cameroon: can microflora prevalence inform probabilistic antibiotic treatment? Abstracts: abstract_id: PUBMED:17001946 Diabetic foot ulcers in Cameroon: can microflora prevalence inform probabilistic antibiotic treatment? Objective: To determine the clinical features, regularly associated microorganisms and their susceptibility to antibiotics, and the clinical outcomes of foot ulcers in patients with diabetes at the Yaoundé Central Hospital, Cameroon. Method: A retrospective analysis of routinely collected hospital data, and data validation by survey of clinical notes was conducted from November 1999 to October 2002 for adult diabetic patients with foot ulcers. Clinical data were recorded for each patient, followed by a record of microbiological investigations where available. Results: Of 503 patients with diabetes admitted during the study period, 54 (10.7%) had foot ulcers. Male subject represented 66.7% of this population. The mean age of the study population was 59.66 +/- 1.52 years. The foot ulcer led to the diagnosis of diabetes in six patients in whom the condition was previously unidentified. Of the 54 patients with foot ulcers, nine (16.7%) were selected for surgery and the remaining 45 were managed conservatively. Microbiological investigations were available for 21 patients. Proteus mirabilis was the most frequent microorganism yielded, and was regularly associated with Staphylococcus aureus. All the microorganisms isolated showed high sensitivity to second-generation quinolone antibiotics and were regularly sensitive to aminoglycoside antibiotics. Nine (16.7%) patients died and seven (13%) were discharged at their own request. Conclusion: The mortality rate among our diabetic patients with foot ulcers is high and the combination of second-generation quinolone and aminoglycoside antibiotics can be proposed as a probabilistic antibiotic approach to treating foot infection. abstract_id: PUBMED:26998033 Clinico-microbiological study and antibiotic resistance profile of mecA and ESBL gene prevalence in patients with diabetic foot infections. Diabetic foot infections (DFIs) constitute a major complication of diabetes mellitus. DFIs contribute to the development of gangrene and non-traumatic lower extremity amputations with a lifetime risk of up to 25%. The aim of the present study was to identify the presence of neuropathy and determine the ulcer grade, microbial profile and phenotypic and genotypic prevalence of the methicillin-resistance gene mecA and extended spectrum β-lactamase (ESBL)-encoding genes in bacterial isolates of DFI in patients registered at the Pakistan Institute of Medical Sciences (Islamabad, Pakistan). The results indicated that 46/50 patients (92%), exhibited sensory neuropathy. The most common isolate was Staphylococcus aureus (25%), followed by Pseudomonas aeruginosa (P. aeruginosa; 18.18%), Escherichia coli (16.16%), Streptococcus species (spp.) (15.15%), Proteus spp. (15.15%), Enterococcus spp. (9%) and Klebsiella pneumoniae (K. pneumoniae; 3%). The prevalence of the mecA gene was found to be 88% phenotypically and 84% genotypically. K. pneumoniae was shown to have the highest percentage of ESBL producers with a prevalence of 66.7% by double disk synergy test, and 100% by the cefotaxime + clavulanic acid/ceftazidime + clavulanic acid combination disk test. P. aeruginosa and K. pneumoniae had the highest (100%) proportion of metallo β-lactamase producers as identified by the EDTA combination disk test. 
The overall prevalence of β-lactamase (bla)-CTX-M, bla-CTX-M-15, bla-TEM, bla-OXA and bla-SHV genes was found to be 76.9, 76.9, 75.0, 57.7 and 84.6%, respectively, in gram-negative DFI isolates. The prevalence of mecA and ESBL-related genes was found to be alarmingly high in DFIs, since these genes are a major cause of antibiotic treatment failure. abstract_id: PUBMED:33295248 Local Antibiotic Delivery Systems in the Surgical Treatment of Diabetic Foot Osteomyelitis: Again, No Benefit? This retrospective study aimed to compare the outcomes and healing parameters of 3 groups of surgical treatment with and without local antibiotic administration in diabetic foot osteomyelitis (DFO). Overall, 25 patients with DFO who met the criteria were included in the study. Surgical debridement was used with systemic antibiotic administration alone (group A; n = 8) or combined with local application of antibiotic-loaded polymethylmethacrylate beads (group B; n = 9) or antibiotic-loaded hydroxyapatite and calcium sulfate beads (group C; n = 8). In total, 87.5% of patients in group A, 100% in group B, and 87.5% in group C healed (P = .543). Median time to healing was 17 weeks in group A, 18 weeks in group B, and 19 weeks in group C (P = .094). One patient (12.5%) in group A was amputated. The DFO recurrence rate was 12.5% in group A and 12.5% in group C (P = .543). Median hospitalization was 9 days in group A, 8 days in group B, and 9 days in group C (P = .081). In conclusion, adjunctive local antibiotic therapy was not shown to improve outcomes in surgically treated DFO. abstract_id: PUBMED:34779665 Evaluation of Adherence to the Oral Antibiotic Treatment in Patients With Diabetic Foot Infection. Introduction: Knowledge about the level of adherence to oral antibiotic treatment in diabetic patients with ulcer infection could be essential as a method of evaluating and monitoring conservative treatment. Aim: To assess adherence to oral antibiotic treatment in outpatients with diabetic foot infection (soft tissue vs. osteomyelitis) using an 8-item structured, self-reported medication adherence scale. Methods: A cross-sectional study was carried out with 46 consecutive patients who had diabetic foot infection (soft tissue or bone infection) and required oral antibiotic treatment in an outpatient clinical setting. Medication adherence was tested using the Spanish version of the validated eight-item self-report MMAS-8. Results: Patients with diabetic ulcer infection had a good level of adherence to antibiotic medication (7 ± 1.2 vs. 7.4 ± 1.5). Patients with a lower level of adherence had a lower level of satisfaction with the antibiotic medication. The patients with a lower level of adherence were typically those with a primary level of education and those who required more help to take their medication. Conclusion: Patients with diabetic foot infection demonstrated a good level of adherence to antibiotic medication, independently of the type of infection (soft tissue vs. osteomyelitis), as measured by the 8-item structured, self-reported medication adherence scale. abstract_id: PUBMED:37107136 The Epidemiology of Antibiotic-Related Adverse Events in the Treatment of Diabetic Foot Infections: A Narrative Review of the Literature. The use of antibiotics for the treatment of diabetic foot infections (DFIs) over an extended period of time has been shown to be associated with adverse events (AEs), whereas interactions with concomitant patient medications must also be considered.
The objective of this narrative review was to summarize the most frequent and most severe AEs reported in prospective trials and observational studies at the global level in DFI. Gastrointestinal intolerances were the most frequent AEs, from 5% to 22% among all therapies; this was more common when prolonged antibiotic administration was combined with oral beta-lactams or clindamycin or a higher dose of tetracyclines. The proportion of symptomatic colitis due to Clostridium difficile was variable depending on the antibiotic used (0.5% to 8%). Noteworthy serious AEs included hepatotoxicity due to beta-lactams (5% to 17%) or quinolones (3%); cytopenias related to linezolid (5%) and beta-lactams (6%); nausea under rifampicin; and renal failure under cotrimoxazole. Skin rash occurred rarely and was commonly associated with the use of penicillins or cotrimoxazole. AEs from prolonged antibiotic use in patients with DFI are costly in terms of longer hospitalization or additional monitoring care and can trigger additional investigations. The best way to prevent AEs is to keep the duration of antibiotic treatment short and the dose at the lowest clinically necessary level. abstract_id: PUBMED:30129109 Remission in diabetic foot infections: Duration of antibiotic therapy and other possible associated factors. Aim: To determine the most appropriate duration of antibiotic therapy for diabetic foot infections (DFIs). Methods: Using a clinical pathway for adult patients with DFIs (retrospective cohort analysis), we created a cluster-controlled Cox regression model to assess factors related to remission of infection, emphasizing antibiotic-related variables. We excluded total amputations as a result of DFI and DFI episodes with a follow-up time of <2 months. Results: Among 1018 DFI episodes in 482 patients, we identified 392 episodes of osteomyelitis, 626 soft tissue infections, 246 large abscesses, 322 episodes of cellulitis and 335 episodes of necrosis; 313 cases involved revascularization. Patients underwent surgical debridement for 824 episodes (81%), of which 596 (59%) required amputation. The median total duration of antibiotic therapy was 20 days. After a median follow-up of 3 years, 251 of the episodes (24.7%) were followed by ≥1 additional episode(s). Comparing patients with and without additional episodes, risk of recurrence was lower in those who underwent amputation, had type 1 diabetes, or underwent revascularization. On multivariate analysis including the entire study population, risk of remission was inversely associated with type 1 diabetes (hazard ratio [HR] 0.3, 95% confidence interval [CI] 0.2-0.6). Neither duration of antibiotic therapy nor parenteral treatment affected risk of recurrence (HR 1.0, 95% CI 0.99-1.01 for both). Similarly, neither >3 weeks versus <3 weeks of therapy, nor >1 week versus <1 week of intravenous treatment, affected recurrence. In stratified analyses for soft tissue DFIs and osteomyelitis separately, we did not observe associations of antibiotic duration with microbiological or clinical recurrences of DFI. The HRs were 1.0 (95% CI 0.6-1.8) for an antibiotic duration >3 weeks overall and 0.6 (95% CI 0.2-1.3) for osteomyelitis cases only. Plotting of duration of antibiotic therapy failed to identify any optimal threshold for preventing recurrences. Conclusions: Our analysis found no threshold for the optimal duration or route of administration of antibiotic therapy to prevent recurrences of DFI.
These limited data might support shorter treatment durations for patients with DFI. abstract_id: PUBMED:26452233 Diabetic foot infections: Current treatment and delaying the 'post-antibiotic era'. Background: Treatment for diabetic foot infections requires properly diagnosing infection, obtaining an appropriate specimen for culture, assessing for any needed surgical procedures and selecting an empiric antibiotic regimen. Therapy will often need to be modified based on results of culture and sensitivity testing. Because of excessive and inappropriate use of antibiotics for treating diabetic foot infections, resistance of the causative bacteria to the usually employed agents has been increasing to alarming levels. Review: This article reviews recommendations from evidence-based guidelines, informed by results of systematic reviews, on treating diabetic foot infections. Data from the pre-antibiotic era reported rates of mortality of about 9% and of high-level leg amputations of about 70%. Outcomes have greatly improved with appropriate antibiotic therapy. While there are now many oral and parenteral antibiotic agents that have demonstrated efficacy in treating diabetic foot infections, the rate of infection with multidrug-resistant pathogens is growing. This problem requires a multi-focal approach, including providing education to both clinicians and patients, developing robust antimicrobial stewardship programmes and using new diagnostic and therapeutic technologies. Recently, new methods have been developed to find novel antibiotic agents and to resurrect old treatments, like bacteriophages, for treating these difficult infections. Conclusion: Medical and political leaders have recognized the serious global threat posed by the growing problem of antibiotic resistance. By a multipronged approach that includes exerting administrative pressure on clinicians to do the right thing, investing in new technologies and encouraging the profitable development of new antimicrobials, we may be able to stave off the coming 'post-antibiotic era'. abstract_id: PUBMED:36810078 Application of antibiotic bone cement in the treatment of infected diabetic foot ulcers in type 2 diabetes. Background: In this study, we investigated the effect of antibiotic bone cement in patients with infected diabetic foot ulcers (DFU). Methods: This is a retrospective study including fifty-two patients with infected DFU who had undergone treatment between June 2019 and May 2021. Patients were divided into a polymethylmethacrylate (PMMA) group and a control group. 22 patients in the PMMA group received antibiotic bone cement and regular wound debridement, and 30 patients in the control group received regular wound debridement. Clinical outcomes included the rate of wound healing, duration of healing, duration of wound preparation, rate of amputation, and frequency of debridement procedures. Results: In the PMMA group, twenty-two patients (100%) had complete wound healing. In the control group, twenty-eight patients (93.3%) had wound healing. Compared with the control group, the PMMA group had fewer debridement procedures and a shorter duration of wound healing (35.32 ± 3.77 days vs 44.37 ± 7.44 days, P < 0.001). The PMMA group had five minor amputations, while the control group had eight minor amputations and two major amputations. Regarding the rate of limb salvage, there was no limb loss in the PMMA group and two limb losses in the control group. Conclusion: The application of antibiotic bone cement is an effective solution for infected DFU treatment.
It can effectively decrease the frequency of debridement procedures and shorten the healing duration in patients with infected DFU. abstract_id: PUBMED:37849506 Empirical Antibiotic Therapy in Diabetic Foot Ulcer Infection Increases Hospitalization. Background: We evaluated the outcomes associated with initial antibiotic management strategies for infected diabetic foot ulcers (DFUs) diagnosed in an outpatient multidisciplinary center. Methods: Consecutive outpatient individuals with infected DFUs, stratified according to Infectious Diseases Society of America infection severity, were followed for 1 year from the initial antibiotic administration to treat acute infection. The main outcomes were hospitalization rates for a diabetes-related foot complication within 30 days of diagnosis and requiring an amputation or death during follow-up. Outcomes were analyzed by regression analysis, accounting for demographics, clinical characteristics, and antibiotic therapy. Results: Among 147 outpatients with infected DFUs, 116 were included. Infections were categorized as mild (68%), moderate (26%), and severe (6%). Empirical antibiotics (not culture-guided) were prescribed as initial treatment in 39 individuals, while 77 received culture-based antibiotics. There were no differences in demographic or clinical characteristics between the antibiotic administration groups, except for a higher body mass index and prevalence of chronic kidney disease in the empirical cohort. Forty-two infected DFU patients required hospitalization within 30 days of diagnosis for the same reason. The relative risk of hospitalization was 1.87 in those with mild infections treated with empirical antibiotics compared with culture-directed antibiotics. There were no differences in amputations and/or death at 1 year follow-up. Conclusions: These data support obtaining tissue culture to guide antibiotic therapy, regardless of DFU infection severity, to decrease hospitalizations. abstract_id: PUBMED:37107047 Timing of Revascularization and Parenteral Antibiotic Treatment Associated with Therapeutic Failures in Ischemic Diabetic Foot Infections. For ischemic diabetic foot infections (DFIs), revascularization ideally occurs before surgery, while a parenteral antibiotic treatment could be more efficacious than oral agents. In our tertiary center, we investigated the effects of the sequence between revascularization and surgery (emphasizing the perioperative period of 2 weeks before and after surgery), and the influence of administering parenteral antibiotic therapy on the outcomes of DFIs. Among 838 ischemic DFIs with moderate-to-severe symptomatic peripheral arterial disease, we revascularized 608 (72%; 562 angioplasties, 62 vascular surgeries) and surgically debrided all. The median length of postsurgical antibiotic therapy was 21 days (given parenterally for the initial 7 days). The median time delay between revascularization and debridement surgery was 7 days. During the long-term follow-up, treatment failed and required reoperation in 182 DFI episodes (30%). By multivariate Cox regression analyses, neither a delay between surgery and angioplasty (hazard ratio 1.0, 95% confidence interval 1.0-1.0), nor the postsurgical sequence of angioplasty (HR 0.9, 95% CI 0.5-1.8), nor long-duration parenteral antibiotic therapy (HR 1.0, 95% CI 0.9-1.1) prevented failures. Our results might indicate the feasibility of a more practical approach to ischemic DFIs in terms of timing of vascularization and more oral antibiotic use.
Answer: The study conducted at the Yaoundé Central Hospital in Cameroon on diabetic foot ulcers (DFUs) suggests that the prevalence of microflora can indeed inform probabilistic antibiotic treatment. The retrospective analysis of adult diabetic patients with foot ulcers revealed that Proteus mirabilis was the most frequent microorganism isolated, often associated with Staphylococcus aureus. All microorganisms isolated showed high sensitivity to second-generation quinolone antibiotics and were regularly sensitive to aminoglycoside antibiotics. Based on these findings, the study concluded that a combination of second-generation quinolone and aminoglycoside antibiotics could be proposed as a probabilistic approach to treating foot infections in this patient population (PUBMED:17001946). This approach aligns with the understanding that appropriate antibiotic therapy is crucial for treating DFUs, as they can lead to severe complications such as gangrene and non-traumatic lower extremity amputations. The high sensitivity of the isolated microorganisms to specific antibiotics in the Cameroonian study population provides a basis for selecting empirical antibiotic therapy that is likely to be effective against the commonly encountered pathogens in that region. However, it is important to note that antibiotic resistance is a growing concern globally, and the prevalence of antibiotic-resistant genes, such as mecA and ESBL, is alarmingly high in DFIs (PUBMED:26998033). This highlights the need for careful consideration of local resistance patterns and the potential for resistance development when choosing empirical antibiotic treatments. Moreover, while local antibiotic delivery systems have been explored as an adjunct to surgical treatment for diabetic foot osteomyelitis (DFO), studies have not shown a significant benefit in terms of outcomes or healing parameters (PUBMED:33295248). Therefore, systemic antibiotic therapy, guided by microbial prevalence and sensitivity patterns, remains a critical component of the management strategy for DFUs. In summary, the microflora prevalence in Cameroon can inform probabilistic antibiotic treatment for DFUs, but ongoing surveillance of resistance patterns and judicious use of antibiotics are essential to optimize treatment outcomes and mitigate the risk of antibiotic resistance.
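To make the idea of probabilistic (empirical) antibiotic selection concrete, the expected coverage of a candidate regimen can be estimated by combining the local isolate distribution with each organism's susceptibility to that regimen. The sketch below illustrates the calculation in Python; the prevalence figures loosely follow the isolate distribution reported in PUBMED:26998033 (rounded), while the susceptibility values for the hypothetical regimen are illustrative assumptions only, not data from the abstracts.

```python
# Sketch: expected empirical ("probabilistic") coverage of a candidate antibiotic
# regimen, combining local isolate prevalence with per-organism susceptibility.
# Prevalence loosely follows the isolate distribution in PUBMED:26998033 (rounded);
# the susceptibility values for the hypothetical regimen are illustrative only.

isolate_prevalence = {            # fraction of all isolates in the local series
    "Staphylococcus aureus": 0.25,
    "Pseudomonas aeruginosa": 0.18,
    "Escherichia coli": 0.16,
    "Streptococcus spp.": 0.15,
    "Proteus spp.": 0.15,
    "Other": 0.11,
}

regimen_susceptibility = {        # fraction of each organism susceptible (assumed)
    "Staphylococcus aureus": 0.90,
    "Pseudomonas aeruginosa": 0.85,
    "Escherichia coli": 0.80,
    "Streptococcus spp.": 0.95,
    "Proteus spp.": 0.95,
    "Other": 0.60,
}

def expected_coverage(prevalence, susceptibility):
    """Probability that a randomly drawn isolate is susceptible to the regimen."""
    return sum(p * susceptibility.get(organism, 0.0)
               for organism, p in prevalence.items())

if __name__ == "__main__":
    coverage = expected_coverage(isolate_prevalence, regimen_susceptibility)
    print(f"Expected empirical coverage: {coverage:.1%}")
```

In practice, regimen choice also weighs infection severity, prior cultures, local antibiograms and guideline recommendations; the calculation above only formalizes the coverage component.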
Instruction: Is LabTutor a helpful component of the blended learning approach to biosciences? Abstracts: abstract_id: PUBMED:27118191 Is LabTutor a helpful component of the blended learning approach to biosciences? Aims And Objectives: To evaluate the use of LabTutor (a physiological data capture and e-learning package) in bioscience education for student nurses. Background: Knowledge of biosciences is important for nurses the world over, who have to monitor and assess their patients' clinical condition, and interpret that information to determine the most appropriate course of action. Nursing students have long been known to find acquiring useable bioscience knowledge challenging. Blended learning strategies are common in bioscience teaching to address the difficulties students have. Student nurses have a preference for hands-on learning, small group sessions and are helped by close juxtaposition of theory and practice. Design: An evaluation of a new teaching method using an in-classroom voluntary questionnaire. Methods: A structured survey instrument including statements with a visual analogue response format and open questions was given to students who participated in LabTutor sessions. The students provided feedback about the equipment, the learning and the session itself. Results: First year (n = 93) and third year (n = 36) students completed the evaluation forms. The majority of students were confident about the equipment and using it to learn although a few felt anxious about computer-based learning. They all found the equipment helpful as part of their bioscience education and they all enjoyed the sessions. Conclusion: This equipment provides a helpful way to encourage guided independent learning through practice and discovery, because each session is case study based and the relationship of the data to the patient is made clear. Our students helped to evaluate our initial use of LabTutor and found the sessions enjoyable and helpful. LabTutor provides an effective learning tool as part of a blended learning strategy for biosciences teaching. Relevance To Clinical Practice: Improving bioscience knowledge will lead to a greater understanding of pathophysiology, treatments and interventions and monitoring. abstract_id: PUBMED:38259560 The impact of self-directed learning experience and course experience on learning satisfaction of university students in blended learning environments: the mediating role of deep and surface learning approach. Introduction: With the rapid development of technology and the evolution of educational ideas, the blended learning model has become the new norm in higher education. Therefore, based on Biggs' learning process theory, this study aims to investigate the relationships between learning experience, learning approaches, and learning satisfaction of university students within the Chinese blended learning context to explore the dynamic process and mechanism of blended learning. Methods: The Chinese modified versions of the Self-Rating Scale of Self-Directed Learning, the Course Experience Questionnaire, and the Revised Study Process Questionnaire were administered to 939 Chinese university first-year students (444 men, 495 women). The data were analyzed by using the covariance-based structural equation modeling (CB-SEM) technique.
Results: The results demonstrated that, among Chinese university students, there were significant correlations between the self-directed learning experience, the course experience, the deep learning approach, the surface learning approach, and learning satisfaction. Additionally, the learning approaches mediated the association between the self-directed learning experience and learning satisfaction and between the course experience and learning satisfaction. Conclusion: This study provides insight into the facilitative effect of university students' self-directed learning experience and course experience on their learning satisfaction and how this effect is triggered through the mediating role of different learning approaches within the blended learning context. This study shows the learning behaviors and psychology in a blended learning environment, thus revealing the new learning characteristics of university students by integrating the self-learning characteristics of blended learning into the framework of learning process theory. The findings contribute to assisting blended learning providers in delivering targeted interventions to enhance students' learning satisfaction. abstract_id: PUBMED:36407816 Teaching Histology Using Self-Directed Learning Modules (SDLMs) in a Blended Approach. Introduction: New technologies like virtual microscopy have revolutionized histology education. However, first-year students often require additional assistance with virtual slides. Online self-directed learning modules (SDLMs) were developed to provide such support to learners by offering them short instructional videos that are uploaded to YouTube and the instructional website. The purpose of this study was to determine the effectiveness of SDLMs and to sample students' opinions about SDLMs. Method: Over a 3-year time span, SDLMs were used to augment histology lessons, and their effectiveness (on learning outcomes) was measured by using traditional steeplechase and/or virtual slide assessments. Average percentage scores for both methods of assessment were compared using paired or independent t-tests. Student opinions about SDLMs were collected using an anonymous survey. The survey results were analyzed by average scores and thematic analysis of the narrative responses. Results: Using SDLMs in a blended approach showed significant improvement in students' academic performance - irrespective of the method of assessment. There was a strong positive correlation with the performance when students were assessed using the virtual slide method. However, a standalone approach using SDLMs did not positively impact learning outcomes. Survey results indicated that most students perceived the videos as helpful for understanding the subject better and as quick review opportunities. Conclusion: The results support the use of SDLMs in a blended instructional approach and as an adjunct resource to conventional microscopy. This use of SDLMs was positively received by learners and significantly improved the learning outcome. Supplementary Information: The online version contains supplementary material available at 10.1007/s40670-022-01669-9. abstract_id: PUBMED:22737553 The study of effectiveness of blended learning approach for medical training courses. Background: Blended learning is a method of learning that includes face-to-face learning, pure e-learning and didactic learning. This study aims to investigate the efficacy of medical education by this approach.
Methods: This interventional study was performed in 130 students at different clinical levels participating in class sessions on "congenital adrenal hyperplasia and ambiguous genitalia". Sampling was done gradually during 6 months and all of them filled a pretest questionnaire and received an educational compact disk. One week later, an in-person class session was held using a question-and-answer, problem-solving method. Two to four weeks later, they filled a posttest questionnaire. Results: There was a significant correlation between pretest and posttest scores and the posttest scores were significantly higher than the pretest ones. Subspecialty residents had the most positive attitude towards the blended learning approach and the students had the least positive attitude. There was a significant correlation between participants' access to a computer and their attitude towards, and satisfaction with, the blended learning approach. Conclusion: Findings generally showed that blended learning was an effective approach for producing deep learning of academic subjects. abstract_id: PUBMED:33850633 An Adaptive Blended Learning Approach in the Implementation of Medical Neuroscience Laboratory Activities. Background: The COVID-19 pandemic revealed existing gaps in the medical educational system that is heavily dependent on the presence of medical students and teachers in laboratory and class for instruction. This affects continuity in the implementation of the neuroanatomy component of the medical neuroscience laboratory activities during COVID-19. We hypothesized that pivoting wet laboratory neuroanatomy activities to online using an adaptive flexible blended method might represent an effective approach in the implementation of laboratory neuroanatomy activities during a pandemic. Methods: The current study describes an adaptive flexible blended learning approach that systematically mixes virtual face-to-face interaction activities with the online learning of brain structures, and the discussion of clinical cases. Learning materials are delivered through both synchronous and asynchronous modes, and Year 1 medical students learn neuroanatomy laboratory activities at different locations and different times. Student performances in the adaptive flexible blended learning approach were compared with the learning of similar activities during an in-person implementation of neuroscience laboratory activities. Results: Using this adaptive flexible blended learning approach provided an autonomous, independent self-study approach that broadened the distribution of student performance, with more students scoring between 80 and 89%, whereas in-person learning resulted in most students scoring > 90% in the medical neuroscience laboratory activities. Conclusion: An adaptive flexible blended learning approach that combined virtual face-to-face instruction using digital technology with online learning of neuroscience laboratory activities provided a unique educational experience for Year 1 medical students to learn neuroscience laboratory activities during the COVID-19 pandemic. abstract_id: PUBMED:34129432 Teaching Outbreak Investigations with an Interactive Blended Learning Approach. Public health is a central but often neglected component of veterinary education. German veterinary public health (VPH) education includes substantial theory-focused lectures, but practical case studies are often missing.
To change this, we combined the advantages of case-based teaching and blended learning to teach these topics in a more practical and interactive way. Blended learning describes the combination of online and classroom-based teaching. With it, we created an interdisciplinary module for outbreak investigations and zoonoses, based on the epidemiology, food safety, and microbiology disciplines. We implemented this module within the veterinary curriculum of the seventh semester (in the clinical phase of the studies). In this study, we investigated the acceptance of this interdisciplinary approach and established a framework for the creation of interactive outbreak investigation cases that can serve as a basis for further cases. Over a period of 3 years, we created three interactive online cases and one interactive in-class case and observed the student-reported evaluation of the blended learning concept and self-assessed learning outcomes. Results show that 80% (75-89) of students evaluated the chosen combination of case-based and blended learning for interdisciplinary teaching positively and therefore accepted it well. Additionally, 76% (70-98) of students evaluated their self-assessed learning outcomes positively. Our results suggest that teaching VPH through interdisciplinary cases in a blended learning approach can increase the quality of teaching VPH topics. Moreover, it provides a framework to incorporate realistic interdisciplinary VPH cases into the curriculum. abstract_id: PUBMED:35187254 A Blended Approach to Learning in an Obstetrics and Gynecology Residency Program: Proof of Concept. Problem: Graduate medical education programs are expected to educate residents to be able to manage critically ill patients. Most obstetrics and gynecology (OB/GYN) graduate medical education programs provide education primarily in a didactic format in a traditional face-to-face setting. Busy clinical responsibilities tend to limit resident engagement during these educational sessions. The revision of the training paradigm to a more learner-centered approach is suggested. Intervention: A blended learning education program was designed and implemented to facilitate the teaching and learning of obstetric emergencies, specifically diabetic ketoacidosis and acute-onset severe hypertension in pregnancy. The program incorporated tools to foster a community of inquiry. Multimedia presentations were also utilized as the main modality to provide instruction. The blended learning course was designed in accordance with the cognitive theory of multimedia learning. Context: This intervention was carried out in the Department of Obstetrics and Gynecology, Southern Illinois University. All 15 OB/GYN residents were enrolled in this course as part of their educational curriculum. First, face-to-face instructions were given in detail about the blended learning process, course content, and online website. The residents were then assigned tasks related to completing the online component of the course, including watching multimedia presentations, reading the resources placed online, and participating in online asynchronous discussions. The course culminated with a face-to-face session to clarify misconceptions. Pre- and postcourse quizzes were administered to the residents to assess their retention and understanding. Outcome: Objective analysis demonstrated significant improvements in retention and understanding after participating in the course. The blended learning format was well received by the residents. 
Resident perceptions of social presence in the asynchronous online discussions showed low scores for peer-to-peer interaction. The multimedia presentations and the availability of learning resources were well received. Lessons Learned: Outcomes of this study suggest that blended learning is a viable tool to support teaching and learning of obstetric emergencies in an OB/GYN residency program. abstract_id: PUBMED:34875867 Blended Learning With Virtual Pediatric Emergency Patients for Medical Students. Treating critically ill children is a major challenge for learners. Medical students often feel inadequately prepared for their later role as physicians. This article describes the implementation and evaluation of blended learning using virtual patients (VP) during the student rotation at the pediatric emergency department of Inselspital Bern. Students rated the project as highly beneficial and recommended its integration into the entire clinical curriculum. abstract_id: PUBMED:36647363 Exploring dimensions of blended learning readiness: Validation of scale and assessing blended learning readiness in the context of TVET Bangladesh. In collaboration with Technical and Vocational Education and Training educators and academic librarians, this study attempts to develop and validate a blended learning readiness scale for Bangladeshi TVET students. This study also attempts to investigate the reliability of this validated scale by measuring six blended learning readiness dimensions. In this research, the Content Validity Index, Exploratory Factor Analysis, and Confirmatory Factor Analysis were used to establish the construct validity of the blended learning readiness scale. Questionnaires were circulated to the students of six Institutes of Marine Technology (N = 235) who participated in blended learning for an entire semester. This study's result suggests a strong positive association between Bangladeshi polytechnic students' preparedness for blended learning and attitudes toward online learning, openness to new technology, and attitudes toward the face-to-face classroom. In contrast, significant negative correlations were found between blended learning readiness and basic skills in using technology, learning flexibility, and study management. This study also implied that gender and previous academic achievement are not strong predictors of blended learning readiness in the Bangladeshi Technical and Vocational Education and Training context. This blended learning readiness scale would help course designers, educators, librarians, and policymakers of Bangladesh to improve the quality of the blended learning environment by addressing the students' concerns about various blended learning components.
Also included is a brief introduction to simulation-based education, along with theoretical and real-world examples of how simulation may be integrated into a blended learning curriculum. Examples of blended learning in Neonatal-Perinatal Medicine, specifically within the Neonatal Resuscitation Program, procedural skills training, and the National Neonatology Curriculum, are reviewed. Answer: Yes, LabTutor is considered a helpful component of the blended learning approach to biosciences. According to a study evaluating its use in bioscience education for student nurses, LabTutor, which is a physiological data capture and e-learning package, was found to be beneficial. The majority of both first-year and third-year nursing students who participated in LabTutor sessions reported feeling confident about the equipment and using it to learn. They found the equipment helpful as part of their bioscience education and enjoyed the sessions. The study concluded that LabTutor encourages guided independent learning through practice and discovery, with each session being case study based and clearly relating the data to the patient. This approach aligns with the preferences of student nurses for hands-on learning and small group sessions, and it helps bridge the gap between theory and practice. The use of LabTutor was seen as an effective learning tool within a blended learning strategy for teaching biosciences, which is important for nurses to understand pathophysiology, treatments, interventions, and monitoring (PUBMED:27118191).
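Several of the evaluations summarized above (for example, the SDLM histology study and the blended-learning course on congenital adrenal hyperplasia) compare pretest and posttest scores from the same learners using paired t-tests. A minimal sketch of that comparison is shown below, using only the Python standard library; the scores are hypothetical and are not taken from any of the studies.

```python
# Sketch: paired pretest/posttest comparison of the kind used in the evaluations
# above. The scores are hypothetical; only the Python standard library is used.
import math
import statistics

pre = [55, 60, 48, 70, 62, 58, 65, 50, 72, 61]    # pretest scores (%), one per learner
post = [68, 75, 60, 82, 70, 66, 78, 63, 85, 74]   # posttest scores (%), same learners

diffs = [b - a for a, b in zip(pre, post)]
n = len(diffs)
mean_gain = statistics.mean(diffs)
sd_gain = statistics.stdev(diffs)                  # sample SD of paired differences

t_stat = mean_gain / (sd_gain / math.sqrt(n))      # paired t statistic, df = n - 1
print(f"mean gain = {mean_gain:.1f} points, t({n - 1}) = {t_stat:.2f}")
# A p-value would come from the t distribution with n - 1 degrees of freedom,
# e.g. scipy.stats.ttest_rel(post, pre) if SciPy is available.
```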
Instruction: Does methadone maintenance treatment based on the new national guidelines work in a primary care setting? Abstracts: abstract_id: PUBMED:12939891 Does methadone maintenance treatment based on the new national guidelines work in a primary care setting? Background: General practitioners (GPs) are being encouraged to treat more drug users but there are few studies to demonstrate the effectiveness of primary care treatment. Aim: To determine whether patients retained on methadone maintenance treatment for one year in a modern British primary care setting, with prescribing protocols based on the new national guidelines, can achieve similar harm reduction outcomes to those demonstrated in other settings, using objective outcome measures where available. Design Of Study: Longitudinal cohort study. Setting: The Primary Care Clinic for Drug Dependence, Sheffield. Method: The intervention consisted of methadone maintenance treatment provided by GPs with prescribing protocols based on the 1999 national guidelines. The first 96 eligible consenting patients entering treatment were recruited; 65 completed the study. Outcome measures were current drug use, HIV risk-taking behaviour, social functioning, criminal activity, and mental and physical health, supplemented by urinalysis and criminal record data. Results: Frequency of heroin use was reduced from a mean of 3.02 episodes per day (standard deviation [SD] = 1.73) to a mean of 0.22 episodes per day (SD = 0.54) (χ² = 79.48, degrees of freedom [df] = 2, P < 0.001), confirmed by urinalysis. Mean numbers of convictions and cautions were reduced by 62% (z = 3.378, P < 0.001) for all crime. HIV risk-taking behaviour, social functioning, and physical and psychological wellbeing all showed significant improvements. Conclusion: Patients retained on methadone maintenance treatment for one year in a primary care setting can achieve improvements on a range of harm reduction outcomes similar to those shown by studies in other, often more highly structured programmes. abstract_id: PUBMED:8911590 A pilot study of primary-care-based buprenorphine maintenance for heroin dependence. The treatment of heroin dependence with opioid maintenance has traditionally employed methadone and more recently buprenorphine administered in traditional drug treatment settings. In this pilot study we evaluated buprenorphine maintenance for the treatment of heroin dependence in a program administered by primary-care providers in a primary-care setting. Seven patients were admitted to this nonblinded open-label pilot study and were offered 6 months of primary-care-based buprenorphine maintenance. Buprenorphine was administered in doses of 16 mg on Monday and Wednesday and 32 mg on Friday. Patients were seen weekly by primary-care providers and attended self-help meetings. Of the seven patients admitted to the study, five (71%) completed the 6-month pilot study and two (29%) were removed from the study. Urine toxicology data showed that the majority of urines tested were clear of opioids in four out of five patients who remained in treatment. These results suggest that primary-care-based opioid maintenance using buprenorphine shows promise as a new approach to the treatment of heroin dependence. abstract_id: PUBMED:36851865 Challenges of methadone maintenance treatment decentralisation from Vietnamese primary care providers' perspectives.
Introduction: Decentralising methadone maintenance treatment to primary care improves patients' access to care and their drug and HIV treatment outcomes. However, primary care providers (PCP), especially those working in limited-resource settings, face great challenges in providing quality methadone treatment. This study explores the challenges perceived by PCP providing methadone treatment at commune health centres in a mountainous region in Vietnam. Method: We conducted in-depth interviews with 26 PCP who worked as program managers, physicians, counsellors, pharmacists and medication dispensing staff at the methadone programs of eight commune health centres in Dien Bien, Vietnam, in November and December 2019. We used the health-care system framework in developing the interview guides and in summarising data themes. Results: Participants identified major challenges in providing methadone treatment in commune health centres at the individual, clinic and environmental levels. Individual-level challenges included a lack of confidence and motivation in providing methadone treatment. Clinic-level factors included inadequate human resources, lack of institutional support, insufficient technical support, lack of referral resources and additional support for patients. Environment-level factors comprised a lack of reasonable policies on financial support for providers at commune health centres for providing methadone treatment, lack of regulations and mechanisms to ensure providers' safety in case of potential violence by patients and to share responsibility for overdose during treatment. Discussion And Conclusion: PCP in Vietnam faced multi-level challenges in providing quality methadone treatment. Supportive policies and additional resources are needed to ensure the effectiveness of the decentralisation program. abstract_id: PUBMED:29090153 Multimorbidity in patients enrolled in a community-based methadone maintenance treatment programme delivered through primary care. Background: Multimorbidity, the co-existence of two or more (2+) long-term conditions in an individual, is common among problem drug abusers. Objective: To delineate the patterns, multimorbidity prevalence, and disease severity in patients enrolled in a community-based primary care methadone maintenance treatment (MMT) programme. Design: This was a retrospective cohort study (n=274). The comparator group consisted of mainstream primary care patients. Electronic medical record assessment was performed using the Cumulative Illness Rating Scale. Results: Prevalence of multimorbidity across 2+ domains was significantly higher within the MMT sample at 88.7% (243/274) than in the comparator sample at 51.8% (142/274), p<0.001. MMT patients were seven times more likely to have multimorbidity across 2+ domains compared with mainstream patients (OR 7.29, 95% confidence interval 4.68-11.34; p<0.001). Prevalence of multimorbidity was consistently high across all age groups in the MMT cohort (range 87.8-100%), while there was a positive correlation with age in the comparator cohort (r=0.29, p<0.001). Respiratory, psychiatric, and hepatic-pancreatic domains were the three most common domains with multimorbidity. Overall, MMT patients (mean±SD, 1.97±0.43) demonstrated significantly higher disease severity than mainstream patients (mean±SD, 1.18±0.78), p<0.001. Prevalence of moderate disease severity observed in the <45-year MMT age group was 50% higher than in the ≥45-year comparator age group.
Conclusions: Prevalence of multimorbidity and disease severity in MMT patients were greater than in the age- and sex-matched comparators. Patients with a history of drug abuse require co-ordinated care for the treatment of their addiction and for the management and prevention of chronic illnesses. Community-based programmes delivered through primary care help fulfil this need. abstract_id: PUBMED:9046446 Integrating primary care and methadone maintenance treatment: implementation issues. Linking primary medical care with methadone maintenance treatment brings critical services to drug users, many with HIV/AIDS, tuberculosis and other illnesses. However, a variety of important philosophical, ethical, and systems issues may impede the process of implementing a "linked" service delivery model. Conflicting paradigms, such as the traditional "doctor-patient" relationship with its emphasis on continuity of care and the substance abuse treatment model of limit-setting and behavioral consequences, create tension in the treatment system. This article describes these tensions and uses clinical vignettes to demonstrate how to address these implementation issues. In conclusion, solutions are proposed for successfully integrating services for medically ill substance abusers. abstract_id: PUBMED:34521420 Validity of self-reported substance use: research setting versus primary health care setting. Background: Self-reported substance use is more likely to be influenced by underreporting bias compared with biological markers. Underreporting bias, or the validity of self-reported substance use, depends on the study population and cannot be generalized to the entire population. This study aimed to compare the validity of self-reported substance use between a research setting and a primary health care setting from the same source population. Methods And Materials: The study population was drawn from the Rafsanjan Youth Cohort Study (RYCS) and from primary health care (PHC) centers. The RYCS sample comprised 607 participants, 113 (18.62%) women and 494 (81.38%) men, and the sample from PHC centers comprised 522 individuals, including 252 (48.28%) women and 270 (51.72%) men. We compared the two groups with respect to prevalence estimates based on self-reported substance use and urine testing. To evaluate the validity of self-reported substance use in both groups, the results of the reference standard (urine tests) were compared with self-reported drug use using measures of concordance. Results: The prevalence of substance use based on urine testing was significantly higher in both settings compared with self-reported substance use over the past 72 h. The sensitivity of self-reported substance use over the past 72 h in the research setting was 39.4%, 20%, 10% and zero for opium, methadone, cannabis and amphetamine, respectively, and in the primary health care setting was 50%, 20.7%, 12.5% and zero for opium, methadone, cannabis and amphetamine, respectively. The level of agreement between self-reported substance use over the past 72 h and the urine test indicated fair and moderate agreement for opium in the research and primary health care settings, respectively, and slight agreement for methadone and cannabis in both settings. There was no significant difference between the two groups in terms of self-reported substance use. For all substances, the level of agreement increased with longer recall periods. The specificity of self-report for all substances in both groups was more than 99%.
Conclusion: Individuals in the primary health care setting were more likely to self-report substance use than those in the research setting, but the setting did not have a statistically significant effect on self-reported substance use. Programs that rely on self-reported substance use may not estimate the exact prevalence of substance use in both research and primary health care settings, especially for substances that have a higher social stigma. Therefore, it is recommended that self-report and biological indicators be used for more accurate evaluation in substance use studies. It is also suggested that future epidemiological studies be performed to reduce social desirability bias and to find methods providing the highest level of privacy. abstract_id: PUBMED:9727815 A randomized trial of buprenorphine maintenance for heroin dependence in a primary care clinic for substance users versus a methadone clinic. Purpose: Buprenorphine is an alternative to methadone for the maintenance treatment of heroin dependence and may be effective on a thrice weekly basis. Our objective was to evaluate the effect of thrice weekly buprenorphine maintenance for the treatment of heroin dependence in a primary care clinic on retention in treatment and illicit opioid use. Subjects And Methods: Opioid-dependent patients were randomly assigned to receive thrice weekly buprenorphine maintenance in a primary care clinic that was affiliated with a drug treatment program (n = 23) or in a traditional drug treatment program (n = 23) in a 12-week clinical trial. Primary outcomes were retention in treatment and urine toxicology for opioids; secondary outcomes were opioid withdrawal symptoms and toxicology for cocaine. Results: Retention during the 12-week study was higher in the primary care setting (78%, 18 of 23) than in the drug treatment setting (52%, 12 of 23; P = 0.06). Patients admitted to primary care had lower rates of opioid use based on overall urine toxicology (63% versus 85%, P < 0.01) and were more likely to achieve 3 or more consecutive weeks of abstinence (43% versus 13%, P = 0.02). Cocaine use was similar in both settings. Conclusions: Buprenorphine maintenance is an effective treatment for heroin dependence in a primary care setting. abstract_id: PUBMED:26234389 Risk of mortality on and off methadone substitution treatment in primary care: a national cohort study. Aim: To assess whether risk of death increases during periods of treatment transition, and investigate the impact of supervised methadone consumption on drug-related and all-cause mortality. Design: National Irish cohort study. Setting: Primary care. Participants: A total of 6983 patients on a national methadone treatment register aged 16-65 years between 2004 and 2010. Measurement: Drug-related (primary outcome) and all-cause (secondary outcome) mortality rates and rate ratios for periods on and off treatment; and the impact of regular supervised methadone consumption. Results: Crude drug-related mortality rates were 0.24 per 100 person-years on treatment and 0.39 off treatment, adjusted mortality rate ratio 1.63 [95% confidence interval (CI) = 0.66-4.00]. Crude all-cause mortality rate per 100 person-years was 0.51 on treatment versus 1.57 off treatment, adjusted mortality rate ratio 3.64 (95% CI = 2.11-6.30). All-cause mortality off treatment was 6.36 (95% CI = 2.84-14.22) times higher in the first 2 weeks and 9.12 (95% CI = 3.17-26.28) times higher in weeks 3-4, compared with being 5 weeks or more in treatment.
All-cause mortality was lower in those with regular supervision (crude mortality rate 0.60 versus 0.81 per 100 person-years) although, after adjustment, insufficient evidence exists to suggest that regular supervision is protective (mortality rate ratio = 1.23, 95% CI = 0.67-2.27). Conclusions: Among primary care patients undergoing methadone treatment, continuing in methadone treatment is associated with a reduced risk of death. Patients' risk of all-cause mortality increases following treatment cessation, and is highest in the initial 4-week period. abstract_id: PUBMED:20403022 The effect of time spent in treatment and dropout status on rates of convictions, cautions and imprisonment over 5 years in a primary care-led methadone maintenance service. Background: Methadone maintenance treatment (MMT) in primary care settings is used increasingly as a standard method of delivering treatment for heroin users. It has been shown to reduce criminal activity and incarceration over periods of 12 months or less; however, little is known about the effect of this treatment over longer durations. Aims: To examine the association between treatment status and rates of convictions and cautions (judicial disposals) over a 5-year period in a cohort of heroin users treated in a general practitioner (GP)-led MMT service. Design: Cohort study. Setting: The primary care clinic for drug dependence, Sheffield, 1999-2005. Participants: The cohort comprised 108 consecutive patients who were eligible and entered treatment. Ninety were followed up for the full 5 years. Intervention: The intervention consisted of MMT provided by GPs in a primary care clinic setting. Measurements: Criminal conviction and caution rates and time spent in prison, derived from Police National Computer (PNC) criminal records. Findings: The overall reduction in the number of convictions and cautions expected for patients entering MMT in similar primary care settings is 10% for each 6 months retained in treatment. Patients in continuous treatment had the greatest reduction in judicial disposal rates, similar to those who were discharged for positive reasons (e.g. drug free). Patients who had more than one treatment episode over the observation period did no better than those who dropped out of treatment. Conclusions: MMT delivered in a primary care clinic setting is effective in reducing convictions and cautions and incarceration over an extended period. Continuous treatment is associated with the greatest reductions. abstract_id: PUBMED:12831383 A comparison of buprenorphine treatment in clinic and primary care settings: a randomised trial. Objective: To compare outcomes, costs and incremental cost-effectiveness of heroin detoxification performed in a specialist clinic and in general practice. Design And Setting: Randomised controlled trial set in a specialist outpatient drug treatment centre and six office-based general practices in inner city Sydney, Australia. Participants: 115 people seeking treatment for heroin dependence, of whom 97 (84%) were reinterviewed at Day 8, and 78 (68%) at Day 91. Interventions: Participants were randomly allocated to primary care or a specialist clinic, and received buprenorphine for 5 days for detoxification, then were offered either maintenance therapy with methadone or buprenorphine, relapse prevention with naltrexone, or counselling alone. Main Outcome Measures: Completion of detoxification, engagement in post-detoxification treatment, and heroin use assessed at Days 8 and 91.
Costs relevant to providing treatment, including staff time, medication use and diagnostic procedures, with abstinence from heroin use on Day 8 as the primary outcome measure. Results: There were no significant differences in the proportions completing detoxification (40/56 [71%] primary care v 46/59 [78%] clinic), participating in postwithdrawal treatment (28/56 [50%] primary care v 36/59 [61%] clinic), reporting no opiate use during the withdrawal period (13/56 [23%] primary care v 13/59 [22%] clinic), and in duration of postwithdrawal treatment by survival analysis. Most participants in both groups entered postwithdrawal buprenorphine maintenance. On an intention-to-treat basis, self-reported heroin use in the month before the Day 91 interview was significantly lower than at baseline (27 days/month at baseline, 14 days/month at Day 91; P < 0.001) and did not differ between groups. Buprenorphine detoxification in primary care was estimated to be $24 more expensive per patient than treatment at the clinic. The incremental cost-effectiveness ratio reveals that, in this context, it costs $20 to achieve a 1% improvement in outcome in primary care. Conclusions: Buprenorphine-assisted detoxification from heroin in specialist clinic and primary care settings had similar efficacy and cost-effectiveness. Buprenorphine treatment can be initiated safely in primary care settings by trained GPs. Answer: Yes, methadone maintenance treatment (MMT) based on the new national guidelines appears to work in a primary care setting. A longitudinal cohort study conducted in a British primary care setting with prescribing protocols based on the 1999 national guidelines found that patients retained on MMT for one year achieved significant reductions in heroin use, criminal activity, HIV risk-taking behavior, and improvements in social functioning and physical and psychological wellbeing (PUBMED:12939891). These outcomes were similar to those demonstrated in other, often more highly structured programs, indicating the effectiveness of MMT in primary care. Additionally, a national Irish cohort study found that continuing in MMT is associated with a reduced risk of death, with all-cause mortality increasing following treatment cessation, particularly in the initial 4-week period (PUBMED:26234389). Another study showed that MMT delivered in a primary care clinic setting is effective in reducing convictions, cautions, and incarceration over an extended period, with continuous treatment associated with the greatest reductions (PUBMED:20403022). Furthermore, a randomized trial comparing buprenorphine maintenance for heroin dependence in a primary care clinic versus a methadone clinic found that retention in treatment and reduction in illicit opioid use were higher in the primary care setting (PUBMED:9727815). This suggests that primary care-based opioid maintenance using buprenorphine, which is an alternative to methadone, also shows promise as an effective treatment approach. These findings collectively support the notion that MMT, when implemented according to national guidelines, can be successfully delivered in a primary care setting, achieving harm reduction outcomes comparable to more traditional treatment settings.
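The "$20 to achieve a 1% improvement" figure in the randomised trial above (PUBMED:12831383) is an incremental cost-effectiveness ratio: the extra cost per patient divided by the extra benefit. The short Python sketch below reconstructs that arithmetic from numbers reported in the abstract (the $24 incremental cost and the 13/56 versus 13/59 Day-8 abstinence proportions); it is an illustration of the calculation, not the authors' analysis code.

# Reconstruction of the incremental cost-effectiveness arithmetic reported in
# PUBMED:12831383. Inputs are taken from the abstract; this is illustrative only.
primary_care_abstinent = 13 / 56   # ~23% opiate-free at Day 8 in primary care
clinic_abstinent = 13 / 59         # ~22% opiate-free at Day 8 in the clinic
incremental_cost = 24.0            # extra cost per patient in primary care ($)

effect_difference = (primary_care_abstinent - clinic_abstinent) * 100  # percentage points
icer = incremental_cost / effect_difference
print(f"effect difference ≈ {effect_difference:.1f} percentage points")
print(f"ICER ≈ ${icer:.0f} per 1% improvement in Day-8 abstinence")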
Instruction: Voluntary reporting system in anaesthesia: is there a link between undesirable and critical events? Abstracts: abstract_id: PUBMED:11101704 Voluntary reporting system in anaesthesia: is there a link between undesirable and critical events? Background: Reporting systems in anaesthesia have generally focused on critical events (including death) to trigger investigations of latent and active errors. The decrease in the rate of these critical events calls for a broader definition of significant anaesthetic events, such as hypotension and bradycardia, to monitor anaesthetic care. The association between merely undesirable events and critical events has not been established and needs to be investigated by voluntary reporting systems. Objectives: To establish whether undesirable anaesthetic events are correlated with critical events in anaesthetic voluntary reporting systems. Methods: As part of a quality improvement project, a systematic reporting system was implemented for monitoring 32 events during elective surgery in our hospital in 1996. The events were classified according to severity (critical/undesirable) and nature (process/outcome) and control charts and logistic regression were used to analyse the data. Results: During a period of 30 months, 22% of the 6439 procedures were associated with anaesthetic events, 15% of which were critical and 31% process related. A strong association was found between critical outcome events and critical process events (OR 11.5 (95% confidence interval (CI) 4.4 to 27.8)), undesirable outcome events (OR 4.8 (95% CI 2.0 to 11.8)), and undesirable process events (OR 4.8 (95% CI 1.3 to 13.4)). For other classes of events, risk factors were related to the course of anaesthesia (duration, occurrence of other events) and included factors determined during the pre-anaesthetic visit (risk of haemorrhage, difficult intubation or allergic reaction). Conclusion: Undesirable events are associated with more severe events and with pre-anaesthetic risk factors. The way in which information on significant events can be used is discussed, including better use of preoperative information, reduction in the collection of redundant information, and more structured reporting. abstract_id: PUBMED:25684322 Anesthesia-related critical incidents in the perioperative period in children; a proposal for an anesthesia-related reporting system for critical incidents in children. Background: The incidence, type and severity of anesthesia-related critical incidents during the perioperative phase have been investigated less in children than in adults. Aim: The aim of the study was to identify and analyze anesthesia-related critical incidents in children to identify areas to improve current clinical practice, and to propose a specialized anesthesia-related critical incident registration for children. Method: All pediatric anesthesia-related critical incidents reported to a voluntary reporting system based on a 20-item complication list of the Dutch Society of Anesthesiology between January 2007 and August 2013 were analyzed. An anesthesia-related critical incident was defined as 'any incident that affected, or could have affected, the safety of the patient while under the care of an anesthetist'. As the 20-item complications list was too crude for detailed analyses, all critical incidents were reclassified into the more detailed German classification lists with the adjustment of specific items for children (in total 10 categories with 101 different subcategories).
Results: During the 6-year period, a total of 1214 critical incidents were reported out of 35 190 anesthetics (cardiac and noncardiac anesthesia cases). The most frequently reported incidents (46.5%) were related to the respiratory system. Infants <1 year, children with ASA physical status III and IV, and emergency procedures had a higher rate of adverse incidents. Conclusion: Respiratory events were the most commonly reported critical incidents in children. Both the existing Dutch and German lists of critical incident definitions appeared not to be sufficient for accurate classification in children. The present list can be used for a new registration system for critical incidents in pediatric anesthesia. abstract_id: PUBMED:17066996 A trigger tool to identify adverse events in the intensive care unit. Background: The Institute for Healthcare Improvement has tested and taught use of a variety of trigger tools, including those for adverse medication events, neonatal intensive care events, and a global trigger tool for measuring all event categories in a hospital. The trigger tools have evolved as a complementary adjunct to voluntary reporting. The Trigger Tool technique was used to identify the rate of occurrence of adverse events in the intensive care unit (ICU), and a subset of ICUs described those events in detail. Methods: Sixty-two ICUs in 54 hospitals (both academic and community) engaged in IHI critical care collaboratives between 2001 and late 2004. Charts were selected using a random sampling technique and reviewed using a two-stage process. Results: The prevalence of adverse events observed on 12,074 ICU admissions was 11.3 adverse events/100 patient days. For a subset of 1,294 charts from 13 ICUs which were reviewed in detail, 1,450 adverse events were identified, for a prevalence of 16.4 events/100 ICU days. Fifty-five percent of the charts in this subset contained at least one adverse event. Discussion: The Trigger Tool methodology is a practical approach to enhance detection of adverse events in ICU patients. Evaluation of these adverse events can be used to direct resource use for improvement work. The measurement of these sampled chart reviews can also be used to follow the impact of the change strategies on the occurrence of adverse events within a local ICU. abstract_id: PUBMED:22950987 Comparison of methods for the detection of medication safety events in the critically ill. Purpose: To categorize and synthesize medication safety event detection methods in the critically ill in order to provide clinicians and administrators with approaches to event detection that are intended to expand and complement traditional voluntary reporting systems. Methods: A literature search of OvidMEDLINE was performed to identify articles related to medication safety involving critically ill patients in the intensive care unit setting. The inclusion of articles was restricted to comparative studies. The bibliographies of all retrieved articles were reviewed to obtain additional articles of relevance. The various event detection methods were compared by: evidence supporting their use; number, type and severity of events detected; phase of the medication use process in which events were detected; and ease and cost of implementation. Major limitations of each method were also collated. Results: There are a number of methods that can be used to identify medication safety events in the critically ill.
These can broadly be categorized as: 1) voluntary reporting, 2) record review, 3) rules/triggers, 4) direct observation, and 5) interviews/surveys. Relatively few studies have directly compared these assessment methods in the ICU setting, although the limitations of the traditional voluntary reporting system as the sole method of event detection are well established. Although not truly dichotomous, these methods can be broken down into more proactive and reactive approaches. Rules/triggers and direct observation of the medication use process in the ICU are examples of proactive approaches to event detection, while traditional unsolicited voluntary reporting is typically reactive. However, each of the event detection methods has advantages and disadvantages, so the methods should not be considered mutually exclusive with respect to obtaining information about medication safety. Conclusions: Given the limitations of traditional voluntary reporting systems, a multimodal approach used to identify medication safety events is most likely to capture the largest number and type of events. We would advise not trying to implement additional approaches beyond voluntary reporting systems all at once. This would be difficult and costly. Rather, we suggest a systematic implementation of additional event detection approaches that takes into account hospital-specific considerations. abstract_id: PUBMED:21877221 The critical incident reporting system as an instrument of risk management for better patient safety. The probability that an inpatient will be harmed by a medical procedure is at least 3%. As a consequence, hospital risk management has become a central management task in the health care sector. The critical incident reporting system (CIRS) as a voluntary instrument for reporting (near) incidents plays a key role in the implementation of a risk management system. The goal of the CIRS is to register system errors without assigning guilt or meting out punishment while at the same time increasing the number of voluntary reports. abstract_id: PUBMED:27761112 Hospitalizations Due to Adverse Drug Events in the Elderly-A Retrospective Register Study. Adverse drug events (ADEs) are more likely to affect geriatric patients due to physiological changes occurring with aging. Even though this is an internationally recognized problem, similar research data in Finland are still lacking. The aim of this study was to determine the number of geriatric medication-related hospitalizations in the Finnish patient population and to discover the potential means of recognizing patients particularly at risk of ADEs. The study was conducted retrospectively from the 2014 emergency department patient records in Oulu University Hospital. A total number of 290 admissions were screened for ADEs, adverse drug reactions (ADRs) and drug-drug interactions (DDIs) by a multi-disciplinary research team. A customized Naranjo scale was used as a control method. All admissions were categorized into "probable," "possible," or "doubtful" by both assessment methods. In total, 23.1% of admissions were categorized as "probably" or "possibly" medication-related. Vertigo, falling, and fractures formed the largest group of ADEs. The most common ADEs were related to medicines from the N class of the ATC-code system. Age, sex, residence, or specialty did not increase the risk for medication-related admission significantly (min p = 0.077). Polypharmacy was, however, found to increase the risk (OR 3.3; 95% CI, 1.5-6.9; p = 0.01).
In conclusion, screening patients for specific demographics or symptoms would not significantly improve the recognition of ADEs. In addition, as ADE detection today is largely based on voluntary reporting systems and retrospective manual tracking of errors, it is evident that more effective methods for ADE detection are needed in the future. abstract_id: PUBMED:26479166 Adverse event surveillance in small animal anaesthesia: an intervention-based, voluntary reporting audit. Objective: To develop, test and refine an 'intervention-based' system for the surveillance of adverse events (AEs) during small animal anaesthesia. Study Design: Prospective, voluntary reporting audit. Animals: A total of 1386 consecutive small animal anaesthetics (including 972 dogs and 387 cats). Methods: Adverse events were defined as undesirable perianaesthetic events requiring remedial intervention to prevent or limit patient morbidity. Using previous reports, 11 common AEs were selected and 'intervention-based' definitions were devised. A voluntary reporting audit was performed over 1 year at a university teaching hospital. Data on AEs were collected via paper checkbox forms completed after each anaesthetic and were assimilated using an electronic database. Interventions were performed entirely at the discretion of the attending anaesthetist. Comparisons between dogs and cats were made using Fisher's exact tests. Results: Forms were completed for 1114 anaesthetics (a compliance of 80.4%), with 1001 AEs reported in 572 patients. The relative frequencies of AEs reported were as follows: arousal or breakthrough pain (14.9%), hypoventilation (13.5%), hypotension (10.3%), arrhythmias (5.8%), hyperthermia/hypothermia (5.0%), airway complications (4.8%), recovery excitation (4.6%), aspiration risk (4.5%), desaturation (2.8%), hypertension (1.7%) and 'other' (3.7%). Canine anaesthetics (57.3%) were more likely to involve AEs than were feline anaesthetics (35.5%, p < 0.01). Escalation in postanaesthetic care was required in 20% of cases where an AE was reported (8% of anaesthetics overall). In 6% of cases (2% overall), this involved management in an intensive care unit. There were six intra-anaesthetic fatalities (0.43%) during this period. The tool was widely accepted, being considered quick and easy to complete, but several semantic, logistical and personnel factors were encountered. Conclusions And Clinical Relevance: Simple intervention-based surveillance tools can be easily integrated into small animal anaesthetic practice, providing a valuable evidence base for anaesthetists. A number of considerations must be addressed to ensure compliance and the quality of data collected. abstract_id: PUBMED:22316142 Suboptimal reporting of adverse medical events to the FDA Adverse Events Reporting System by nurse practitioners and physician assistants. Objectives: The Adverse Events Reporting System (AERS) of the FDA is used to identify toxicities of drugs that are on the market. Nurse practitioners (NP) and physician assistants (PA), having an increasing role in the delivery of medical care, are also needed to participate in post-marketing pharmacovigilance. This study was performed to assess awareness and use of the AERS in voluntary reporting of drug toxicities by NPs and PAs. Methods: A cluster sample survey was issued at the Principles of Gastroenterology for the Nurse Practitioner and Physician Assistant course in August 2010.
The survey assessed familiarity with the AERS, the number of adverse events seen and the frequency of reports sent to the AERS. NP and PA responses were compared using the two-tailed Fisher's exact test. Results: Of the 92 respondents, 67 (72%) were NPs and 24 (26%) PAs. Of the 50 (54%) respondents who reported being familiar with the AERS system, 20 (40%) incorrectly identified the methods to report using the AERS. Overall reporting of adverse events was low, particularly in respondents seeing 5-12 adverse events per year. Conclusion: The study suggests that improved education regarding the importance of using the AERS for pharmacovigilance is needed for NPs and PAs. Due to the small size of the study, these data should be viewed as preliminary, pending a larger confirmatory study. abstract_id: PUBMED:24628436 Characterization of adverse events detected in a large health care delivery system using an enhanced global trigger tool over a five-year interval. Objective: To report 5 years of adverse events (AEs) identified using an enhanced Global Trigger Tool (GTT) in a large health care system. Study Setting: Records from monthly random samples of adults admitted to eight acute care hospitals from 2007 to 2011 with lengths of stay ≥3 days were reviewed. Study Design: We examined AE incidence overall and by presence on admission, severity, stemming from care provided versus omitted, preventability, and category; and the overlap with commonly used AE-detection systems. Data Collection: Professional nurse reviewers abstracted 9,017 records using the enhanced GTT, recording triggers and AEs. Medical record/account numbers were matched to identify overlapping voluntary reports or AHRQ Patient Safety Indicators (PSIs). Principal Findings: Estimated AE rates were as follows: 61.4 AEs/1,000 patient-days, 38.1 AEs/100 discharges, and 32.1 percent of patients with ≥1 AE. Of 1,300 present-on-admission AEs (37.9 percent of total), 78.5 percent showed NCC-MERP level F harm and 87.6 percent were "preventable/possibly preventable." Of 2,129 hospital-acquired AEs, 63.3 percent had level E harm, 70.8 percent were "preventable/possibly preventable"; the most common category was "surgical/procedural" (40.5 percent). Voluntary reports and PSIs captured <5 percent of encounters with hospital-acquired AEs. Conclusions: AEs are common and potentially amenable to prevention. GTT-identified AEs are seldom caught by commonly used AE-detection systems. abstract_id: PUBMED:35172631 The use of sugammadex in critical events in anaesthesia: A retrospective review of the webAIRS database. Sugammadex has been used for more than ten years in Australia and New Zealand and has been implicated as an effective treatment, and in some cases a potential cause, of a critical incident. We aimed to identify and analyse critical incidents involving sugammadex reported to webAIRS, a de-identified voluntary online critical incident reporting system in Australia and New Zealand. We identified 116 incidents where the reporter implicated sugammadex as either a cause (23 cases) or a treatment (93 cases) during anaesthesia. There were 17 incidents suggestive of sugammadex anaphylaxis, although not all were confirmed by skin testing. There were six incidents when bradycardia was temporally related to sugammadex administration, although it was not possible to exclude other causes or contributory factors.
There were nine incidents in which sugammadex was used to reverse aminosteroid-related neuromuscular blockade successfully in a 'can't intubate, can't oxygenate' (CICO) situation, and a further 67 incidents in which sugammadex was used to reverse aminosteroid neuromuscular blockade as part of the management of other critical incidents. While sugammadex was used during the management of 16 cases of anaphylaxis, there was no clear indication that this altered the course of the anaphylaxis in any of the cases. These reports indicate that sugammadex can be a potential trigger for anaphylaxis and that its use may be associated with the development of significant bradycardia. However, it is not possible to estimate or even speculate on the incidence of these sugammadex-related incidents on the basis of voluntary reporting to a database such as webAIRS. The reports also indicate that sugammadex has been used successfully to reverse residual or deep aminosteroid neuromuscular blockade in critical incident situations and to help rescue CICO scenarios. These findings provide further support for ensuring the ready availability of sugammadex wherever aminosteroid muscle relaxants are used. Answer: Yes, there is a link between undesirable and critical events in anesthesia as indicated by a study that implemented a systematic reporting system for monitoring events during elective surgery. The study found a strong association between critical outcome events and critical process events, undesirable outcome events, and undesirable process events. Specifically, the odds ratios for these associations were 11.5 for critical process events, 4.8 for undesirable outcome events, and 4.8 for undesirable process events, indicating that undesirable events are associated with more severe events and with pre-anaesthetic risk factors (PUBMED:11101704).
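The associations in the answer above are reported as odds ratios with 95% confidence intervals (for example, OR 11.5, 95% CI 4.4 to 27.8). The abstract does not give the underlying 2x2 counts, so the Python sketch below uses hypothetical counts purely to illustrate how an odds ratio and its Wald confidence interval are computed from a table of reported events.

import math

# Odds ratio with a Wald 95% CI from a 2x2 table of reported events.
# The counts here are hypothetical; the source abstract reports only the
# resulting OR and CI, not the table itself.
def odds_ratio_ci(a, b, c, d, z=1.96):
    """a, b: outcome present/absent with exposure; c, d: without exposure."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

or_, lower, upper = odds_ratio_ci(a=12, b=88, c=15, d=985)
print(f"OR = {or_:.1f} (95% CI {lower:.1f} to {upper:.1f})")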
Instruction: Is digital intubation an option for emergency physicians in definitive airway management? Abstracts: abstract_id: PUBMED:16984845 Is digital intubation an option for emergency physicians in definitive airway management? Objectives: This study was designed to determine whether digital intubation is a valid option for definitive airway control by emergency physicians. Methods: Digital intubation was performed by 18 emergency medicine residents and 4 staff emergency medicine physicians on 6 different cadavers. Placement was confirmed by direct laryngoscopy. The total time for all attempts used, as well as the number of attempts, was recorded. Each participant attempted intubation on all 6 cadavers. Results: For 5 of the 6 cadavers, successful intubation occurred 90.9% of the time (confidence interval [CI], 85.5%-96.3%) for all participants. The average number of attempts for these 5 cadavers was 1.5 (CI, 1.4-1.7), and the average time required for success or failure was 20.8 seconds (CI, 16.9-24.8). The sixth cadaver developed soft tissue damage and a false passage near the vocal cords resulting in multiple failed attempts. Conclusions: Although the gold standard for routine endotracheal intubation remains direct laryngoscopy, its effectiveness in certain situations may be limited. We believe that digital intubation provides emergency physicians with another option in securing the unprotected airway. abstract_id: PUBMED:36353384 Airway management practices among emergency physicians: An observational study. Objectives: Emergency airway management is an integral part of patient stabilization. It is an essential skill for an emergency physician to master. There is a paucity of literature on airway management from low-to-middle-income countries like India, where the emergency medicine (EM) specialty is still in its infancy. We conducted this study to identify the existing airway management practices among emergency physicians in our tertiary care center. This study could pave the way for national airway registries. Methods: This prospective, observational study was conducted in the emergency department (ED) of a tertiary care center in India for 16 months. We included 166 patients who underwent emergency endotracheal intubation in the ED, irrespective of their age or underlying condition. The patients were observed for 15 min after intubation to identify any associated adverse events. We collected data about patients' demographic profile, indication for intubation, techniques of airway management, medications used, specialty of the physician performing intubation, use of preintubation and postintubation checklists, vitals before and after intubation, and any adverse events following intubation. Results: A total of 166 patients who required definitive airway management in the ED were recruited for the study. The mean age of patients was 45.5 ± 20.1 years. Males comprised 61.4% of the patients. One hundred and forty-four patients were nontrauma cases and the remaining 22 cases were related to trauma. The most common indication for emergency airway management was altered mental status among nontrauma encounters and traumatic brain injury among trauma patients. Rapid sequence intubation (RSI) was the most common method employed (72.9% of cases). The most common agents used for induction and paralysis were etomidate and rocuronium, respectively. A direct laryngoscope was used in about 95% of cases. The first pass success rate in our study was 78.3%.
EM residents were able to perform orotracheal intubation for all patients and none required a surgical airway. The incidence of adverse events within 15 min of intubation was 58.4%. Common complications observed were desaturation, right mainstem bronchus intubation, and equipment failure. Postintubation cardiac arrest occurred in around 5% of cases. Conclusion: RSI remains the most common method employed for emergency airway management. There exists heterogeneity in the practice and its associated complications. Hence, regular surveillance, quality improvement, and training are imperative to provide good patient care. abstract_id: PUBMED:29598840 Emergency Tracheal Intubation in an Ankylosing Spondylitis Patient in a Sitting Position Using an Airway Scope Combined with Face-to-Face and Digital Intubation. Background: Emergency intubation in a patient with advanced ankylosing spondylitis (AS) who presents with severe thoracic kyphosis deformity, rigid cervical flexion deformity of the neck, and an inability to achieve the supine position is particularly challenging to emergency physicians. Case Report: This study reports on an AS patient presenting with these difficult airway characteristics and acute respiratory failure who was successfully intubated using video laryngoscope-assisted inverse intubation (II) and blind digital intubation (BDI). By using Pentax AirwayScope-assisted inverse intubation, the tracheal tube tip was passed through the glottic opening, but an unexpected resistance occurred during tube advancement, which was overcome by subsequent BDI. By using laryngoscope-assisted II complemented by the BDI technique, the patient was successfully intubated without complications. WHY SHOULD AN EMERGENCY PHYSICIAN BE AWARE OF THIS?: Our case demonstrated that these two emergency airway management techniques are valuable backup methods and complement each other when applied to certain unstable airways, especially when the traditional patient position is not easily accomplished. Unexpected difficulty is not rare during airway management; emergency physicians should always be well prepared both mentally and practically. abstract_id: PUBMED:29268532 Clinical consensus of emergency airway management. Airway management is a common and key method to maintain and improve patients' external respiratory function. Emergency physicians need a more appropriate guide to airway management. We concisely summarized the current circumstances of Chinese emergency airway management. Then, we proposed four principles: (I) priority to ventilation and oxygenation; (II) evaluation before intubation; (III) higher level of preparation (de-escalation); (IV) simplest (and least potentially harmful) form of intubation. We proposed a "CHANNEL" workflow to direct initial emergency airway management, and an algorithm was presented to help emergency physicians understand the key points of airway management and make further medical decisions. Finally, we introduced the pharmacology of airway management. abstract_id: PUBMED:24858914 The effect of personal protective equipment on emergency airway management by emergency physicians: a mannequin study. Objective: Emergency medical personnel are at risk of secondary contamination when treating victims of chemical-biological-radiological-nuclear incidents. Hence, it is crucial to train them on the appropriate management of patients involved in chemical-biological-radiological-nuclear incidents.
Personal protective equipment (PPE) plays an important role in treating patients suffering from various types of poisoning. However, very few studies have examined whether the use of PPE impedes airway management in an emergency department setting. The present study evaluated the effect of PPE on physicians' performance of emergency airway management using mannequins. Methods: Forty emergency physicians with 1-4 years of experience participated, and were divided by years of experience (1-2 vs. 3-4 years). Each participant both placed a tracheal tube and inserted a laryngeal mask airway into a mannequin, with and without wearing protection, using preassembled intubation aids. The intubation time for both methods was assessed along with participants' preferences and experiences in airway management. Results: The mean (SD) times to successful tracheal tube/mask placement with and without protection were similar [tracheal tube: 17.86 s (6.38) vs. 17.83 s (11.13), P=0.99; laryngeal mask: 10.51 s (4.39) vs. 9.65 s (3.29), P=0.32]. Conclusion: Protective equipment had no effect on physicians' emergency airway placement time. The effect of wearing PPE is limited if postintubation care is excluded from the evaluation. Furthermore, intubation experience influenced participants' preferred approach for airway management. abstract_id: PUBMED:35001821 Prehospital Surgical Airway Management: An NAEMSP Position Statement and Resource Document. Bag-valve-mask ventilation and endotracheal intubation have been the mainstay of prehospital airway management for over four decades. Recently, supraglottic device use has risen due to various factors. The combination of bag-valve-mask ventilation, endotracheal intubation, and supraglottic devices allows for successful airway management in a majority of patients. However, there exists a small portion of patients who are unable to be intubated and cannot be adequately ventilated with either a facemask or a supraglottic airway. These patients require an emergent surgical airway. A surgical airway is an important component of all airway algorithms, and in some cases may be the only viable approach; therefore, it is imperative that EMS agencies that are credentialed to manage airways have the capability to perform surgical airways when appropriate. The National Association of Emergency Medical Services Physicians (NAEMSP) recommends the following for emergency medical services (EMS) agencies that provide advanced airway management. A surgical airway is reasonable in the prehospital setting when the airway cannot be secured by less invasive means. When indicated, a surgical airway should be performed without delay. A surgical airway is not a substitute for other airway management tools and techniques. It should not be the only rescue option available. Success of an open surgical approach using a scalpel is higher than that of percutaneous Seldinger techniques or needle-jet ventilation in the emergency setting. abstract_id: PUBMED:25708959 Incidence of difficult airway situations during prehospital airway management by emergency physicians--a retrospective analysis of 692 consecutive patients. Introduction: In the prehospital setting, advanced airway management is challenging as it is frequently affected by facial trauma, pharyngeal obstruction or limited access to the patient and/or the patient's airway.
Therefore, the incidence of prehospital difficult airway management is likely to be higher compared to the in-hospital setting, and success rates of advanced airway management range between 80 and 99%. Methods: 3961 patients treated by an emergency physician in Zurich, Switzerland, were included in this retrospective analysis in order to determine the incidence of a difficult airway along with potential circumstantial risk factors like gender, necessity of CPR, NACA score, GCS, use and type of muscle relaxant and use of hypnotic drugs. Results: 692 patients underwent advanced prehospital airway management. Seven patients were excluded due to incomplete or incongruent documentation, resulting in 685 patients included in the statistical analysis. Difficult intubation was recorded in 22 patients, representing an incidence of a difficult airway of 3.2%. Of these 22 patients, 15 patients were intubated successfully, whereas seven patients (1%) had to be ventilated with a bag valve mask during the whole procedure. Conclusion: In this physician-led service, one out of five prehospital patients requires airway management. Incidence of advanced prehospital difficult airway management is 3.2% and the eventual success rate is 99%, if performed by trained emergency physicians. A total of 1% of all prehospital intubation attempts failed and an alternative airway device was necessary. abstract_id: PUBMED:22273823 Implementation of the laryngeal tube for prehospital airway management: training of 1,069 emergency physicians and paramedics. Objective: The European Resuscitation Council recommends that only rescuers experienced and well-trained in airway management should perform endotracheal intubation. Less trained rescuers should use alternative airway devices instead. Therefore, a concept to train almost 1,100 emergency physicians (EP) and emergency medical technicians (EMT) in prehospital airway management using the disposable laryngeal tube suction (LTS-D) is presented. Methods: In five operational areas of emergency medicine services in Germany and Switzerland all EPs and EMTs were trained in the use of the LTS-D by means of a standardized curriculum in the years 2006 and 2007. The main focus of the training was on different insertion techniques and LTS-D use in children and infants. Subsequently, all prehospital LTS-D applications from 2008 to 2010 were prospectively recorded. Results: None of the 762 participating EMTs and less than 20% of the EPs had previous clinical experience with the LTS-D. After the theoretical (practical) part of the training, the participants self-assessed their personal familiarity in using the LTS-D with a median value of 8 (8) and a range of 2-10 (range 1-10) of 10 points (1: worst, 10: best). Within the 3-year follow-up period the LTS-D was used in 303 prehospital cases of which 296 were successfully managed with the device. During the first year the LTS-D was used as primary airway in more than half of the cases, i.e. without previous attempts of endotracheal intubation. In the following years such cases decreased to 40% without reaching statistical significance. However, the mean number of intubation attempts which failed before the LTS-D was used as a rescue device decreased significantly during the study period (2008: 2.2 ± 0.3; 2009: 1.6 ± 0.4; 2010: 1.7 ± 0.3). Conclusion: A standardized training concept enabled almost 1,100 rescuers to be trained in the use of an alternative airway device and to successfully implement the LTS-D into the prehospital airway management algorithm.
Because the LTS-D recently became an accepted alternative to endotracheal intubation in difficult airway scenarios, the number of intubation attempts before considering an alternative airway device is steadily decreasing. abstract_id: PUBMED:32055885 Airway management in preclinical emergency anesthesia with respect to specialty and education. Background And Objective: Difficult airway management is a key skill in preclinical emergency medicine. A lower rate of subjective difficult airways and an increased success rate of endotracheal intubation have been reported for highly trained emergency physicians. The aim of this study was therefore to analyze the effect for different specialists and the individual state of training in the German emergency medical system. Material And Methods: In a retrospective register analysis of 6024 preclinical anesthesia procedures, the frequencies of airway devices, neuromuscular blocking agents, capnography and difficult airways were analyzed with respect to specialization and status of training. Additionally, low, medium and highly experienced emergency physicians in airway management were summarized by specialization and status of training according to the Dreyfus model of skill acquisition and compared. Results: The incidence of subjective difficult airway situations was 10% for anesthesiological emergency physicians compared to 15-20% for other disciplines. The latter used supraglottic airway devices more often (7-9% vs. 4%) and video laryngoscopes less often (3% vs. 5%) compared to anesthesiological emergency physicians. The discipline-related state of training was inhomogeneous and revealed a reduced rate of supraglottic airway devices for internal specialists with further training (10% vs. 2%). Anesthetists specialized in intensive care medicine used capnography less frequently compared to other anesthetists (79% vs. 72%). With higher levels of experience in airway management, the frequency of endotracheal intubation (86% vs. 94%), neuromuscular blocking agents (59% vs. 73%) and video laryngoscopy (3% vs. 6%) increased and the incidence of subjective difficult airway situations (16% vs. 10%) decreased. Conclusion: The level of training in airway management, especially for non-anesthetists, is inhomogeneous. The recently published German S1 guidelines for prehospital airway management recommend education and training as well as the primary use of the video laryngoscope with Macintosh blade. The implementation could lower the incidence of subjective difficult airways. abstract_id: PUBMED:29926691 Prospective Observational Study of Emergency Airway Management in Emergency Department. Background And Objective: In the emergency department (ED), airway management by emergency physicians is becoming more common. The presented study described emergency intubation indications, methods, operator characteristics, success rates, and adverse event rates. Material And Method: A prospective observational study using a data collection form was conducted in the ED of Thammasat University Hospital from September 2012 to August 2015. Data were collected by each physician intubator at the time of each intubation. Results: A total of 1,393 encounters that underwent intubation in the ED were recorded. Intubation was ultimately successful in 99.43%. Cardiac arrest (18.95%) and head injury (7.32%) were the most common indications for intubation in medical encounters and trauma encounters, respectively.
The overall success rate on the first attempt was 74.66% (95% confidence interval (CI) 72.37-76.94%). Rapid sequence intubation (RSI) was used in 22.47% of all encounters and had a higher success rate on the first attempt than sedation without paralysis (79.55% vs. 66.09%, risk difference 15.93%, 95% CI for difference [8.64-23.23%]; p<0.01). Senior physicians in emergency medicine had the highest rates of successful intubation on the first attempt (81.94%, 95% CI 78.84-85.03%). The overall adverse event rate was 8.47%. Conclusion: The presented study observed high overall intubation success rates in the ED. RSI had the highest first-attempt success rate. Residents and staff in emergency medicine take a major role in airway management. Training in emergency medicine residency programs can improve airway management skills. Answer: Yes, digital intubation is considered an option for emergency physicians in definitive airway management. A study designed to determine whether digital intubation is a valid option for definitive airway control by emergency physicians found that for 5 of the 6 cadavers studied, successful intubation occurred 90.9% of the time, with an average of 1.5 attempts and an average time to success or failure of 20.8 seconds. This suggests that digital intubation can provide emergency physicians with another option in securing the unprotected airway, especially when the effectiveness of direct laryngoscopy may be limited in certain situations (PUBMED:16984845). Additionally, a case report of an emergency intubation in a patient with advanced ankylosing spondylitis, who presented with difficult airway characteristics, demonstrated successful intubation using video laryngoscope-assisted inverse intubation complemented by blind digital intubation. This case highlighted that digital intubation can be a valuable backup method and complement other emergency airway management techniques, particularly when traditional patient positioning is not feasible (PUBMED:29598840). However, it is important to note that while digital intubation is an option, the standard practice for emergency airway management often involves methods such as rapid sequence intubation (RSI), with direct laryngoscopy being used in about 95% of cases in one observational study (PUBMED:36353384). The use of alternative airway devices, such as the laryngeal tube, has also been implemented in prehospital airway management training for emergency physicians and paramedics, indicating a range of techniques and devices are available for emergency airway management (PUBMED:22273823). In summary, digital intubation is a viable option for emergency physicians in definitive airway management, particularly in challenging situations where other methods may be limited, but it is one of several techniques that may be employed depending on the circumstances and the training of the emergency physician.
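The first-pass comparison in the Thammasat study above (79.55% with RSI versus 66.09% with sedation without paralysis) is a two-proportion risk difference with a Wald 95% confidence interval. The abstract does not state the group sizes, so the denominators in the Python sketch below are assumptions (roughly 22.47% of the 1,393 encounters for the RSI group) used only to show the form of the calculation, not to reproduce the exact published interval.

import math

# Two-proportion risk difference with a Wald 95% CI. Group sizes are assumed
# for illustration; only the proportions are taken from the abstract.
def risk_difference(successes1, n1, successes2, n2, z=1.96):
    p1, p2 = successes1 / n1, successes2 / n2
    rd = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return rd, rd - z * se, rd + z * se

# Assumed sizes: ~313 RSI encounters (22.47% of 1,393) and ~600 encounters
# managed with sedation without paralysis.
rd, lower, upper = risk_difference(successes1=249, n1=313, successes2=397, n2=600)
print(f"risk difference = {rd:.1%} (95% CI {lower:.1%} to {upper:.1%})")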
Instruction: Can pregnant women obtain their own specimens for group B streptococcus? Abstracts: abstract_id: PUBMED:38054718 Evaluation of the Xpert Xpress GBS test for rapid detection of group B Streptococcus in pregnant women. Importance: This was the first study evaluating the performance of the Xpert Xpress group B Streptococcus (GBS) test using rectovaginal swabs from Chinese pregnant women. Compared to the other three assays, the Xpert Xpress GBS test demonstrated high sensitivity and specificity when screening 939 pregnant women for GBS in rectovaginal specimens. Additionally, its reduced time to obtain results makes it valuable for the rapid detection of GBS. abstract_id: PUBMED:30787811 Group B Streptococcus Colonization among Saudi Women During Labor. Background: The presence of group B streptococcus in the genital area during pregnancy and labor is associated with high neonatal morbidity and mortality. However, the exact prevalence of group B streptococcus among Saudi women has not yet been established. Objective: The aim of this study was to determine the prevalence of group B streptococcal colonization in Saudi pregnant women as a primary end-point and neonatal complications as a secondary end-point. Materials And Methods: A prospective, observational, cross-sectional study was conducted to estimate the prevalence of group B streptococcal colonization among Saudi women admitted in labor to the King Fahd Hospital of the University, Al-Khobar, Saudi Arabia. A total of 1371 maternal specimens (vaginal swabs, rectal swabs and midstream urine) were collected from 457 patients between October 2011 and September 2016. Neonatal specimens (urine, blood and cerebrospinal fluid) were collected if clinically indicated. Results: Of the 457 women enrolled in this study, 87 (19%) had positive cultures for group B streptococcus either in the vaginal or rectal swab or both. Group B streptococcus was also found to be the most commonly isolated organism. In total, there were five cases of neonatal sepsis, of which one early-onset neonatal sepsis was caused by group B streptococcus. Conclusions: This study found that the prevalence of group B streptococcal colonization is 19% among Saudi women admitted in labor to the King Fahd Hospital of the University. abstract_id: PUBMED:23113150 Evaluation of culture and PCR methods for diagnosis of group B streptococcus carriage in Iranian pregnant women. Background: Group B streptococcus (GBS) is one of the most important causes of morbidity and mortality among newborns, especially in developing countries. It has been shown that the screening approach rather than the identification of maternal clinical risk factors for early-onset neonatal GBS disease is more effective in preventing early-onset GBS neonatal disease. The objective of this study was to detect GBS among clinical samples of women using PCR and standard microbiological culture. Methods: Samples were taken from 375 women at 28-38 weeks of gestation during six months, from January 15 to June 15, 2011, at a hospital in Tehran, Iran. Samples were tested by standard culture using Todd-Hewitt broth and blood agar, and by PCR targeting the cfb gene. Results: Among the 375 women, 35 (9.3%) were identified as carriers of group B streptococci on the basis of the results of the cultures of specimens, compared to 42 (11.2%) on the basis of PCR assay. Conclusion: We found that GBS can be detected rapidly and reliably by a PCR assay in vaginal secretions from women at the time of delivery.
This study also showed that the incidence of GBS is high in Iranian women. abstract_id: PUBMED:34796059 Awareness of Pregnancy Screening for Group B Streptococcus Infection Among Women of Reproductive Age and Physicians in Jeddah, Saudi Arabia. Background Group B Streptococcus is part of the normal flora of the female urogenital tract and rectum. Vaginal colonization and transmission of this bacterium during delivery can lead to life-threatening neonatal complications, which can be prevented by screening and the administration of intrapartum antibiotics. This study's aim was to assess the level of awareness of antenatal screening of Group B Streptococcus among women and physicians in Jeddah, Saudi Arabia. Methods A cross-sectional study using an online survey from a previously published study was distributed among 767 participants in Jeddah from June to August 2020. The participants were family medicine or obstetrics and gynecology physicians and women of reproductive age. Results Our results revealed a good level of knowledge of the physicians, although almost half of them reported the need for training to correctly perform screenings. The level of the women's knowledge was relatively poor, their mean knowledge was 50.7%, and the majority were unaware of this infection (85.3%). Conclusions This study found a low level of knowledge of Group B Streptococcus among women of reproductive age and physicians in obstetrics and gynecology and family medicine. These findings confirm the importance of increasing the awareness of Group B Streptococcus among these populations to avoid complications associated with this infection. abstract_id: PUBMED:38247643 Anovaginal Colonization by Group B Streptococcus and Streptococcus anginosus among Pregnant Women in Brazil and Its Association with Clinical Features. Streptococcus agalactiae (Group B Streptococcus; GBS) is a leading cause of neonatal invasive disease worldwide. GBS can colonize the human gastrointestinal and genitourinary tracts, and the anovaginal colonization of pregnant women is the main source for neonatal infection. Streptococcus anginosus, in turn, can colonize the human upper respiratory, gastrointestinal, and genitourinary tracts but has rarely been observed causing disease. However, in recent years, S. anginosus has been increasingly associated with human infections, mainly in the bloodstream and gastrointestinal and genitourinary tracts. Although anovaginal screening for GBS is common during pregnancy, data regarding the anovaginal colonization of pregnant women by S. anginosus are still scarce. Here, we show that during the assessment of anovaginal GBS colonization rates among pregnant women living in Rio de Janeiro, Brazil, S. anginosus was also commonly detected, and S. anginosus isolates presented a similar colony morphology and color pattern to GBS in chromogenic media. GBS was detected in 48 (12%) while S. anginosus was detected in 17 (4.3%) of the 399 anovaginal samples analyzed. The use of antibiotics during pregnancy and history of urinary tract infections and sexually transmitted infections were associated with the presence of S. anginosus. In turn, previous preterm birth was associated with the presence of GBS (p < 0.05). The correlation of GBS and S. anginosus with relevant clinical features of pregnant women in Rio de Janeiro, Brazil, highlights the need for the further investigation of these important bacteria in relation to this special population.
abstract_id: PUBMED:10414060 Self-collection of group B Streptococcus cultures in pregnant women. Objective: This study assesses the sensitivity of self-collected rectovaginal culture specimens for group B Streptococcus by pregnant patients. Methods: A volunteer sample of 240 pregnant women at 28 weeks gestation self-collected rectovaginal culture swabs to screen for the presence of group B Streptococcus. The patients' physicians collected second specimens for comparison. Results: Twenty-four of 240 women grew group B Streptococcus on at least one culture (incidence, 10%). Twenty physician-collected specimens and 19 patient-collected specimens were positive (83 and 79% sensitivity, respectively). Fifteen patients (62.5%) had both physician-collected and patient-collected cultures grow group B Streptococcus. Cohen's kappa (kappa = 0.75) indicates a high degree of agreement between patient-collected and physician-collected cultures. Conclusions: Pregnant women are as likely as their attending physicians to obtain positive cultures for group B Streptococcus by self-collection of rectovaginal swabs. abstract_id: PUBMED:24520495 The prevalence of group B streptococcus colonization in Iranian pregnant women and its subsequent outcome. Background: Group B streptococcus colonization in pregnant women usually has no symptoms, but it is one of the major factors of newborn infection in developed countries. In Iran, there is little information about the prevalence of maternal colonization and newborns infected by group B streptococcus. In order to find the necessary information to create a protocol for prevention and treatment of group B streptococcus infection in newborns, we conducted a study of its prevalence among Iranian pregnant women and its vertical transmission to their newborns. Materials And Methods: This is a cross-sectional descriptive and analytic study performed at the Prenatal Care Clinic of the Sarem Hospital from 2009 to 2011. Pregnant women with a gestational age of 35-37 weeks were enrolled in the study. The vaginal culture for group B streptococcus was done for 980 mothers based on our protocol. Among the 980 mothers, 48 had a positive vaginal culture; 8 cases among these 48 mothers were positive for both vaginal and urine culture. Babies of mothers with a positive vaginal culture were screened for infection using complete blood count/blood culture (B/C) and C-reactive protein (CRP). Then, a complete sepsis workup was performed for babies with any signs of infection in the first 48 hours after birth, and they received antibiotic therapy if necessary. All collected data were analyzed (SPSS version 15). Results: Among 980 pregnant women with vaginal culture, 48 cases had positive group B streptococcus cultures, among which 8 mothers also had a positive group B streptococcus urine culture. Our findings revealed that 22 (50%) symptomatic neonates were born from the mothers with positive vaginal culture for group B streptococcus. About 28 of them (63%) had an absolute neutrophil count more than normal, and 4 (9.1%) newborns were omitted from the study. Therefore, 50% of neonates showed clinical features, whereas para-clinical tests were required to detect the infection in the remaining neonates, who showed no signs or symptoms.
Conclusion: The colonization of group B streptococcus in Iranian women is significant, while 50% of newborns from mothers with a positive vaginal culture were symptomatic after birth; therefore, screening of newborns for group B streptococcus infection is recommended to become a routine practice in all healthcare centers in Iran. abstract_id: PUBMED:34183947 Prevalence of Group B Streptococcus in Vagina and Rectum of Pregnant Women of Islamic & Non-Islamic Countries: A Systematic Review and Meta-Analysis. Background: Group B streptococcus, or Streptococcus agalactiae, is a gram-positive beta-hemolytic bacterium that is a main cause of neonatal infections. This study aimed at determining the prevalence of GBS in the world and clarifying the rate of this infection in Islamic and non-Islamic countries. Methods: We performed a systematic search using different databases, including Medline, Scopus, Science Direct, PsycINFO, ProQuest and Web of Science, for publications up to Feb 2019. We undertook meta-analysis to obtain the pooled estimate of prevalence of GBS colonization in Islamic and non-Islamic countries. Results: Among 3324 papers searched, we identified 245 full texts on the prevalence of GBS in pregnancy; 131 were included in the final analysis. The estimated mean prevalence of maternal GBS colonization was 15.5% (95% CI: 14.2-17) worldwide; it was 14% (95% CI: 11-16.8) in Islamic and 16.3% (95% CI: 14.6-18.1) in non-Islamic countries, and this difference was statistically significant. Moreover, with regard to sampling site, the prevalence of GBS colonization was 11.1% for vaginal sampling and 18.1% for combined vaginal-rectal sampling. Conclusion: Frequent washing of the perineum based on religious instructions in Islamic countries can diminish the rate of GBS colonization in pregnant women. abstract_id: PUBMED:33536356 Recent Epidemiological Changes in Group B Streptococcus Among Pregnant Korean Women. Background: Although the group B Streptococcus (GBS) colonization rate among pregnant Korean women is lower than that among women from many Western countries, recent data show an upward trend. We investigated recent epidemiological changes in GBS among pregnant Korean women in terms of colonization rate, antimicrobial susceptibility, serotype, and resistance genotype. Methods: Vaginal and anorectal swab specimens from 379 pregnant Korean women were cultured on Strep B Carrot Broth with GBS Detect (Hardy Diagnostics, USA), selective Todd-Hewitt broth (Becton Dickinson, USA), and Granada agar plate medium (Becton Dickinson). The antimicrobial susceptibility, serotypes, and macrolide-lincosamide-streptogramin B (MLSB) resistance genes of the GBS isolates were tested. Results: The GBS colonization rate among pregnant Korean women was 19.8% (75/379). Colonization rates using Strep B Carrot Broth with GBS Detect, selective Todd-Hewitt broth, and Granada agar plate medium cultures were 19.5%, 19.3%, and 15.0%, respectively. Six pregnant women were colonized by non-beta-hemolytic GBS and were detected only in Strep B Carrot Broth with GBS Detect. Resistance rates of GBS to clindamycin, erythromycin, and tetracycline were 16.0%, 28.0%, and 42.7%, respectively. The most common GBS serotypes were V (22.7%), VIII (20.0%), and III (20.0%). The frequencies of the MLSB resistance genes erm(B) and erm(TR) were 63.6% and 36.4%, respectively. Conclusions: The GBS colonization rate among pregnant Korean women has risen to levels observed in Western countries.
To accurately evaluate GBS epidemiology among pregnant Korean women, periodic studies in multiple centers, including primary clinics, are necessary. abstract_id: PUBMED:28713513 The prevalence and risk factors of group B streptococcus colonization in Iranian pregnant women. Background: Group B Streptococcus (GBS) is a leading cause of serious neonatal infections. Although great progress has been made in preventing prenatal GBS, its colonization rate in different regions of Iran remains unknown. Aim: To determine GBS colonization prevalence and its risk factors among Iranian pregnant women. Methods: This prospective cross-sectional study was performed on 186 pregnant women, who attended Boo-Ali hospital which is affiliated with Islamic Azad University in Tehran, Iran, from March 2014 to June 2015. The demographic, obstetric and gynecological data were gathered. A recto-vaginal culture was taken from each patient, with a sterile swab, in lithotomy position without using speculum, and vaginal pH was measured. Patients with positive GBS received IV antibiotic therapy during labor (penicillin G 3 gram at first dose then 1.5 gram Q/4h until delivery). Data were analyzed by statistical software SPSS version 21. Statistical tests for differences were performed by Chi-square test. Potential confounding was assessed by logistic regression. Level of significance was set at p<0.05. Results: Twenty-two (11.8%) patients had positive recto-vaginal colonization. No significant differences between colonized and GBS-negative women with regard to age, obstetrics history and socio-economic factor were noticed. In contrast, smoking, history of previous infection with HPV, presence of vulvitis and a vaginal pH>4.5 were associated with GBS colonization (p≤0.05). Conclusions: With a relatively low prevalence and few significantly correlated factors, it is hardly possible to define a high risk group of pregnant women for GBS colonization. Therefore, thorough measures should be taken in order to prevent infection complications in mothers and neonates in the Iranian population. Answer: Yes, pregnant women can obtain their own specimens for group B streptococcus. A study assessing the sensitivity of self-collected rectovaginal culture specimens for group B Streptococcus by pregnant patients found that pregnant women are as likely as their attending physicians to obtain positive cultures for group B Streptococcus by self-collection of rectovaginal swabs. The study involved a volunteer sample of 240 pregnant women at 28 weeks gestation who self-collected rectovaginal culture swabs to screen for the presence of group B Streptococcus. The results showed a high degree of agreement between patient-collected and physician-collected cultures, indicating that self-collection by pregnant women is a viable method for obtaining specimens for group B Streptococcus screening (PUBMED:10414060).
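The agreement statistic quoted in PUBMED:10414060 can be reconstructed from the counts reported in that abstract. The sketch below is illustrative only (it is not the authors' analysis code); the four cells of the 2x2 table are inferred from the reported marginals of 240 women, 20 physician-collected positives, 19 patient-collected positives and 15 positive on both collections.

```python
# Illustrative reconstruction of the agreement figures in PUBMED:10414060.
# Cell counts are inferred from the reported marginals: 240 women screened,
# 20 physician-collected positives, 19 patient-collected positives,
# 15 positive on both, 24 positive on at least one culture.

total = 240
both_pos = 15
phys_only = 20 - both_pos                              # 5
pat_only = 19 - both_pos                               # 4
both_neg = total - both_pos - phys_only - pat_only     # 216
any_pos = both_pos + phys_only + pat_only              # 24

# Sensitivity of each collection method against "positive on either culture"
print(f"physician sensitivity = {20 / any_pos:.0%}, patient sensitivity = {19 / any_pos:.0%}")

# Cohen's kappa: chance-corrected agreement between the two collections
p_observed = (both_pos + both_neg) / total
p_physician = 20 / total
p_patient = 19 / total
p_expected = p_physician * p_patient + (1 - p_physician) * (1 - p_patient)
kappa = (p_observed - p_expected) / (1 - p_expected)
print(f"observed agreement = {p_observed:.3f}, kappa = {kappa:.2f}")
```

Under these reconstructed counts the script returns sensitivities of roughly 83% and 79% and kappa of about 0.75, matching the figures reported in the abstract.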
Instruction: Spermidine: A predictor for neurological outcome and infarct size in focal cerebral ischemia? Abstracts: abstract_id: PUBMED:24741185 Effects of delayed puerarin treatment in long-term neurological outcomes of focal ischemic stroke in rats. Objective: The present study aimed to investigate the therapeutic effects of delayed puerarin treatment on neurological outcomes after middle cerebral artery occlusion (MCAO) in rats. Materials And Methods: Male Wistar rats were subjected to MCAO for 120 min followed by reperfusion for 14 days. Puerarin (0, 50, 100, 200 mg/kg, intra-peritoneally) was administered at 24 h after stroke onset and repeated daily for 14 days. Neurological deficits were evaluated at 1, 4, 7, 14 days after stroke. Brain infarct volume and peri-infarct context vessel density were examined at 14 days after stroke. Results: Puerarin significantly improved neurological functions up to 14 days after stroke and decreased the infarct volume with doses of 50 mg/kg and 100 mg/kg compared with saline controls. Puerarin treatment also significantly increased peri-infarct context vessel density at 14 days after stroke. Conclusions: Delayed treatment of puerarin initiated at 24 h after stroke is beneficial with improved long-term neurological outcomes and reduced infarction volume in focal ischemic stroke in rats. Enhanced vascular remodeling by puerarin might at least partially contribute to its beneficial effects. abstract_id: PUBMED:3598674 The therapeutic value of nimodipine in experimental focal cerebral ischemia. Neurological outcome and histopathological findings. Recent studies suggest that nimodipine, a potent calcium-channel antagonist that causes significant cerebrovascular dilatation, may improve neurological outcome after acute experimental permanent focal cerebral ischemia when given before or immediately after occlusion of the middle cerebral artery (MCA) in various animals. The authors describe the effect of nimodipine on cerebral ischemia in a rat model. At 1, 4, or 6 hours after occlusion of the MCA, rats were treated in a double-blind technique with either nimodipine, placebo, or saline. Neurological and neuropathological evaluation was performed at 24 hours. Neurological outcome was better in rats treated with nimodipine 1, 4, or 6 hours after occlusion (p less than 0.001, p less than 0.01, p less than 0.05 respectively), and the size of areas of infarction was statistically smaller in nimodipine-treated groups (p less than 0.01, p less than 0.01, p less than 0.05, respectively) when compared with control rats treated with saline or placebo. The best neurological outcome and the smallest area of infarction were found in nimodipine-treated rats 1 hour after occlusion. Compared with controls, the size of the periphery of the infarcted area was smaller in nimodipine-treated rats. The results show that nimodipine improves neurological outcome and decreases the size of infarction when administered up to 6 hours after ischemic insult. These results suggest a possible mechanism of action of nimodipine on the "penumbra" of the ischemic area. abstract_id: PUBMED:25004874 Ginkgo biloba on focal cerebral ischemia: a systematic review and meta-analysis. Ginkgo biloba extract (EGB) has been used in traditional medicines for centuries, and although its application to cerebral ischemia has been of great interest in recent years, high quality evidence-based clinical trials have not been carried out.
This systematic review and meta-analysis aimed to examine the neuroprotective effect of EGB on focal cerebral ischemia in animal models. A systematic literature search was performed using five databases spanning January 1980-July 2013. The outcome was assessed using the effect size, which was based on infarct size and/or neurological score. A total of 42 studies with 1,232 experimental animals matched our inclusion criteria. The results revealed that EGB improved the effect size by 34% compared to the control group. The animal species, the method and time to measure outcome, and the route and dosage of EGB administration affected the variability of the effect size. Mechanisms of EGB neuroprotection were reported as anti-apoptotic, anti-oxidative, and anti-inflammatory. In conclusion, EGB exerts a significant protective effect on experimental focal cerebral ischemia. However, possible experimental bias should be taken into account in future clinical studies. abstract_id: PUBMED:11867881 Hyperglycemia in patients with focal cerebral ischemia after intravenous thrombolysis: influence on clinical outcome and infarct size. The aim of the present prospective study was to investigate whether hyperglycemia influences the clinical outcome or the infarct size after intravenous thrombolysis of focal cerebral ischemia. A consecutive series of hyperglycemic (n = 14) and normoglycemic patients (n = 17) with acute focal cerebral ischemia (<3 h) in the middle cerebral artery (MCA) territory received rtPA (0.9 mg/kg body weight) intravenously. Clinical outcome was measured using the NIH Stroke Score on admission and was followed up until day 28. Infarct volume was measured by diffusion-weighted MR imaging on admission, on days 3 and 7. There was a significantly better neurological outcome on day 28 in the normoglycemic patients than in the hyperglycemic group (NIH SS 4.0 versus 7.4; p < 0.05). The infarction volume increased significantly in the hyperglycemic patients (Δ = 39.9 ± 17.4%) compared to normoglycemic patients (Δ = 27.1 ± 14.1%) (p < 0.05). The present study suggests that hyperglycemia in patients with a focal MCA ischemia can cause a worse clinical outcome despite recanalization of the occluded vessel by thrombolysis therapy. This correlates with a markedly larger increase of the infarction volume in the hyperglycemic group. These results may be explained by an accentuated lactate accumulation and pH decrease by elevated energy levels which cannot be compensated by restoration of blood flow alone. abstract_id: PUBMED:3960288 Improved neurological outcome in experimental focal cerebral ischemia treated with propranolol. Propranolol has been shown to exert a protective effect in experimental myocardial, renal, and early acute focal cerebral ischemia. However, propranolol was not found to reduce infarct size in nitrous oxide-anesthetized, paralyzed, mechanically ventilated cats subjected to 6 hours of acute focal ischemia. The objective of the current investigation was to study further the effects of racemic (d,l)-propranolol on the evolution of acute focal cerebral ischemia in awake, conscious cats. Adult cats were anesthetized with halothane and underwent the implantation of an occluding device around the right middle cerebral artery. After a 48-hour recovery period, the right middle cerebral artery was occluded for 6 hours and then reopened, allowing reperfusion for an additional 6 hours.
Neurological examinations were conducted every 2 hours throughout each experiment. Ten cats received d,l-propranolol (2 mg/kg) 1 hour before occlusion, immediately before occlusion, and every 2 hours throughout each experiment. Eleven cats serving as controls were not treated. The neurological examination significantly improved over time in the treated group when compared to the untreated group (P = 0.01). Carbon filling defects, gross brain swelling, and infarct size were reduced in treated cats. The results of this study suggest that propranolol does have beneficial effects in acute focal cerebral ischemia. abstract_id: PUBMED:11136912 Spermidine: A predictor for neurological outcome and infarct size in focal cerebral ischemia? Background And Purpose: Polyamines are mainly restricted to the intracellular space. During focal cerebral ischemia, polyamines are released from the intracellular compartment. Experimental studies have implicated a marked elevation in brain tissue and blood. The aim of our study was to investigate whether the elevation of polyamines in the blood of patients with focal cerebral ischemia correlates with the clinical outcome and the infarct volume. Methods: Polyamines were measured in 16 patients with focal cerebral ischemia and in 8 healthy control subjects. Blood samples for polyamine measurement were taken at admission and at fixed time points for the next 28 days. Polyamines were analyzed in red blood cells by a high-pressure liquid chromatography system. Clinical findings were recorded with the NIH Stroke Scale score. Volume of infarction was analyzed from cranial CT at admission and on days 4 to 6 after ischemia. Results: A significant increase of the spermidine level in the peripheral blood could be observed in all patients with focal cerebral ischemia as compared with control subjects (P<0.01), starting with the admission. Spermidine values correlated positively with the clinical outcome at several time points in the first 48 hours (r=0.90 to 0.40; P<0.01) and with the infarct volume in cranial CT on days 4 to 6 (r=0.91; P<0.01). Conclusions: As hypothesized from experimental data, polyamine levels in blood increase in patients after focal cerebral ischemia. The results indicate that the peripheral spermidine level is closely associated with the clinical outcome as well as with the infarction volume. Therefore, polyamines may be used as a novel predictor for the prognosis of patients with focal cerebral ischemia. abstract_id: PUBMED:19679170 Isoflurane preconditioning improves short-term and long-term neurological outcome after focal brain ischemia in adult rats. Isoflurane preconditioning improved short-term neurological outcome after focal brain ischemia in adult rats. It is not known whether desflurane induces a delayed phase of preconditioning in the brain and whether isoflurane preconditioning-induced neuroprotection is long-lasting. Two-month-old Sprague-Dawley male rats were exposed to or were not exposed to isoflurane or desflurane for 30 min and then subjected to a 90 min middle cerebral arterial occlusion (MCAO) at 24 h after the anesthetic exposure. Neurological outcome was evaluated at 24 h or 4 weeks after the MCAO. The density of the terminal deoxynucleotidyl transferase biotinylated UTP nick end labeling (TUNEL) positive cells in the penumbral cerebral cortex was assessed 4 weeks after the MCAO. Also, rats were pretreated with isoflurane or desflurane for 30 min.
Their cerebral cortices were harvested for quantifying B-cell lymphoma-2 (Bcl-2) expression 24 h later. Here, we showed that pretreatment with 1.1% or 2.2% isoflurane, but not with 6% or 12% desflurane, increased Bcl-2 expression in the cerebral cortex, improved neurological functions and reduced infarct volumes evaluated at 24 h after the MCAO. Isoflurane preconditioning also improved neurological functions and reduced brain infarct volumes in rats evaluated 4 weeks after the MCAO. Isoflurane preconditioning also decreased the density of TUNEL-positive cells in the penumbral cerebral cortex. We conclude that isoflurane preconditioning improves short-term and long-term neurological outcome and reduces delayed cell death after transient focal brain ischemia in adult rats. Bcl-2 may be involved in the isoflurane preconditioning effect. Desflurane pretreatment did not induce a delayed phase of neuroprotection. abstract_id: PUBMED:8837805 Procedural and strain-related variables significantly affect outcome in a murine model of focal cerebral ischemia. The recent availability of transgenic mice has led to a burgeoning number of reports describing the effects of specific gene products on the pathophysiology of stroke. Although focal cerebral ischemia models in rats have been well described, descriptions of a murine model of middle cerebral artery occlusion are scant and sources of potential experimental variability remain undefined. We hypothesized that slight technical modifications would produce widely discrepant results in a murine model of stroke and that controlling surgical and procedural conditions could lead to reproducible physiological and anatomic stroke outcomes. To test this hypothesis, we established a murine model that would permit either permanent or transient focal cerebral ischemia by intraluminal occlusion of the middle cerebral artery. This study provides a detailed description of the surgical technique and reveals important differences among strains commonly used in the production of transgenic mice. In addition to strain-related differences, infarct volume, neurological outcome, and cerebral blood flow appear to be importantly affected by temperature during the ischemic and postischemic periods, mouse size, and the size of the suture that obstructs the vascular lumen. When these variables were kept constant, there was remarkable uniformity of stroke outcome. These data emphasize the protective effects of hypothermia in stroke and might help to standardize techniques among different laboratories to provide a cohesive framework for evaluating the results of future studies in transgenic animals. abstract_id: PUBMED:28821279 Genetic deletion of mGlu2 metabotropic glutamate receptors improves the short-term outcome of cerebral transient focal ischemia. We have recently shown that pharmacological blockade of mGlu2 metabotropic glutamate receptors protects vulnerable neurons in the 4-vessel occlusion model of transient global ischemia, whereas receptor activation amplifies neuronal death. This raised the possibility that endogenous activation of mGlu2 receptors contributes to the pathophysiology of ischemic neuronal damage. Here, we examined this possibility using two models of transient focal ischemia: (i) the monofilament model of middle cerebral artery occlusion (MCAO) in mice, and (ii) the model based on intracerebral infusion of endothelin-1 (Et-1) in rats. 
Following transient MCAO, mGlu2 receptor knockout mice showed a significant reduction in infarct volume and an improved short-term behavioural outcome, as assessed by a neurological disability scale and the "grip test". Following Et-1 infusion, Grm2 gene mutated Hannover Wistar rats lacking mGlu2 receptors did not show changes in the overall infarct volume as compared to their wild-type counterparts, although they showed a reduced infarct area in the agranular insular cortex. Interestingly, however, mGlu2 receptor-deficient rats performed better than wild-type rats in the adhesive tape test, in which these rats did not show the laterality preference typically observed after focal ischemia. These findings support the hypothesis that activation of mGlu2 receptors is detrimental in the post-ischemic phase, and support the use of mGlu2 receptor antagonists in the experimental treatment of brain ischemia. abstract_id: PUBMED:16114554 Comparison of pentobarbital and propofol on the outcome of focal cerebral ischemia model in rats. Objective: To compare the effects of pentobarbital and propofol on the outcome of focal cerebral ischemia model, and to evaluate the suitability of propofol in setting the focal cerebral ischemia. Methods: Thirty male Sprague-Dawley rats were subjected to middle cerebral artery occlusion (MCAO) under pentobarbital or propofol intraperitoneal anesthesia (n=15 each). A modified Bederson's scoring system was adopted to assess neurological scoring at 4 h after the MCAO. TTC staining was used to determine the infarct size at 24 h after the MCAO. At day 3 after the MCAO, TUNEL and toluidine blue staining were performed to measure the density of apoptotic cells and surviving neurons in the penumbra. Results: Neither neurological scoring (1.46 ± 0.98 vs 1.29 ± 0.72), infarct size [(37.8 ± 4.95)% vs (31.1 ± 5.09)%] nor neuronal density in the penumbra [(740 ± 24) neurons/mm2 vs (794 ± 23) neurons/mm2] was statistically different between pentobarbital and propofol groups (P > 0.05). But apoptotic cell density in the penumbra was significantly higher in the propofol group than that in the pentobarbital group [(356 ± 20) cells/mm2 vs (262 ± 17) cells/mm2, P < 0.05]. Conclusion: In the establishment of the focal ischemia model in rats, anesthesia with propofol or pentobarbital induces similar neurological scoring, infarct size and surviving neurons in the penumbra. Propofol anesthesia increases cell apoptosis in the penumbra compared with pentobarbital and its application might be an unsuitable anesthetic method in the model for evaluating the effect of procedures or drugs on cell apoptosis. Answer: Yes, spermidine levels in the blood have been found to be a predictor for neurological outcome and infarct size in focal cerebral ischemia. A study found that there was a significant increase in spermidine levels in the peripheral blood of patients with focal cerebral ischemia compared to control subjects. This increase was observed from the time of admission. Furthermore, the spermidine values showed a positive correlation with the clinical outcome at several time points within the first 48 hours, as well as with the infarct volume measured by cranial CT on days 4 to 6 after ischemia. These findings suggest that peripheral spermidine levels are closely associated with both the clinical outcome and the infarction volume, indicating that spermidine may be used as a novel predictor for the prognosis of patients with focal cerebral ischemia (PUBMED:11136912).
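The spermidine findings above (PUBMED:11136912) rest on Pearson correlations between blood spermidine and the NIH Stroke Scale score or the CT infarct volume (r = 0.90 to 0.40 and r = 0.91). The sketch below shows that computation in minimal form; the paired values are hypothetical placeholders, not data from the study, and significance testing and the repeated sampling over 28 days are omitted.

```python
# Minimal sketch of the correlation analysis behind PUBMED:11136912:
# Pearson's r between admission spermidine levels and CT infarct volume.
# The numbers below are hypothetical and serve only to show the computation.
from math import sqrt

spermidine = [4.1, 5.6, 3.2, 7.8, 6.0, 4.9, 8.3, 5.2]   # blood spermidine (hypothetical units)
infarct_ml = [12.0, 35.0, 8.0, 95.0, 48.0, 22.0, 110.0, 30.0]  # infarct volume in ml (hypothetical)

def pearson_r(x, y):
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

print(f"r = {pearson_r(spermidine, infarct_ml):.2f}")
# A value close to +1 (the study reports r = 0.91 for infarct volume) would
# support spermidine as a marker of infarct size in this kind of analysis.
```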
Instruction: Does breastfeeding protect against childhood overweight? Abstracts: abstract_id: PUBMED:30411971 Association of Exclusive Breastfeeding with Risk of Obesity in Childhood and Early Adulthood. Objective: To explore the effects of exclusive breastfeeding and its duration on the development of childhood and early adulthood obesity. Materials And Methods: A random sample of 5,125 dyad children and their mothers was extracted from a national database. With the use of a standardized questionnaire, telephone interviews were carried out for the collection of maternal lifestyle factors (e.g., breastfeeding). The body mass index was determined based on International Obesity Task Force criteria. Body weight and height of the offspring at the age of 8 was calculated from measurements derived from the national database, while the corresponding body measurements at early adulthood were self-reported. Results: Mothers who had breastfed or exclusively breastfed ≥6 months were 22.4% and 15.2%, respectively. Exclusive breastfeeding ≥6 months (versus never) was associated with a lower risk of overweight in childhood (8 years old; odds ratio [OR] = 0.89; 95% confidence interval [95% CI], 0.82-0.96) and adolescence/adulthood (15-25 years old; OR = 0.83; 95% CI, 0.68-0.97). Also, exclusive breastfeeding ≥6 months (versus never) was associated with a decreased risk of childhood and adolescence obesity by 30% (95% CI, 0.54-0.91) and 38% (95% CI, 0.40-0.83), respectively. Conclusions: Exclusive breastfeeding had a favorable influence on offspring's overweight and obesity not only in childhood but also in adolescence/adulthood. abstract_id: PUBMED:32464422 A Meta-Analysis of the Association Between Breastfeeding and Early Childhood Obesity. Problem: Several studies have indicated a protective effect of breastfeeding on reducing the risk of childhood obesity, however, this remains controversial. The aim of this meta-analysis is to clarify the association between breastfeeding and the risk of preschoolers' obesity. Eligibility Criteria: Prospective cohort studies published prior to December 1, 2019 were systematically searched in PubMed, EMBASE, the Web of Science and the Cochrane Library databases. Meta-analysis was performed using Stata 15.1. Sample: Twenty-six publications involving 332,297 participants were eligible for inclusion. Results: The pooled odds ratio (OR) of the risk of obesity in ever-breastfed preschoolers was 0.83 (95%CI [0.73,0.94]) compared with their never-breastfed counterparts. Random-effects dose-response model revealed a negative correlation between the duration of breastfeeding and risk of obesity (regression coefficient = -0.032, p = .001). Categorical analysis confirmed this dose-response association (1 day to <3 months of breastfeeding: OR = 1.07, 95%CI [0.94,1.21]; 3 months to <6 months: OR = 0.96, 95%CI [0.60,1.54]; ≥6 months: OR = 0.67, 95%CI [0.58,0.77]). One month of breastfeeding was associated with a 4.0% decrease in risk of obesity (OR = 0.96/month of breastfeeding, 95% CI [0.95, 0.97]). Under the reference of never breastfeeding, the summary OR of exclusive breastfeeding was 0.53 (95%CI [0.45,0.63]). Conclusions: Breastfeeding is inversely associated with a risk of early obesity in children aged two to six years. Moreover, there is a dose-response effect between duration of breastfeeding and reduced risk of early childhood obesity.
Implications: Clinical nurses' guidance and advice that prolong the duration of breastfeeding and promote exclusive breastfeeding are needed to prevent the development of later childhood obesity. abstract_id: PUBMED:32291564 Breastfeeding practices among childhood cancer survivors. Purpose: This cross-sectional study compared breastfeeding outcomes among childhood cancer survivors to those of women in the general population and evaluated whether breastfeeding is adversely affected by cancer treatment or endocrine-related late effects. Methods: A self-reported survey ascertained breastfeeding practices and incorporated items from the questionnaires used in the Infant Feeding Practices Study II (IFPS II) to allow comparison with the general population. Among 710 eligible survivors, 472 (66%) responded. The participants were predominantly non-Hispanic White (84%), married (73%), and had some college or less (60%). The mean maternal age at the time of birth of the first child after cancer treatment was 24 years (SD 24.3 ± 4.8). Results: Fewer survivors planned to breastfeed than did IFPS II controls (67% vs. 82%, P < .0001), and fewer survivors initiated breastfeeding (66% vs. 85%, P < .0001). The median breastfeeding duration was shorter among survivors, with early undesired weaning occurring sooner in the survivor group (1.4 months, interquartile range (IQR) 0.5-3.5 months) than in the IFPS II group (2.7 months, IQR 0.9-5.4 months). A higher proportion of survivors reported an unfavorable breastfeeding experience (19% vs. 7.5%, P < .0001) and early, undesired weaning (57.5%, 95% CI 51-64) than did IFPS II participants (45.2%, 95% CI 44-47, P = .0164). Among survivors who expressed intention and chose to breastfeed, 46% endorsed disrupted lactation related to physiologic problems with high risk in those overweight/obese. Conclusions: Survivors are at risk of negative breastfeeding experiences; however, lactation outcomes were not significantly associated with cancer diagnosis, treatments, or endocrine complications. Implications For Cancer Survivors: Prior research has not examined the association of cancer treatments and clinically validated late effects with lactation outcomes in a clinically diverse childhood cancer survivor cohort. Findings from this study suggest that childhood cancer survivors, especially those who are overweight/obese, are at risk of having negative breastfeeding experiences. Early undesired weaning, physiologic problems related to lactation and misconceptions about breastfeeding, especially fears of passing on cancer through breastmilk, highlight the need for counseling and specialized support to optimize lactation outcomes in this vulnerable population. abstract_id: PUBMED:34669515 Breastfeeding Associations with Childhood Obesity and Body Composition: Findings from a Racially Diverse Maternal-Child Cohort. Background: Studies suggest breastfeeding lowers obesity risk in childhood, but generalizability of existing evidence is limited. We examined associations of breastfeeding with childhood overweight, obesity, and percentage body fat, in a racially diverse maternal-child cohort. Methods: This cross-sectional study included 823 children, ages 4-8 years, enrolled in the Environmental Exposures and Child Health Outcomes (ECHO) cohort, a subset of the National Institute of Child Health and Human Development Fetal Growth Studies cohort.
Logistic regression was used to estimate odds ratios and 95% confidence intervals (CIs) for overweight [BMI (kg/m2) 85th to <95th percentile] and obesity (BMI ≥95th percentile) in relation to breastfeeding including duration of exclusive and total breastfeeding. Linear regression was used to evaluate association between breastfeeding and percentage body fat measured by bioelectrical impedance analysis. Results: Fifty-two percent of children were male, 32% non-Hispanic Black, 29% Hispanic, 27% non-Hispanic White, and 13% Asian; 16% were overweight and 13% obese. Six months of exclusive breastfeeding, compared with no breastfeeding, was associated with 60% lower odds of obesity (95% CI 0.18-0.91) adjusting for age, gender, race, socioeconomic status, maternal BMI, and child's activity. Percentage body fat was inversely associated with breastfeeding duration. For none, <6, and ≥6 months of exclusive breastfeeding, adjusted mean percentage body fat was 16.8, 14.5, and 13.4, respectively. Results did not differ by gender, race/ethnicity, or maternal BMI status. Conclusions: Exclusive breastfeeding for the first 6 months of life is inversely and significantly associated with obesity and percentage body fat at ages 4-8 years. These findings support current breastfeeding guidelines. abstract_id: PUBMED:22690194 Lessons learned from the implementation of a provincial breastfeeding policy in Nova Scotia, Canada and the implications for childhood obesity prevention. Healthy public policy plays a central role in creating environments that are supportive of health. Breastfeeding, widely supported as the optimal mode for infant feeding, is a critical factor in promoting infant health. In 2005, the Canadian province of Nova Scotia introduced a provincial breastfeeding policy. This paper describes the process and outcomes of an evaluation into the implementation of the policy. This evaluation comprised focus groups held with members of provincial and district level breastfeeding committees who were tasked with promoting, protecting and supporting breastfeeding in their districts. Five key themes were identified, which were an unsupportive culture of breastfeeding; the need for strong leadership; the challenges in engaging physicians in dialogue around breastfeeding; lack of understanding around the International Code of Marketing of Breast-milk Substitutes; and breastfeeding as a way to address childhood obesity. Recommendations for other jurisdictions include the need for a policy, the value of leadership, the need to integrate policy with other initiatives across sectors and the importance of coordination and support at multiple levels. Finally, promotion of breastfeeding offers a population-based strategy for addressing the childhood obesity epidemic and should form a core component of any broader strategies or policies for childhood obesity prevention. abstract_id: PUBMED:32843859 Association Between Breastfeeding and Obesity in Preschool Children. Introduction: Childhood obesity is a significant problem nowadays, with breastfeeding being one of many factors responsible for this issue. Breastfeeding as a natural way of feeding infants has many benefits for the child, the mother, and society. Aim: The present study aimed to investigate the association between overweight children in preschool age and breastfeeding duration. Methods: The current study included 674 preschool children aged 2-5 who attended various municipal kindergartens in South Athens.
Questionnaires were given to parents, in which they recorded the child's personal and body data, parenting, and questions about pregnancy and lactation. The effect of BMI on the duration of breastfeeding in children was examined by the chi-square independence test. Fisher's exact test and Monte Carlo simulations were also used. For data processing, BMI Z scores and percentiles for the first through fifth years of the child were found, and based on these values the following categorization was performed: values below -2 as low weight, values from -2 to 1 as normal weight, from 2 to 3 as overweight, and over 3 as obese. The corresponding categorization was based on the 3rd, 85th, 97th, and 99.9th percentile positions. Results: The percentage of children of preschool age who had been breastfed for over six months and had normal weight was higher than that of those breastfed for less than six months. Moreover, the proportion of children who were low weight, overweight and obese was lower in children who had been breastfed more than six months compared to those who were breastfed for a shorter period. Additionally, a statistically significant difference was found for the effect of breastfeeding on childhood obesity in children aged 2 to 5 years. Conclusion: There is a statistical association between breastfeeding duration and body weight in preschool age. Breastfeeding for more than six months has a positive impact on the child's weight. abstract_id: PUBMED:26956226 Maternal obesity, gestational diabetes, breastfeeding and childhood overweight at age 2 years. Background: Maternal obesity, excessive gestational weight gain (EGWG), gestational diabetes mellitus (GDM) and breastfeeding are four important factors associated with childhood obesity. Objectives: The objective of the study was to assess the interplay among these four factors and their independent contributions to childhood overweight in a cohort with standard clinical care. Methods: The cohort included 15 710 mother-offspring pairs delivered in 2011. Logistic regression was used to assess associations between maternal exposures and childhood overweight (body mass index >85th percentile) at age 2 years. Results: Mothers with pre-pregnancy obesity or overweight were more likely to have EGWG, GDM and less likely to breastfeed ≥6 months. Mothers with GDM had 40-49% lower EGWG rates and similar breastfeeding rates compared with mothers without GDM. Analysis adjusted for exposures and covariates revealed an adjusted odds ratio (95% confidence interval) associated with childhood overweight at age 2 years of 2.34 (2.09-2.62), 1.50 (1.34-1.68), 1.23 (1.12-1.35), 0.95 (0.83-1.10) and 0.76 (0.69-0.83) for maternal obesity, overweight, EGWG, GDM and breastfeeding ≥6 months vs. <6 months, respectively. Conclusions: In this large clinical cohort, GDM was not associated with, but maternal pre-pregnancy obesity or overweight and EGWG were independently associated with an increased risk, and breastfeeding ≥6 months was associated with a decreased risk of childhood overweight at age 2 years. abstract_id: PUBMED:27404060 The Impact of Breastfeeding on Early Childhood Obesity: Evidence From the National Survey of Children's Health. Purpose: To investigate how breastfeeding initiation and duration affect the likelihood of being overweight and obese in children aged 2 to 5. Design: Cross-sectional data from the 2003 National Survey of Children's Health. Setting: Rural and urban areas of the United States.
Subjects: Households where at least one member was between the ages of 2 and 5 (sample size 8207). Measures: Parent-reported body mass index, breastfeeding initiation and duration, covariates (gender, family income and education, ethnicity, child care attendance, maternal health and physical activity, residential area). Analysis: Partial proportional odds models. Results: In early childhood, breastfed children had 5.3% higher probability of being normal weight (p = .002) and 8.9% (p < .001) lower probability of being obese compared to children who had never been breastfed. Children who had been breastfed for less than 3 months had 3.1% lower probability of being normal weight (p = .013) and 4.7% higher probability of being obese (p = .013) with respect to children who had been breastfed for 3 months and above. Conclusion: Study findings suggest that length of breastfeeding, whether exclusive or not, may be associated with lower risk of obesity in early childhood. However, caution is needed in generalizing results because of the limitations of the analysis. Based on findings from this study and others, breastfeeding promotion policies can cite the potential protective effect that breastfeeding has on weight in early childhood. abstract_id: PUBMED:26627216 Does Breastfeeding Protect Against Childhood Obesity? Moving Beyond Observational Evidence. Human milk is the optimal feeding choice for infants, as it dynamically provides the nutrients, immunity support, and other bioactive factors needed for infants at specific stages during development. Observational studies and several meta-analyses have suggested that breastfeeding is protective against development of obesity in childhood and beyond. However, these findings are not without significant controversy. This review includes an overview of observational findings to date, then focuses on three specific pathways that connect human milk and infant physiology: maternal obesity, microbiome development in the infant, and the development of taste preference and diet quality. Each of these pathways involves complex interactions between mother and infant, includes both biologic and non-biologic factors, and may have both direct and indirect effects on obesity risk in the offspring. This type of integrated approach to examining breastfeeding and childhood obesity is necessary to advance research in this area beyond observational findings. abstract_id: PUBMED:30412742 The impact of maternal BMI, gestational weight gain, and breastfeeding on early childhood weight: Analysis of a statewide WIC dataset. Early childhood obesity is a persistent health concern with more frequent and significant impact on low-income families. Maternal weight factors impact offspring weight status, but evidence on whether breastfeeding protects against this impact is mixed. This analysis examined a model to predict early childhood obesity risk, simultaneously accounting for maternal pre-pregnancy body mass index (BMI), gestational weight gain, and breastfeeding. The team analyzed 27,016 unique maternal-child dyadic records collected via the Supplemental Nutrition Program for Wisconsin Women, Infants, and Children (WIC) between 2009 and 2011. Generalized Linear Modeling, specifically logistic regression, was used to predict a child's risk of obesity given the mother's pre-pregnancy BMI, gestational weight gain, and duration of breastfeeding.
For each 1 kg/m2 increase in pre-pregnancy BMI, there was a 4.5% increase in risk of obesity compared to children with mothers of normal BMI. Children whose mothers had excessive gestational weight gain were 50% more likely to have obesity compared to those whose mothers had ideal weight gain. For each week of additional breastfeeding, there was a 1.9% increased risk of obesity. The risk models did not differ by race. In this model, accounting for pre-pregnancy weight, gestational weight gain, and breastfeeding among a diverse, low-income sample, women with pre-pregnancy overweight and obesity or who had excessive gestational weight gain had the highest risk of early childhood obesity. While breastfeeding is healthy for many reasons, providers should focus on maternal weight-related behaviors when counseling mothers about how to avoid risk of early childhood obesity. Answer: Yes, breastfeeding appears to protect against childhood overweight. Several studies have found an inverse association between breastfeeding and the risk of early obesity in children. Exclusive breastfeeding for six months or longer has been associated with a lower risk of overweight in childhood and adolescence/adulthood (PUBMED:30411971). A meta-analysis also confirmed that breastfeeding is inversely associated with the risk of early obesity in children aged two to six years, with a dose-response effect indicating that longer durations of breastfeeding are associated with a reduced risk of early childhood obesity (PUBMED:32464422). Another study found that six months of exclusive breastfeeding, compared with no breastfeeding, was associated with 60% lower odds of obesity in children aged 4-8 years (PUBMED:34669515). Additionally, breastfeeding for more than six months has been shown to have a positive impact on the child's weight in preschool age (PUBMED:32843859). Furthermore, breastfeeding for at least six months was associated with a decreased risk of childhood overweight at age 2 years (PUBMED:26956226). These findings support current breastfeeding guidelines and suggest that breastfeeding, whether exclusive or not, may be associated with a lower risk of obesity in early childhood (PUBMED:27404060). However, it is important to note that while breastfeeding has many benefits, it should be considered as part of a broader strategy for childhood obesity prevention (PUBMED:22690194).
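Several of the effect sizes quoted in this block are per-unit odds ratios from logistic or dose-response models. Under the usual log-linear reading (a constant multiplicative OR per unit of exposure), they can be translated into ORs for longer exposures; the sketch below does this with the per-month breastfeeding OR from PUBMED:32464422 and the per-kg/m2 maternal BMI estimate from PUBMED:30412742. It illustrates how such coefficients are conventionally interpreted and is not a re-analysis of the studies.

```python
# Translating per-unit odds ratios into ORs for longer exposures,
# assuming a log-linear (logistic) dose-response.
from math import exp

# Meta-analysis (PUBMED:32464422): OR of roughly 0.96 per month of breastfeeding.
or_per_month = 0.96
for months in (3, 6, 12):
    print(f"{months:>2} months of breastfeeding -> cumulative OR ~ {or_per_month ** months:.2f}")
# Six months gives ~0.78 under this reading, in the same direction as the
# categorical estimate of 0.67 reported for >=6 months of breastfeeding.

# The same abstract reports a dose-response regression coefficient of -0.032
# per month on the log-odds scale, which maps to an OR of exp(beta) per month:
beta = -0.032
print(f"exp({beta}) = {exp(beta):.3f} per month of breastfeeding")

# WIC analysis (PUBMED:30412742): 4.5% higher odds of early childhood obesity
# per 1 kg/m^2 of maternal pre-pregnancy BMI; compounded over a 5 kg/m^2 gap:
or_per_bmi_unit = 1.045
print(f"5 kg/m^2 higher maternal BMI -> OR ~ {or_per_bmi_unit ** 5:.2f}")
```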
Instruction: Is smoking a communicable disease? Abstracts: abstract_id: PUBMED:28651390 Cross-sectional survey of smoking and smoking cessation behaviors in adults in Jiangxi province, 2013. Objective: To describe the prevalence of smoking and smoking cessation in adults of Jiangxi province in 2013. Methods: A multi-stage stratified cluster random sampling method was used to select 6 000 individuals aged ≥18 years from 10 chronic and non-communicable disease and risk factor surveillance points of Jiangxi province in 2013. A face-to-face questionnaire survey was carried out to collect information about the risk factors for chronic and non-communicable diseases, and 5 997 records were used in the final analysis of smoking and smoking cessation. The sample was weighted to represent the adult population of Jiangxi province. The prevalence in different groups was analyzed. Results: The prevalence of current smoking of the sample was 21.53% (1 291/5 997). After complex weighting, the prevalence of smoking was 26.07% in adults in Jiangxi (95%CI: 23.48%-28.66%), and it was much higher in men (50.62%, 95%CI: 46.31%-54.94%) than in women (1.46%, 95%CI: 0.57%-2.35%); the difference was statistically significant (P<0.05). The differences in smoking prevalence were significant among different age groups (P=0.029), and the smoking prevalence increased with educational level but decreased with worsening self-reported health. Most current smokers smoked every day (87.16%, 95%CI: 83.29%-91.03%), and an average of 19.27 (95%CI: 17.69-20.85) cigarettes were smoked daily. The proportion of smokers with average daily consumption ≥20 cigarettes was 64.74% (95%CI: 55.79%-73.70%). The smokers' average age of starting daily smoking was 20.28 (95%CI: 19.74-20.82) years old, which was lower in men [20.11 (95%CI: 19.61-20.61) years old] than in women [26.88 (95%CI: 24.73-29.03) years old]; the difference was statistically significant (P<0.05). Among male smokers, 27.04% (95%CI: 18.91%-35.16%) were less than 18 years old when they started daily smoking, and the proportion was 17.46% (95%CI: 0%-37.71%) in female smokers. The smoking cessation rate was 14.80% (95%CI: 10.88%-18.72%) and increased with age, income level, and worsening self-reported health. The successful smoking cessation rate was 10.89% (95%CI: 8.36%-13.42%). Only 32.10% (95%CI: 21.95%-42.25%) of current smokers attempted to quit smoking. The prevalence of passive smoking was 54.71% (95%CI: 44.20%-65.21%). Conclusion: The prevalence of smoking was high in adults in Jiangxi and the proportion of heavy smokers was large. Few smokers had quit smoking, and the proportion of current smokers attempting to quit was small. Males and adolescent smokers are targeted populations for tobacco control, and special strategies should be adopted according to the characteristics of the smoking population in Jiangxi. abstract_id: PUBMED:35777492 The association of smoking and smoking cessation with prevalent and incident symptoms of depression, anxiety, and sleep disturbance in the general population. Background: Smoking is a well-established risk factor for chronic non-communicable diseases. However, the relationship between cigarette smoking and the risk of developing mental health conditions remains largely elusive. This study examined the relationship between cigarette smoking as well as smoking cessation and prevalent and incident symptoms of depression, anxiety, and sleep disturbance in the general population.
Methods: In a cohort of 15,010 individuals from the Gutenberg Health Study (aged 35-74 years at enrollment), prevalent (at baseline from 2007 to 2012) and incident symptoms (at follow-up from 2012 to 2017) of depression, anxiety, and sleep disturbance were determined by validated questionnaires and/or medical records. Smoking status, pack-years of smoking in current and former smokers, and years since quitting smoking in former smokers were assessed by a standardized computer-assisted interview. Results: In multivariable logistic regression models with comprehensive adjustment for covariates, smoking status was independently associated with prevalent and incident symptoms of depression (Patient Health Questionnaire-9 ≥ 10), whereas this association was weaker for anxiety (Generalized Anxiety Disorder Scale-2 ≥ 3) and sleep disturbance (Patient Health Questionnaire-9 > 1). Among current and former smokers, smoking ≥30 or ≥10 pack-years, respectively, yielded in general the highest effect estimates. Smoking cessation was weakly associated with the prevalence and incidence of all outcomes; here, consistent associations were observed for prevalent symptoms of depression. Limitations: The observational nature of the study does not allow for causal inferences. Conclusions: The results of the present study suggest that cigarette smoking is positively and that smoking cessation is negatively associated with symptoms of common mental health conditions, in particular of depression. abstract_id: PUBMED:33671203 Changes in Smoking Behaviour and Home-Smoking Rules during the Initial COVID-19 Lockdown Period in Israel. The COVID-19 pandemic has caused devastating impacts globally. To mitigate virus spread, Israel imposed severe restrictions during March-April 2020. An online cross-sectional survey was conducted in April 2020 among current and ex-smokers to explore changes in smoking behaviour and home-smoking rules during this period. Bivariate analysis and multivariate logistic regression examined associations between sociodemographic characteristics and perceived risk of infection and quitting smoking during the initial COVID-19 period. Current smoking was reported by 437 (66.2%) of the 660 participants, 46 (7%) quit during the initial restriction period, and 177 (26.8%) were ex-smokers. Nearly half (44.4%) of current smokers intensified their smoking, and 16% attempted to quit. Quitting during the COVID-19 period was significantly associated with higher education (adjusted odds ratio (aOR): 1.97, 95% CI: 1.0-3.8), not living with a smoker (aOR: 2.18, 95% CI: 1.0-4.4), and having an underlying chronic condition that increases risk for COVID-19 complications (aOR: 2.32, 95% CI: 1.1-4.6). Both an increase in smoking behaviour and in attempts to quit smoking during the initial COVID-19 pandemic were evident in this sample of adult Israeli smokers. Governments need to use this opportunity to encourage smokers to attempt quitting and create smoke-free homes, especially during lockdown conditions, while providing mental and social support to all smokers. abstract_id: PUBMED:23113130 Cigarette smoking in Iran. Background: Cigarette smoking is the largest preventable cause of death worldwide. No systematic review is available on the situation of smoking in Iran, so we decided to provide an overview of the studies in the field of smoking in Iranian populations.
Methods: Published Persian-language papers of all types until 2009 indexed in the IranMedex (http://www.iranmedex.com) and Magiran (http://www.magiran.com). Reports of the World Health Organization were also searched and optionally employed. The studies concerning passive smoking or presenting the statistically insignificant side effects were excluded. Databases were searched using various combinations of the following terms: cigarette, smoking, smoking cessation, prevalence, history, side effects, and lung cancer by independent reviewers. All the 83 articles concerning the prevalence or side effects of the smoking habit in any Iranian population were selected. The prevalence rate of daily cigarette smoking and the 95% confidence interval as well as the smoking health risk associated odds ratio (OR) were retrieved from the articles or calculated. Results: The reported prevalence rates of the included studies, the summary of smoking-related side effects and the ORs (95%CI) of smoking associated risks and the available data on smoking cessation in Iran have been shown in the article. Conclusion: Because of lack of certain data, special studies on the local pattern of tobacco use in different districts, on the relationship between tobacco use and other diseases, especially non communicable diseases, and, besides extension of smoking cessation strategies, studies on the efficacy of these methods seem to be essential in this field. abstract_id: PUBMED:22883725 Cross-sectional survey on smoking and smoking cessation behaviors among Chinese adults in 2010. Objective: To describe the prevalence of smoking and smoking cessation in Chinese adults in 2010. Methods: A face-to-face questionnaire survey was carried out in 162 surveillance points to collect information on non-communicable diseases related risk factors. A multi-stage stratified cluster random sampling method was used to select 98 712 individuals aged 18 and over to be interviewed, and 98 526 records were included in the analysis of smoking and smoking cessation. The sample was weighted to represent the population of Chinese adults. Indicators such as current smoking and smoking cessation among different populations were calculated. Results: The current smoking rate of our sample was 26.4% (26 047/98 526). With complex weighting, the current smoking rate in Chinese adults aged 18 and above was 28.3% (95%CI: 27.2% - 29.4%), and it was much higher among men (53.3%, 95%CI: 51.4% - 55.2%) than among women (2.5%, 95%CI: 1.9% - 3.0%) (P < 0.05). Most male current smokers (88.3%, 95%CI: 87.3% - 89.3%) smoked every day, and average daily manufactured cigarette consumption of male adults was (17.8 ± 9.3) cigarettes. Only 14.8% (95%CI: 13.8% - 15.8%) of male ever smokers quit smoking and 10.7% (95%CI: 9.9% - 11.5%) quit smoking. Only 38.8% (95%CI: 36.9% - 40.8%) of male current smokers intended to quit smoking. For current smokers aged from 18 to 24, the proportion of those who intended to quit smoking was the highest (50.5%, 95%CI: 46.1% - 54.8%), but the proportion of those who quit smoking (7.1%, 95%CI: 5.2% - 8.9%) was the lowest compared with other age groups (P < 0.05). Conclusion: The prevalence of smoking in Chinese adults was high and only a few smokers quit smoking. The prevalence of smoking in Chinese male adults was still high. A fairly low proportion of male current smokers intended to quit smoking and an even lower proportion of them quit smoking successfully.
Background: Infectious diseases may rival cancer, heart disease, and chronic lung disease as sources of morbidity and mortality from smoking. We reviewed mechanisms by which smoking increases the risk of infection and the epidemiology of smoking-related infection, and delineated implications of this increased risk of infection among cigarette smokers. Methods: The MEDLINE database was searched for articles on the mechanisms and epidemiology of smoking-related infectious diseases. English-language articles and selected cross-references were included. Results: Mechanisms by which smoking increases the risk of infections include structural changes in the respiratory tract and a decrease in immune response. Cigarette smoking is a substantial risk factor for important bacterial and viral infections. For example, smokers incur a 2- to 4-fold increased risk of invasive pneumococcal disease. Influenza risk is severalfold higher and is much more severe in smokers than nonsmokers. Perhaps the greatest public health impact of smoking on infection is the increased risk of tuberculosis, a particular problem in underdeveloped countries where smoking rates are increasing rapidly. Conclusions: The clinical implications of our findings include emphasizing the importance of smoking cessation as part of the therapeutic plan for people with serious infectious diseases or periodontitis, and individuals who have positive results of tuberculin skin tests. Controlling exposure to secondhand cigarette smoke in children is important to reduce the risks of meningococcal disease and otitis media, and in adults to reduce the risk of influenza and meningococcal disease. Other recommendations include pneumococcal and influenza vaccine in all smokers and acyclovir treatment for varicella in smokers. abstract_id: PUBMED:34298890 Smoking and Neuropsychiatric Disease-Associations and Underlying Mechanisms. Despite extensive efforts to combat cigarette smoking/tobacco use, it still remains a leading cause of global morbidity and mortality, killing more than eight million people each year. While tobacco smoking is a major risk factor for non-communicable diseases related to the four main groups-cardiovascular disease, cancer, chronic lung disease, and diabetes-its impact on neuropsychiatric risk is rather elusive. The aim of this review article is to emphasize the importance of smoking as a potential risk factor for neuropsychiatric disease and to identify central pathophysiological mechanisms that may contribute to this relationship. There is strong evidence from epidemiological and experimental studies indicating that smoking may increase the risk of various neuropsychiatric diseases, such as dementia/cognitive decline, schizophrenia/psychosis, depression, anxiety disorder, and suicidal behavior induced by structural and functional alterations of the central nervous system, mainly centered on inflammatory and oxidative stress pathways. From a public health perspective, preventive measures and policies designed to counteract the global epidemic of smoking should necessarily include warnings and actions that address the risk of neuropsychiatric disease. abstract_id: PUBMED:25332457 Declining Prevalence of Tobacco Smoking in Vietnam. Introduction: To supplement limited information on tobacco use in Vietnam, data from a nationally-representative population-based survey was used to estimate the prevalence of smoking among 25-64 year-olds. 
Methods: This study included 14,706 participants (53.5% females, response proportion 64%) selected by multi-stage stratified cluster sampling. Information was collected using the World Health Organization STEPwise approach to surveillance of risk factors for non-communicable disease (STEPS) questionnaire. Smoking prevalence was estimated with stratification by age, calendar year, and birth year. Results: Prevalence of ever-smoking was 74.9% (men) and 2.6% (women). Male ever-smokers commenced smoking at median age of 19.0 (interquartile range [IQR]: 17.0, 21.0) years and smoked median quantities of 10.0 (IQR: 7.0, 20.0) cigarettes/day. Female ever-smokers commenced smoking at median age of 20.0 (IQR: 18.0, 26.0) years and smoked median quantities of 6.0 (IQR: 4.0, 10.0) cigarettes/day. Prevalence has decreased in recent cohorts of men (p = .001), and its inverse association with years of education (p < .001) has strengthened for those born after 1969 (interaction p < .001). At 60 years of age, 53.0% of men who had reached that age were current smokers and they had accumulated median exposures of 39.0 (IQR: 32.0, 42.0) years of smoking and 21.0 (IQR: 11.5, 36.0) pack-years of cigarettes. The proportion of ever-smokers has decreased consistently among successive cohorts of women (p < .001). Conclusions: Smoking prevalence is declining in recent cohorts of men, and continues to decline in successive cohorts of women, possibly in response to anti-tobacco initiatives commencing in the 1990s. Low proportions of quitters mean that Vietnamese smokers accumulate high exposures despite moderate quantities of cigarettes smoked per day. abstract_id: PUBMED:35701057 Systematic review of changed smoking behaviour, smoking cessation and psychological states of smokers according to cigarette type during the COVID-19 pandemic. Objectives: Although the global COVID-19 pandemic has increased interest in research involving high-risk smokers, studies examining changed smoking behaviours, cessation intentions and associated psychological states among smokers are still scarce. This study aimed to systematically review the literature related to this subject. Design: A systematic review of published articles on cigarettes and COVID-19-related topics. Data Sources: Our search was conducted in January 2021. We used the keywords COVID-19, cigarettes, electronic cigarettes (e-cigarettes) and psychological factors in PubMed and ScienceDirect and found papers published between January and December 2020. Data Selection: We included articles in full text, written in English, and that surveyed adults. The topics included smoking behaviour, smoking cessation, psychological state of smokers and COVID-19-related topics. Data Extraction And Synthesis: Papers of low quality, based on quality assessment, were excluded. Thirteen papers were related to smoking behaviour, nine papers were related to smoking cessation and four papers were related to psychological states of smokers. Results: Owing to the COVID-19 lockdown, cigarette users were habituated to purchasing large quantities of cigarettes in advance. Additionally, cigarette-only users increased their attempts and willingness to quit smoking, compared with e-cigarette-only users. Conclusions: Owing to the COVID-19 outbreak, the intention to quit smoking was different among smokers, according to cigarette type (cigarette-only users, e-cigarette-only users and dual users).
With the ongoing COVID-19 pandemic, policies and campaigns to increase smoking cessation intentions and attempts to quit smoking among smokers at high risk of COVID-19 should be implemented. Additionally, e-cigarette-only users with poor health-seeking behaviour require interventions to increase the intention to quit smoking. abstract_id: PUBMED:29903255 Situation of dissuading smoking and related factors of employees of public places Objective: To investigate the situation of dissuading smoking and related factors among the employees of public places. Methods: Using a purposive sampling method, six national non-communicable disease control and prevention demonstration areas, including Chongan, Rushan, Hanyang, Shouyang, Beibei and Huzhu District (County), were selected as study sites from different geographic locations of eastern, central or western China and from urban or rural resident places. Staff from three types of public places (medical care settings, government agencies and restaurants) were interviewed face to face. Information about demographics, smoking and secondhand smoke exposure, smoking dissuading, and smoking signs and smoking policies of the selected public places was collected. In total, 1629 qualified questionnaires were collected from staff of medical care settings (1029), government agencies (187) and restaurants (413). Results: Of all the participants, 9.39% were current smokers, and 64.97% reported exposure to secondhand smoke in the past 30 days. Among the 1629 participants, 75.57% reported they would dissuade the smokers in their workplace, while 14.30% and 10.13% of the participants indicated they would avoid the smokers or were indifferent, respectively. Univariate analysis showed that public place category, gender, age, education level, current smoking status and smoking signs were statistically significantly associated with smoking dissuading (P < 0.05), while geographic location, place of residence, income level and knowledge score were not (P > 0.05). Logistic regression analysis showed that, compared with workers from medical care settings, workers from government agencies were less likely to dissuade (OR = 0.212, 95% CI 0.150-0.301), while workers from restaurants were more likely to dissuade (OR = 1.842, 95% CI 1.117-3.038). Staff aged 35 years or above were more likely to dissuade (OR = 1.474, 95% CI 1.142-1.903). Workers who were current smokers were less likely to dissuade than nonsmokers (OR = 0.465, 95% CI 0.309-0.700). Workers of public places with smoke-free policies were more likely to dissuade than those from public places without (OR = 1.649, 95% CI 1.160-2.344). Conclusion: Most of the staff of public places reported they would dissuade smokers. Being a nonsmoker, being aged 35 years or above, and working in a public place with a smoke-free policy were positive factors for smoking dissuading. Answer: No, smoking is not a communicable disease. Smoking is a behavior and a risk factor for many chronic non-communicable diseases, such as cardiovascular disease, cancer, chronic lung disease, and diabetes (PUBMED:34298890). Communicable diseases, on the other hand, are illnesses caused by infectious agents or their toxins that can be transmitted from one individual to another, either directly or indirectly. Smoking can increase the risk of infections by causing structural changes in the respiratory tract and decreasing immune response, making smokers more susceptible to infectious diseases (PUBMED:15534156).
However, the act of smoking itself is not an infectious process and cannot be spread from person to person like a communicable disease.
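The smoking record above reports effect sizes as odds ratios with 95% confidence intervals (for example, OR = 0.212, 95% CI 0.150-0.301 for government-agency workers) and as fold-increases in infection risk. As a purely illustrative sketch, the Python snippet below shows how an odds ratio and its Wald 95% confidence interval are obtained from a 2x2 table; the counts are hypothetical and are not taken from any of the cited studies.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table.

    a: exposed cases,   b: exposed non-cases
    c: unexposed cases, d: unexposed non-cases
    (hypothetical counts; not taken from any cited study)
    """
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    low = math.exp(math.log(or_) - z * se_log_or)
    high = math.exp(math.log(or_) + z * se_log_or)
    return or_, low, high

# Hypothetical example: infection in 40/200 smokers vs 15/200 non-smokers.
or_, low, high = odds_ratio_ci(a=40, b=160, c=15, d=185)
print(f"OR = {or_:.2f}, 95% CI {low:.2f}-{high:.2f}")
```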
Instruction: Is having sex with other men a risk factor for transfusion-transmissible infections in male blood donors in Western countries? Abstracts: abstract_id: PUBMED:37322809 Men who have sex with men and risk for transfusion-transmissible infections in blood donors in Western countries: A systematic review update. Background And Objectives: This systematic review update summarizes evidence concerning transfusion-transmissible infections (TTIs) in male blood donors reporting sex with another man (MSM) or after easing the MSM deferral period. Materials And Methods: We searched five databases, including studies comparing MSM versus non-MSM donors (Type I), MSM deferral periods (Type II) or infected versus non-infected donors (Type III) in Western countries, and used GRADE to determine evidence certainty. Results: Twenty-five observational studies were included. Four Type I studies suggest that there may be an increased risk for overall TTIs, human immunodeficiency virus (HIV), hepatitis B virus (HBV) and syphilis in MSM donors, but the evidence is very uncertain. There was insufficient evidence of MSM with low-risk sexual behaviour. A Type II study indicates that easing the MSM deferral period to 1 year may have little to no effect on TTI risk. TTI prevalence in blood donors under 5-year, 1-year, 3-month or risk-based deferral in eight other Type II studies was too low to provide clear conclusions on the effect of easing the deferral. Three Type III studies reported that MSM may be a risk factor for HIV. Increased risk of HBV, hepatitis C virus and HTLV-I/II could not be shown. The evidence from Type III studies is very uncertain. Conclusion: There may be an increased risk of HIV in MSM blood donors. Shortening the deferral from permanent to 1 year may have little to no effect on TTI risk. However, there is limited, unclear evidence from observational studies concerning the impact of introducing 3-month or risk-based deferrals. abstract_id: PUBMED:25875812 Is having sex with other men a risk factor for transfusion-transmissible infections in male blood donors in Western countries? A systematic review. Background: Although increased prevalence of transfusion transmissible infections (TTI) among "men who have sex with men" (MSM) has been well documented, the exclusion of MSM as blood donors is contested. The aim of this systematic review is to find studies that describe the risk of TTI in MSM blood donors. Methods: We searched MEDLINE, Embase, The Cochrane Central Register of Controlled Trials, Cinahl, and Web of Science, and used GRADE for determining evidence quality. We included studies comparing MSM and non-MSM blood donors (or people eligible to give blood), living in areas most relevant for our Blood Service. Results: Out of 18 987 articles, 14 observational studies were included. Two studies directly compared MSM with non-MSM donors showing that MSM donors have a statistically significant higher risk of HIV-1 infections. In one of these studies it was shown that this was related to recent (&lt; 12 months) MSM contact. In two additional studies no evidence was shown in favour of a certain deferral period for MSM. Ten studies, applying permanent deferral for MSM, compared infected versus non-infected donors. One study found that MSM is a statistically significant risk factor for HIV-1 infection in blood donors. 
For other TTI such as HBV or HCV, an increased risk of infection could not be demonstrated, because the precision of the results was affected by the low numbers of donors with MSM as a risk factor, or because of risk of bias in the included studies. All included studies provided only low-level evidence, because of risk of bias and imprecision of the results. Conclusions: High-quality studies investigating the risk of TTI in MSM who donate blood are scarce. The available evidence suggests a link between MSM blood donors and HIV-1 infection, but is too limited to unambiguously recommend a particular deferral policy. abstract_id: PUBMED:31823386 Is sexual risk behaviour associated with an increased risk of transfusion-transmissible infections in blood donors from Western and Pacific countries? A systematic review and meta-analysis. Background And Objectives: The donor medical questionnaire is designed to aid blood establishments in supporting a safe blood supply. According to blood donor deferral policies, sexual risk behaviour (SRB) leads to a (temporary) deferral from blood donation. This systematic review aimed to scientifically underpin these policies by identifying the best available evidence on the association between SRB and the risk of transfusion transmissible infections (TTIs). Materials & Methods: Studies from three databases investigating the link between SRB (excluding men who have sex with men (MSM)) and TTIs (HBV, HCV, HIV, Treponema pallidum) in donors from Western and Pacific countries were obtained and assessed on eligibility by two reviewers independently. The association between SRB and TTIs was expressed by calculating pooled effect measures via meta-analyses. The GRADE methodology (Grades of Recommendation, Assessment, Development and Evaluation) was used to assess the quality of evidence. Results: We identified 3750 references and finally included 15 observational studies. Meta-analyses showed that there is a significant (P < 0.05) positive association between the following SRB and HBV and/or HCV infection: having sex with an intravenous drug user (high-certainty evidence), receiving money or goods for sex (moderate-high certainty evidence), having a sex partner with hepatitis/HIV (moderate-certainty evidence) and paying for sex or anal sex (low-certainty evidence). Conclusion: Sexual risk behaviour (including having sex with an intravenous drug user, receiving money or goods for sex or having a sex partner with hepatitis/HIV) is probably associated with an increased risk of HBV/HCV infection in blood donors from Western and Pacific countries. abstract_id: PUBMED:35377497 Reported non-compliance with pre-donation screening among blood donors in Québec, Canada: A focus on the 3-month deferral for men who have sex with men. Background And Objectives: In Québec (Canada), the donation deferral for men who have sex with men (MSM) has recently been shortened to 3 months. Whether this change impacted compliance with pre-donation screening is unknown. We assessed compliance with the disclosure of male-to-male sex and other behavioural risk factors for HIV amid this change. Materials And Methods: Québec residents who donated from 14 July 2020 to 30 November 2020 were invited to participate in an online survey. Donors were informed that the survey was optional and anonymous. Survey questions were those used for routine pre-donation screening. Rates of reported non-compliance were weighted based on several characteristics.
Results: Of 21,918 contacted donors, 7113 (32.45%) participated. Among male participants (N = 3347), six (0.27% [95% confidence interval (CI) = 0.09%-0.44%]) were not compliant with a 3-month MSM deferral. Among female participants (N = 3766), two (0.06% [95% CI = 0.00%-0.13%]) were not compliant with a 3-month deferral for sex with a man who had male-to-male sex ≤12 months. Other risk factors exhibited similar or lower rates of reported non-compliance. Conclusion: Reported non-compliance with a 3-month MSM deferral and the disclosure of other HIV behavioural risk factors was low. These results warrant the investigation of behavioural donor risk assessment approaches to further improve the inclusiveness of blood donation. abstract_id: PUBMED:31084766 Blood donation deferral policies among men who have sex with men in Brazil. Reevaluation of the deferral from voluntary blood donation by men who have sex with men (MSM) is being discussed in several countries, motivated by the need to ensure a blood supply free from transfusion-transmissible infections (e.g., HIV, syphilis). Policies being considered include: permanent exclusion for any male-male sexual encounter, temporary deferral (3 months, 12 months, 5 years) from the last encounter, or specifying behaviors that differentiate MSM at high risk from those at low risk. Current Brazilian regulations defer MSM from blood donation for 12-months after the last male-male sexual encounter. Broad epidemiological evidence indicates that many MSM are at increased risk for HIV in the present era, and few data exist to distinguish which men are likely to be in the immunological window for detection of these infections. A multicenter study developed in Brazil demonstrated that the history of male-male sex was the most strongly associated with being an HIV-positive blood donor. Meanwhile, the blanket deferral of MSM from blood donation has generated considerable controversy. Rejection of the deferral policies stems in part from perspectives defending human rights, promoting equality and citizenship, and alleging bias and discrimination. The objective of this report is to discuss the current situation of blood donation among MSM in Brazil. We highlight the lack of evidence for a true risk profile for male-male sex in the context of blood donation upon which to base sound policy. We recommend research to establish effective and acceptable criteria for blood donation by MSM and other blood donors. abstract_id: PUBMED:35502143 Balancing non-discriminatory donor selection and blood safety in the Netherlands: Evaluation of an individual risk assessment of sexual behavior. Background: To better balance the safety of the blood supply and the inclusion of men who have sex with men (MSM), further improvements are needed to the risk management strategy employed in the Netherlands to reduce transfusion-transmissible infections (TTIs). A gender-neutral individual risk assessment could provide a solution by determining donor eligibility based on sexual behaviors known to increase the risk of TTIs. Our objective is to estimate the proportion of blood donors that would be deferred by such an assessment, as well as their discomfort answering such questions. Study Design And Methods: Two surveys were distributed in May 2020 to assess sexual behavior in blood donors in the last 4, 6, and 12 months, as well as their discomfort reporting such information. A combination of both surveys measured the extent to which discomfort was associated with reporting sexual behavior. 
A high-risk sexual behavior pattern was defined as having had multiple sexual partners and having engaged in anal sex, without consistent condom use. Results: Of all 2177 participating whole blood donors, 0.8% report engaging in high-risk sexual behaviors over the last 4 months and would therefore be ineligible to donate. When accounting for the additional proportion of donors that reported such questions would stop them from donating, 2.0% and 3.2% of female and male donors, respectively, would be lost. Discussion: Gender-neutral eligibility criteria based on high-risk sexual behaviors may reduce the overall number of eligible donors in the Netherlands, but could make blood donation more accessible to a broader group of donors. abstract_id: PUBMED:30052873 Infection Pressure in Men Who Have Sex With Men and Their Suitability to Donate Blood. Background: Deferral of men who have sex with men (MSM) from blood donation is highly debated. We therefore investigated their suitability to donate blood. Methods: We compared the antibody prevalence of 10 sexually and transfusion-transmissible infections (TTIs) among 583 MSM and 583 age-matched repeat male blood donors. MSM were classified as low risk (lr) or medium-to-high risk (hr) based on self-reported sexual behavior and as qualified or unqualified using Dutch donor deferral criteria. Infection pressure (IP) was defined as the number of antibody-reactive infections, with class A infections (human immunodeficiency virus-1/2, hepatitis B virus, hepatitis C virus, human T-cell lymphotropic virus-1/2, syphilis) given double weight compared to class B infections (cytomegalovirus, herpes simplex virus-1/2, human herpesvirus 8, hepatitis E virus, parvovirus B19). Results: Donors had a lower median IP than qualified lr-MSM and qualified hr-MSM (2 [interquartile range {IQR}, 1-2] vs 3 [IQR, 2-4]; P &lt; .001). Low IP was found in 76% of donors, 39% of qualified lr-MSM, and 27% of qualified hr-MSM. The prevalence of class A infections did not differ between donors and qualified lr-MSM but was significantly higher in qualified hr-MSM and unqualified MSM. Recently acquired class A infections were detected in hr-MSM only. Compared to blood donors, human herpesviruses were more prevalent in all MSM groups (P &lt; .001). Conclusions: IP correlates with self-reported risk behavior among MSM. Although lr-MSM might form a low threat for blood safety with regard to class A infections, the high seroprevalence of human herpesviruses in lr-MSM warrants further investigation. abstract_id: PUBMED:36310509 Validation of new, gender-neutral questions on recent sexual behaviors among plasma donors and men who have sex with men. Background And Objectives: Several blood services might eventually interview donors with gender-neutral questions on sexual behaviors to improve the inclusivity of blood donation. We tested two ways (i.e., "scenarios") of asking donors about their recent sexual behaviors. Materials And Methods: The study comprised 126 regular source plasma donors and 102 gay, bisexual, and other men who have sex with men (gbMSM), including 73 cis-gbMSM (i.e., the "cis-gbMSM subgroup," which excluded nonbinary, genderqueer, and trans individuals). In Scenario 1, participants were asked if, in the last 3 months, they "have […] had a new sexual partner or more than one sexual partner." In Scenario 2, they were asked "Have you had a new sexual partner?" and "have you had more than one sexual partner?". 
Validation questions included more specific questions on the type of partners and sexual activity. Results: Among plasma donors, sensitivity was 100.0% for both scenarios; specificity was 100.0% and 99.1% for Scenarios 1 and 2, respectively. Among gbMSM, sensitivity was 74.5% and 82.9% for Scenarios 1 and 2, respectively; specificity was 100.0% for both scenarios. Among cis-gbMSM, sensitivity was 88.6% and 100.0% for Scenarios 1 and 2, respectively; specificity was 100.0% for both scenarios. The area under the receiver operating characteristic curve of Scenario 2 was significantly higher than that of Scenario 1 among gbMSM and in the cis-gbMSM subgroup (all p &lt; .05). Conclusion: Scenario 2 questions performed well among plasma donors and cis-gbMSM, but less so in the broader gbMSM population. abstract_id: PUBMED:26355711 Two decades of risk factors and transfusion-transmissible infections in Dutch blood donors. Background: Risk behavior-based donor selection procedures are widely used to mitigate the risk of transfusion-transmissible infections (TTIs), but their effectiveness is disputed in countries with low residual risks of TTIs. Study Design And Methods: In 1995 to 2014, Dutch blood donors infected with hepatitis B virus (HBV), hepatitis C virus (HCV), human immunodeficiency virus (HIV), human T-lymphotropic virus (HTLV), or syphilis were interviewed by trained medical counselors to identify risk factors associated with TTIs. Trends in the prevalence and incidence of TTIs were analyzed using binomial regression models. Results: A total of 972 new donors and 381 repeat donors had TTIs. New donors had higher rates of TTIs compared to repeat donors. Although the HBV and HCV prevalence gradually decreased over time, the incidence of all five TTIs remained stable during the past two decades. In new donors the TTIs had the following risk profiles: "blood-blood contact" for HCV, "unprotected sex" for HIV and syphilis, and "country of birth" for HBV and HTLV. In infected repeat donors, sexual risk factors predominated for all TTIs. At posttest counseling, 28% of infected repeat donors admitted to risk factors leading to permanent donor exclusion if revealed during the donor selection procedure (predominantly male-to-male sex and recent diagnosis of syphilis). Conclusion: The prevalence and incidence of TTIs among Dutch blood donors are six- to 60-fold lower than in the general Dutch population, illustrating the effectiveness of donor selection procedures. However, at least a quarter of infected donors appeared noncompliant to the donor health questionnaire (DHQ), suggesting that DHQs, or the way donor questioning is implemented, can be improved. abstract_id: PUBMED:23491864 Seroprevalence of transfusion transmissible infections among blood donors in western part of Turkey: a six-year study. Background: The most frequently encountered complication of blood transfusion is transfusion transmissible infections. Screening of transfusion transmissible infections are for safe blood transfusions, the results provide a crude idea about seropositivity rates of regions. Objective: The present study aimed to investigate distributions of transfusion transmissible infection seroprevalences in years and according to gender through medical records, and to define the regional data, retrospectively. Methods: 80.454 Blood donors, applied to our center between dates August 2004 and December 2010, were investigated by HBsAg, anti-HVC, anti-HIV ELISA (Abbott, AXSYM) and RPR methods. 
Results: Out of 80.454 donors, 7.321 (9.1%) were females, 73.133 (90.0%) were males. Age range of donors was 18-64 years (mean 41 years). While 61.950 (77%) of donors were voluntary, 18.504 (23%) were familial/replacement donors. 1.405 units of blood out of 80.454 were disposed, because one of infection parameters was positive. 45 units (3.2%) of disposed blood were from females, the rest belonged to male donors (1.360 units; 96.8%). HBsAg was positive in 1.054 donors (1.31%), whereas positivities of anti-HCV, anti-HIV and RPR were 312 (0.38%), 2 (0.002%) and 39 (0.04%), respectively. Conclusions: Seropositivity was determined in accordance with national data, but was at lower limits. Seropositivity rates in years differed, but neither regularly increases nor decrease was observed. When all positivities were investigated according to genders, positivity in HBsAg and VDRL tests were significantly high in male donors. Answer: Yes, having sex with other men is considered a risk factor for transfusion-transmissible infections (TTIs) in male blood donors in Western countries. Several systematic reviews and studies have found evidence suggesting that men who have sex with men (MSM) have a statistically significant higher risk of HIV-1 infections compared to non-MSM blood donors (PUBMED:25875812). This association has been well documented, and the exclusion of MSM as blood donors has been a topic of debate. One systematic review update indicated that there may be an increased risk for overall TTIs, HIV, hepatitis B virus (HBV), and syphilis in MSM donors, although the evidence is very uncertain. It also suggested that shortening the deferral period from permanent to 1 year for MSM may have little to no effect on TTI risk (PUBMED:37322809). Another study found that MSM is a statistically significant risk factor for HIV-1 infection in blood donors, but the evidence for other TTIs such as HBV or hepatitis C virus (HCV) was not conclusive due to low numbers of donors with MSM as a risk factor or risk of bias in the studies (PUBMED:25875812). Furthermore, a study in the Netherlands found that MSM with low-risk sexual behavior might form a low threat for blood safety with regard to class A infections (e.g., HIV, HBV, HCV), but the high seroprevalence of human herpesviruses in MSM warrants further investigation (PUBMED:30052873). Another study from the Netherlands showed that the prevalence and incidence of TTIs among Dutch blood donors are significantly lower than in the general population, which illustrates the effectiveness of donor selection procedures. However, noncompliance to the donor health questionnaire was observed in at least a quarter of infected donors, suggesting that improvements could be made in donor questioning (PUBMED:26355711). In summary, MSM are considered to be at an increased risk for certain TTIs, particularly HIV, which has led to deferral policies for MSM blood donors in many Western countries. However, the evidence is not entirely clear-cut, and there is ongoing debate and research into how to balance the safety of the blood supply with non-discriminatory practices (PUBMED:25875812; PUBMED:37322809; PUBMED:30052873; PUBMED:26355711).
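Several abstracts in this record are systematic reviews that pool effect measures across observational studies (PUBMED:25875812, PUBMED:31823386, PUBMED:37322809). The sketch below illustrates, under the assumption of a simple fixed-effect inverse-variance model, how study-level odds ratios could be pooled; the three study results are invented placeholders, not values extracted from the cited reviews.

```python
import math

def pooled_or_fixed_effect(studies, z=1.96):
    """Fixed-effect (inverse-variance) pooling of study-level odds ratios.

    studies: list of (odds_ratio, ci_low, ci_high) tuples; the standard error
    of each log-OR is recovered from its reported 95% confidence interval.
    """
    weights, weighted_logs = [], []
    for or_, lo, hi in studies:
        se = (math.log(hi) - math.log(lo)) / (2 * z)  # SE of the log-OR
        w = 1 / se ** 2                               # inverse-variance weight
        weights.append(w)
        weighted_logs.append(w * math.log(or_))
    pooled_log = sum(weighted_logs) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return (math.exp(pooled_log),
            math.exp(pooled_log - z * pooled_se),
            math.exp(pooled_log + z * pooled_se))

# Invented study-level ORs (placeholders, not from the cited reviews).
hypothetical_studies = [(2.1, 1.2, 3.7), (1.6, 0.9, 2.8), (2.9, 1.4, 6.0)]
pooled, lo, hi = pooled_or_fixed_effect(hypothetical_studies)
print(f"Pooled OR = {pooled:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```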
Instruction: Do more active children sleep more? Abstracts: abstract_id: PUBMED:32044552 Sleep manifestations, sleep architecture in children with Eosinophilic esophagitis presenting to a sleep clinic. Study Objectives: To describe sleep manifestations, polysomnographic (PSG) findings, and specific sleep disorders in children with Eosinophilic Esophagitis (EoE). Methods: This retrospective study included children with EoE who were referred to sleep clinics. Clinical manifestations, PSG variables, and diagnosis of sleep disorders were analyzed. Sleep architecture of patients with EoE was compared to control subjects. Results: In sum, 81 children with EoE met the criteria for entry into the analysis with a mean age of 10.1 ± 4.4 years. Of those, 46 children (57%) presented in the sleep clinic with active EoE symptoms, while 35 (43%) children did not have active EoE symptoms at presentation. Several sleep complaints were common in children with EoE, including snoring (62, 76.5%), restless sleep (54, 66.6%), legs jerking or leg discomfort (35, 43.2%) and daytime sleepiness (47, 58.0%). Comparing sleep architecture with controls, children with EoE had significantly higher NREM2 (P= &lt; 0.001), lower NREM3 (P= &lt; 0.001), lower rapid eye movement (REM) (P = 0.017), increased periodic leg movements (PLM) index (P= &lt; 0.001) and increased arousal index (P = 0.007). There were no significant differences in the sleep efficiency between the EoE and control subjects. Common sleep diagnoses included obstructive sleep apnea (OSA, 30, 37.0%) and periodic limb movements disorder (PLMD, 20, 24.6%). Of note, we found a much higher percentage of PLMD in active EoE compared to inactive EoE (P = 0.004). Conclusions: Children with EoE have frequent sleep complaints and several sleep disorders identified from the sleep study, including sleep-disordered breathing and PLMD. Analysis of sleep architecture demonstrates significant sleep fragmentation as evidenced by decreased slow-wave sleep and REM sleep and increased arousal index. abstract_id: PUBMED:29960212 Correspondence of maternal and paternal perception of school-aged children's sleep with in-home sleep-electroencephalography and diary-reports of children's sleep. Objective: Parents are often the first to report children's sleep difficulties. The aim of the present study was to evaluate the accuracy of parent reports by examining the correspondence of maternal and paternal reports of children's sleep with in-home electroencephalography (EEG) sleep assessment and sleep diary reports. Methods: A total of 143 children (57 formerly very preterm born children) aged 7-12 years underwent one night of in-home sleep-EEG; mothers and fathers reported children's sleep-related behavior by using the German version of the Children's Sleep Habits Questionnaire, and children and parents together completed a sleep diary of children's sleep. Results: Less EEG-derived total sleep time (TST) was associated with increased mother questionnaire reports of sleep duration problems, while less sleep efficiency (SE) and longer sleep onset latency (SOL) were associated with increased mother questionnaire reports of sleep onset delay. For fathers, only longer SOL was related to increased father questionnaire reports of sleep onset delay. The abovementioned associations did not change with children's increasing age and did not differ for boys and girls. 
More parent questionnaire reports of sleep duration problems, sleep onset delay, and night wakings were related to shorter diary reports of sleep duration, increased sleep latency, and more nocturnal awakenings, respectively. Conclusions: Mother questionnaire reports of children's sleep corresponded moderately with objective measures of TST, SE, and SOL assessed with in-home sleep-EEG. Both mother and father questionnaire reports of children's sleep duration problems, sleep onset delay, and night wakings were related to diary reports of children's sleep. abstract_id: PUBMED:33154690 Examining Sleep and Mood in Parents of Children with Sleep Disturbances. Objective: The current study examined sleep and mood associations in parents of children with sleep disturbances across a sample of typically developing children and children with neurodevelopmental disorders. The mediating effect of children's sleep on the relationship between parents' sleep and mood was also assessed. The study explored differences in parents' sleep based on whether 1) the child had a sleep disturbance, and 2) the child was typically developing or had a neurodevelopmental disorder. Methods: A total of 293 parents of children aged 2-12 years completed an online questionnaire. Parental sleep was examined using the Pittsburgh Sleep Quality Index, the Glasgow Sleep Effort Scale and the Pre-sleep Arousal Scale, and mood was assessed using the Profile of Mood States-short form. Measures for children included the Child's Sleep Habits Questionnaire (CSHQ) and the Strengths and Difficulties Questionnaire. Results: Across the overall sample, children's sleep disturbances were associated with parents' sleep disturbances, accounting for 22% of the change in parental sleep quality. Children's sleep partially mediated parents' sleep and mood. Significant differences were observed for sleep and mood outcomes in parents of children with sleep disturbances (CSHQ scores ≥41). However, no significant differences were reported for children's sleep disturbances and parents' sleep quality based on whether the child was typically developing or had a neurodevelopmental disorder. Conclusion: Parents of children with sleep disturbances experience poor sleep and high pre-sleep arousal, indicative of insomnia. Given that these parents experience cognitive arousal and insomnia, it is recommended that parents' sleep problems are addressed and treated in clinical settings. abstract_id: PUBMED:23620683 Children's Sleep Comic: development of a new diagnostic tool for children with sleep disorders. Background: A solid diagnosis of sleep disorders in children should include both self-ratings and parent ratings. However, there are few standardized self-assessment instruments to meet this need. The Children's Sleep Comic is an adapted version of the unpublished German questionnaire "Freiburger Kinderschlafcomic" and provides pictures for items and responses. Because the drawings were outdated and allowed only for qualitative analysis, we revised the comic, tested its applicability in a target sample, and suggest a procedure for quantitative analysis. Methods: All items were updated and pictures were newly drawn. We used a sample of 201 children aged 5-10 years to test the applicability of the Children's Sleep Comic in young children and to run a preliminary analysis. Results: The Children's Sleep Comic comprises 37 items covering relevant aspects of sleep disorders in children. Application took on average 30 minutes. 
The procedure was well accepted by the children, as reflected by the absence of any dropouts. First comparisons with established questionnaires indicated moderate correlations. Conclusion: The Children's Sleep Comic is appropriate for screening sleep behavior and sleep problems in children. The interactive procedure can foster a good relationship between the investigator and the child, and thus establish the basis for successful intervention if necessary. abstract_id: PUBMED:38450648 Polysomnographic Characteristics of Sleep Architecture in Children With Obstructive Sleep Apnea. Background: The conventional measure of sleep fragmentation is via polysomnographic evaluation of sleep architecture. Adults with OSA have disruption in their sleep cycles and spend less time in deep sleep stages. However, there is no available evidence to suggest that this is also true for children and published results have been inconclusive. Objective: To determine polysomnographic characteristics of sleep architecture in children with OSA and investigate effects relative to OSA severity. Methods: Overnight polysomnograms (PSG) of children referred for suspected OSA were reviewed. Subjects were classified by apnea hypopnea index (AHI). PSG parameters of sleep architecture were recorded and analyzed according to OSA severity. Results: Two hundred and eleven children were studied (median age of 7.0 years, range 4-10 years) Stage N1 sleep was longer while stage N2 sleep and REM sleep was reduced in the OSA group when compared to those without OSA (6.10 vs 2.9, P &lt; .001; 42.0 vs 49.7, P &lt; .001; 14.0 vs 15.9, P = .05). The arousal index was also higher in the OSA group (12.9 vs 8.2, P &lt; .001). There was a reduction in sleep efficiency and total sleep time and an increase in wake after sleep onset noted in the OSA group (83.90 vs 89.40, P = .003; 368.50 vs 387.25, P = .001; 40.1 ± 35.59 vs 28.66 ± 24.14, P = .007; 29.00 vs 20.50; P = .011). No significant difference was found in N3 sleep stage (33.60 vs 30.60, P = .14). Conclusion: We found evidence that children with OSA have a disturbance in their sleep architecture. The changes indicate greater sleep fragmentation and more time spent in lighter stages of sleep. Future research is needed and should focus on more effective methods to measure alterations in sleep architecture. abstract_id: PUBMED:32587532 Objective and Subjective Assessments of Sleep in Children: Comparison of Actigraphy, Sleep Diary Completed by Children and Parents' Estimation. In research and clinical contexts, parents' report and sleep diary filled in by parents are often used to characterize sleep-wake rhythms in children. The current study aimed to investigate children self-perception of their sleep, by comparing sleep diaries filled in by themselves, actigraphic sleep recordings, and parental subjective estimation. Eighty children aged 8-9 years wore actigraph wristwatches and completed sleep diaries for 7 days, while their parents completed a sleep-schedule questionnaire about their child' sleep. The level of agreement and correlation between sleep parameters derived from these three methods were measured. Sleep parameters were considered for the whole week and school days and weekends separately and a comparison between children with high and low sleep efficiency was carried out. Compared to actigraphy, children overestimated their sleep duration by 92 min and demonstrated significant difficulty to assess the amount of time they spent awake during the night. 
The estimations were better in children with high sleep efficiency compared to those with low sleep efficiency. Parents estimated that their children went to bed 36 min earlier and obtained 36.5 min more sleep than objective estimations with actigraphy. Children and parents' accuracy to estimate sleep parameters was different during school days and weekends, supporting the importance of analyzing separately school days and weekends when measuring sleep in children. Actigraphy and sleep diaries showed good agreement for bedtime and wake-up time, but not for SOL and WASO. A satisfactory agreement for TST was observed during school days only, but not during weekends. Even if parents provided more accurate sleep estimation than children, parents' report, and actigraphic data were weakly correlated and levels of agreement were insufficient. These results suggested that sleep diary completed by children provides interesting measures of self-perception, while actigraphy may provide additional information about nocturnal wake times. Sleep diary associated with actigraphy could be an interesting tool to evaluate parameters that could contribute to adjust subjective perception to objective sleep values. abstract_id: PUBMED:23661965 Sleep assessment of children with cerebral palsy: Using validated sleep questionnaire. Background: On the basis of clinical experience, it seems that sleep disturbances are common in children with cerebral palsy (CP); however, there is a lack of research and objective data to support this observation. Aim Of Work: Our aim was to assess sleep of children with cerebral palsy, using validated sleep questionnaire. Subjects And Methods: one hundred children with diagnosis of CP were investigated via sleep questionnaires, with their ages from 2-12 years. The 100 children with CP were divided into two groups, pre-school group (52 children had a mean age 2.35 ± 1.04 years) and school ages group (48 children had a mean age 10.21 ± 3.75 years). Results: We found high incidence of sleep problem in both pre-school and school age groups. We found that pre-school children have more prevalence of early insomnia (46.2%, P value 0.028) and sleep bruxism (50%, P value 0.000), while school group suffer more sleep disordered breathing (SDB) (50%, P value 0.001), more nightmares (50%, P value 0.001), more sleep talking (12.5% P value 0.049), and more excessive daytime sleepiness (EDS) (62.5%, P value 0.001). Conclusion: Results of our study indicate that CP children have high incidence of sleep problem in both pre-school and school age groups. abstract_id: PUBMED:36790219 Correlations between sleep architecture and sleep-related masseter muscle activity in children with sleep bruxism. Background: Sleep bruxism (SB) occurring during No-REM (nREM) sleep and increase in microarousals per hour have been described in adults, but not in children. Objective: To assess the correlation between sleep architecture and masseter muscle activity related to sleep bruxism (SB/MMA) in children. Materials And Methods: Forty-three children aged 7-12 years (mean age: 9.4 ± 1.3) with confirmed SB underwent a two-night polysomnographic (PSG) study in a sleep laboratory, for accommodation (first night) and data collection (second night). Data on sleep architecture (total sleep duration (TSD), sleep efficiency (SE), sleep onset latency (SOL), REM and nREM sleep duration and proportion and microarousals/hour during REM and nREM sleep) and episodes/hour of SB/MMA were recorded. 
Single and multiple-variable linear regression analyses were performed to assess the correlation between data on sleep architecture (predictors) and SB/MMA (dependent variable). Results: Shorter TSD, REM and nREM stage 1 sleep duration, longer SOL and more microarousals/hour during REM and nREM sleep were found to be positive predictors of SB/MMA in children in the multiple-variable regression analysis (R2 = 0.511). Conclusion: Within the limitations of this study, it can be concluded that SB/MMA is correlated with altered sleep architecture in children (shorter total sleep duration (TSD), shorter nREM and REM sleep and higher microarousals during REM and nREM sleep). Nevertheless, the clinical significance of these findings need to be demonstrated in future studies. abstract_id: PUBMED:28212689 Sleep Complaints and Sleep Architecture in Children With Idiopathic Central Sleep Apnea. Study Objectives: Idiopathic central sleep apnea (ICSA) is categorized as a type of nonhypercapnic central sleep apnea (CSA). Recurrent cessation and resumption of respiration leads to sleep fragmentation, which causes excessive daytime sleepiness, frequent nocturnal awakenings, or both. ICSA has been described in the adult population but there is limited information in children. The purpose of this study was to describe clinical manifestations and polysomnographic findings in children with ICSA. Methods: A retrospective review of medical records and polysomnograms was performed for 14 pediatric patients with ICSA, 9 from Cincinnati Children's Hospital Medical Center and 5 from Antwerp University Hospital. Polysomnographic features of patients with ICSA were compared with those of nine age-matched control group subjects. Patients with CSA caused by medical or neurological disorders, medication use, or substance use were excluded. Results: Sleep complaints were common in the 14 children with ICSA, including those with sleep-onset insomnia (7 children), frequent nighttime awakening (3 children), restless sleep (7 children), and daytime sleepiness (5 children). Symptoms of sleep-disordered breathing were noted in 11 of 14 subjects. Compared to that of the control group, sleep latency in the ICSA group was significantly prolonged (P &lt; .05). The percentage of stage 2 sleep was significantly higher (P &lt; .05), and slow wave sleep was significantly lower in patients with ICSA (P &lt; .05). Conclusions: Similar to adult patients, children with ICSA present with complaints of insomnia, daytime sleepiness, and symptoms of obstructive sleep apnea. Analysis of polysomnograms reveals prolonged sleep latency, increased stage 2 sleep, and decreased slow wave sleep. Further studies are needed to assess mechanisms and the role of hypercapnic response in the pathogenesis of children with ICSA. abstract_id: PUBMED:33572155 Factors Affecting the Quality of Sleep in Children. Sleep quality is one of the domains of sleep. Having adequate quality sleep is defined as one's "feeling fresh" after waking-up. Inadequate sleep quality results in sleep insufficiency producing a variety of symptoms and signs. The central nervous system is affected the most in children, although other system too may be involved. Several factors affect sleep quality in children including genetics, sleep habits, medical problems, parents/caregiver factors, screen time and the child's environment. These factors are inter-related and dynamic. 
The outcomes of sleep insufficiency are many, involving neurocognitive and neurobehavioral function, mood and emotional issues, and specific conditions such as pulmonary hypertension, cor pulmonale and obesity. Management should start with proper history taking to identify the multifaceted nature of the condition. Treatment is planned cognizant of the age of the patient and the associated etiological factors, and should involve both the children and their parents. Answer: The provided abstracts do not directly address the relationship between children's physical activity levels and the amount of sleep they get. The abstracts focus on various aspects of sleep in children, such as sleep architecture in children with specific health conditions (e.g., Eosinophilic Esophagitis, Obstructive Sleep Apnea, Cerebral Palsy, and Idiopathic Central Sleep Apnea), the accuracy of parental reports of children's sleep, the development of diagnostic tools for children's sleep disorders, and the impact of children's sleep disturbances on parents' sleep and mood (PUBMED:32044552, PUBMED:29960212, PUBMED:33154690, PUBMED:23620683, PUBMED:38450648, PUBMED:32587532, PUBMED:23661965, PUBMED:36790219, PUBMED:28212689, PUBMED:33572155). However, none of the abstracts specifically mention the correlation between physical activity levels and sleep duration in children. To answer the question of whether more active children sleep more, research that specifically investigates the relationship between physical activity and sleep patterns in children would be needed. Such research would likely involve measuring children's physical activity levels and comparing them to sleep metrics such as total sleep time, sleep efficiency, and sleep quality.
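As the answer above notes, testing whether more active children sleep more would mean relating a physical-activity measure to actigraphy-derived sleep metrics such as total sleep time and sleep efficiency. The following minimal sketch assumes invented per-child values and uses the ordinary Pearson correlation; it is meant only to show the shape of such an analysis, not any result from the cited studies.

```python
from statistics import correlation, mean  # correlation() requires Python 3.10+

def sleep_efficiency(total_sleep_min, time_in_bed_min):
    """Sleep efficiency (%) = total sleep time / time in bed * 100."""
    return 100 * total_sleep_min / time_in_bed_min

# Hypothetical per-child values: daily moderate-to-vigorous activity (minutes)
# and actigraphy-derived sleep (minutes); placeholders only.
activity_min = [35, 60, 45, 90, 20, 75, 50, 110]
total_sleep_min = [540, 565, 550, 580, 525, 570, 555, 590]
time_in_bed_min = [600, 610, 605, 615, 600, 612, 608, 620]

efficiency = [sleep_efficiency(ts, tib)
              for ts, tib in zip(total_sleep_min, time_in_bed_min)]

print(f"Mean sleep efficiency: {mean(efficiency):.1f}%")
print(f"Pearson r, activity vs total sleep time: "
      f"{correlation(activity_min, total_sleep_min):.2f}")
```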
Instruction: Evaluation of febrile children with petechial rashes: is there consensus among pediatricians? Abstracts: abstract_id: PUBMED:9877362 Evaluation of febrile children with petechial rashes: is there consensus among pediatricians? Background: The evaluation of febrile children with petechial rashes evokes controversy. Although many of these children have viral infections, on occasion such patients may be infected with Neisseria meningitidis. Objective: To investigate differences in practice trends for the evaluation and management of non-toxic-appearing febrile children with petechial rashes among pediatric specialty groups. Methods: We surveyed 833 pediatricians in 4 specialties [community (CGP) and academic (AGP) general pediatrics, emergency medicine (EM) and infectious diseases] regarding 4 hypothetical non-toxic-appearing febrile children ages 1, 2, 5 and 7 years. The patients differed with regard to clinical appearance, distribution of petechiae and complete blood count results. We compared specialty group responses, adjusting for practice setting, population size and years in practice using multiple logistic regression analysis. Results: The survey was completed and returned by 416 (50%) pediatricians. There was substantial variation in the evaluation of the 2 younger febrile children without clear sources for their petechiae. For the 1-year-old the overall blood culture (BCx) rate was 82%, with the EM group (91%) more often requesting BCx than either the CGP (76%) or AGP (73%, P=0.001) groups. The overall hospital admission rate was 31%, with CGP less often requesting admission than infectious disease pediatricians (22% vs. 40%, P=0.007). In the regression analysis the only significant difference between groups was in BCx rate between the EM and AGP groups. For the 2-year-old the overall rate of BCx was 95%, lumbar puncture was 41% and admission was 44%, with no significant differences among groups. For the scenarios involving the 2 older febrile children with sources for their petechiae, the majority of respondents chose neither lumbar puncture nor admission. There was disagreement regarding BCx, both within and between groups, although most of the between group differences did not persist in the regression analysis. Conclusions: There are substantial differences among pediatricians in the evaluation of young non-toxic-appearing febrile children with petechial rashes. Although there are some differences between pediatric subspecialties, most of these differences do not persist after adjusting for practice setting, population size and physician experience. abstract_id: PUBMED:37868577 Spectrum of Febrile Thrombocytopenia in the Pediatric Population (1-18 Years) Admitted in a Tertiary Care Center. Aim Thrombocytopenia is a common manifestation of various infections. Thrombocytopenia associated with fever helps to narrow down the differential diagnosis and management of fever. It also helps to know the various complications of thrombocytopenia, its management, and the outcome of the patient. This study aimed to evaluate the clinical profile and determine etiology and complications in patients with fever and thrombocytopenia in pediatric populations. Methods One hundred and fifteen patients of both sexes aged 1-18 years with fever and found to have thrombocytopenia (platelet count &lt; 1.5 lakhs) between June 1, 2018 and March 31, 2019 were included in this study. 
Results: Infection was the most common cause of febrile thrombocytopenia, and dengue fever was the most common infection. Bleeding manifestations were seen in 9.6% of patients. Petechiae/purpura was the commonest bleeding manifestation, followed by gum and nose bleeding. Common bleeding manifestations were seen in patients with a platelet count below 50,000, and the majority of them did not require platelet transfusion. Good recovery was noted in 96.5% of patients while 2.6% had mortality. Conclusions: An infection, particularly dengue, was the most common cause of fever with thrombocytopenia. In the majority of patients, thrombocytopenia was transient and asymptomatic. Bleeding was present in the majority of patients with platelet counts below 10,000 and between 20,000 and 50,000. The most common bleeding manifestation was petechial rashes over the skin. Platelet transfusion was not required in most of the cases. On treating the specific cause, a drastic improvement in the platelet count was noted at discharge and on further follow-up. Immunization is highly recommended for vaccine-preventable diseases. abstract_id: PUBMED:36866956 European study confirms the combination of fever and petechial rash as an important warning sign for childhood sepsis and meningitis. Aim: This study investigated febrile children with petechial rashes who presented to European emergency departments (EDs) and investigated the role that mechanical causes played in diagnoses. Methods: Consecutive patients with fever presenting to EDs in 11 European emergency departments in 2017-2018 were enrolled. The cause and focus of infection were identified and a detailed analysis was performed on children with petechial rashes. The results are presented as odds ratios (OR) with 95% confidence intervals (CI). Results: We found that 453/34010 (1.3%) febrile children had petechial rashes. The focus of the infection included sepsis (10/453, 2.2%) and meningitis (14/453, 3.1%). Children with a petechial rash were more likely than other febrile children to have sepsis or meningitis (OR 8.5, 95% CI 5.3-13.1) and bacterial infections (OR 1.4, 95% CI 1.0-1.8) as well as need for immediate life-saving interventions (OR 6.6, 95% CI 4.4-9.5) and intensive care unit admissions (OR 6.5, 95% CI 3.0-12.5). Conclusion: The combination of fever and petechial rash is still an important warning sign for childhood sepsis and meningitis. Ruling out coughing and/or vomiting was insufficient to safely identify low-risk patients. abstract_id: PUBMED:21049226 Syndromic surveillance: etiologic study of acute febrile illness in dengue suspicious cases with negative serology. Brazil, Federal District, 2008. With the aim of identifying the etiology of acute febrile illness in patients suspected of having dengue, yet with non-reactive serum, a descriptive study was conducted with 144 people using secondary serum samples collected during convalescence. The study was conducted between January and May of 2008. All the exams were re-tested for dengue, which was confirmed in 11.8% (n = 17); the samples that remained negative for dengue (n = 127) were tested for rubella, with 3.9% (n = 5) positive results. Among those non-reactive for rubella (n = 122), tests were made for leptospirosis and hantavirus. Positive tests for leptospirosis were 13.9% (n = 17) and none for hantavirus. Non-reactive results (70.8%) were considered as Indefinite Febrile Illness (IFI).
Low schooling was statistically associated with dengue, rubella and leptospirosis (p = 0.009), dyspnea was statistically associated with dengue and leptospirosis (p = 0.012), and exanthem/petechia with dengue and rubella (p = 0.001). Among those with leptospirosis, activities in empty or vacant lots showed statistical association with the disease (p = 0.013). Syndromic surveillance was shown to be an important tool in the etiologic identification of IFI in the Federal District of Brazil. abstract_id: PUBMED:23418797 Evaluation of fever in infants and young children. Febrile illness in children younger than 36 months is common and has potentially serious consequences. With the widespread use of immunizations against Streptococcus pneumoniae and Haemophilus influenzae type b, the epidemiology of bacterial infections causing fever has changed. Although an extensive diagnostic evaluation is still recommended for neonates, lumbar puncture and chest radiography are no longer recommended for older children with fever but no other indications. With an increase in the incidence of urinary tract infections in children, urine testing is important in those with unexplained fever. Signs of a serious bacterial infection include cyanosis, poor peripheral circulation, petechial rash, and inconsolability. Parental and physician concern have also been validated as indications of serious illness. Rapid testing for influenza and other viruses may help reduce the need for more invasive studies. Hospitalization and antibiotics are encouraged for infants and young children who are thought to have a serious bacterial infection. Suggested empiric antibiotics include ampicillin and gentamicin for neonates; ceftriaxone and cefotaxime for young infants; and cefixime, amoxicillin, or azithromycin for older infants. abstract_id: PUBMED:22383925 Simple Prognostic Criteria can Definitively Identify Patients who Develop Severe Versus Non-Severe Dengue Disease, or Have Other Febrile Illnesses. Background: SEVERE DENGUE DISEASE (SDD) (DHF/DSS: dengue hemorrhagic fever/dengue shock syndrome) results from either primary or secondary dengue virus (DENV) infections, which occur 4 - 6 days after the onset of fever. As yet, there are no definitive clinical or hematological criteria that can specifically identify SDD patients during the early acute febrile-phase of disease (day 0 - 3: &lt; 72 hours). This study was performed during a SDD (DHF/DSS) epidemic to: 1) identify the DENV serotypes that caused SDD during primary or secondary DENV infections; 2) identify simple clinical and hematological criteria that could significantly discriminate between patients who subsequently developed SDD versus non-SDD (N-SDD), or had a non-DENV fever of unknown origin (FUO) during day 0 - 3 of fever; 3) assess whether DENV serotype co-infections resulted in SDD. Methods: First serum samples, with clinical and hematological criteria, were collected from 100 patients during the early acute febrile-phase (day 0 - 3: &lt; 72 hours), assessed for DENV or FUO infections by IgM- and IgG-capture ELISAs on paired serum samples and by DENV isolations, and subsequently graded as SDD, N-SDD or FUO patients. 
Results: IN THIS STUDY: 1) Thirty-three patients had DENV infections, predominantly secondary DENV-2 infections, including each SDD (DHF/DSS) case; 2) Secondary DENV-2/-3 and DENV-2/-4 serotype co-infections however resulted in N-SDD; 3) Each patient who subsequently developed SDD, but none of the others, displayed three clinical criteria: abdominal pain, conjunctival injection and veni-puncture bleeding, therefore each of these criteria provided definitively significant prognostic (P &lt; 0.001) values; 4) Petechia, positive tourniquet tests and hepatomegaly, and neutrophilia or leukopenia also significantly identified those who: a) subsequently developed SDD versus N-SDD, or had a FUO; b) subsequently developed SDD versus N-SDD; c) subsequently developed N-SDD versus FUOs, respectively. Conclusions: This is the first report of simple definitively prognostic criteria for SDD patients, including the first assessment and confirmation of conjunctival injection. The three definitive clinical criteria used alone, or supported by the other four criteria, could be essential for specifically identifying those patients needing prompt hospital-based therapies to lessen or avert SDD, without unnecessary hospitalization of the other patients. Keywords: Dengue virus; Severe dengue; Dengue fever; Diagnostic; Criteria; Hemorrhage; Shock. abstract_id: PUBMED:35578548 Clinicopathological alteration of symptoms with serotype among dengue infected pediatric patients. Dengue fever is a self-limiting, acute febrile illness caused by an arbovirus. This infection may be asymptomatic or symptomatic with its potential life-threatening form as DHF/DSS. Severe dengue cases occur typically in children due to overproduction of proinflammatory and anti-inflammatory cytokines (called cytokines storm) as well as increased microvascular permeability in them. This study aimed to find circulating dengue serotype and their clinicopathological association among pediatric patients admitted to tertiary care hospitals in Kolkata, India. Overall, 210 patients were approached, among them, 170 dengue suspected children admitted to three tertiary care hospitals were included in this study. Dengue samples were screened for the presence of dengue NS1 antigen and IgM antibodies by enzyme-linked immunosorbent assay. Viral RNA was extracted from NS1 seropositive serum samples and subjected to molecular serotyping by semi-nested reverse-transcription polymerase chain reaction. All patients were followed up for clinical manifestations and biochemical parameters associated with dengue. Cocirculation of all four serotypes was observed and DENV2 was the major circulating strain. Physiological classification of associated clinical symptoms was done as per WHO guideline and represented as a percentage variable. A multivariate logistic regression approach was used for making a regression model including dengue-associated clinical symptoms with dengue positivity or negativity as dependent variables. Thrombocytopenia was observed in 69% of patients and the commonest bleeding manifestation was petechia. Liver function profiles of infected patients were observed during follow-up and represented using a box plot. A significant change in trends of dengue-associated clinical manifestations and differential expression of liver functional profile with different phases of transition of dengue fever was observed in this study population. abstract_id: PUBMED:34493177 Spotted fever diagnosis: Experience from a South Indian center. 
Spotted fever (SF) is an important treatable cause of acute febrile illness (AFI) with rash and has reemerged in India. A prospective AFI with rash study was undertaken at a South Indian hospital to correlate specific clinical findings with laboratory confirmation of spotted fever. During the study period (December 2017 to May 2019), 175 patients with fever and rash were suspected to have spotted fever. Molecular assays for scrub typhus and spotted fever (47 kDa and ompA qPCR) and serology (IgM ELISA) was performed on the 96 individuals recruited. Laboratory confirmed SF cases (ompA qPCR positive) were 21, whereas laboratory supported SF cases (ompA negative but sero-positive by SF IgM ELISA) were 27. Among the 48 spotted fever (SF) cases, 70% of had maculopapular rash, 12.5% had macular rash, purpuric/petechial rash (severe rash) was seen in 8 patients (16.7%). Presence of rash on the palms and soles was associated with a relative risk (RR) of 4.36 (95% CI: 2.67-7.10; p &lt; 0.001). Our study suggests that ompA qPCR though useful for confirming the diagnosis of spotted fever is not always positive. A positive SF IgM ELISA in febrile individuals with palmo-plantar rash supports the diagnosis of spotted fever especially when other causes of febrile rash have been excluded. Multi-centric prospective studies employing the serological reference standard, IFA (immunofluorescence assay) in addition to the assays used in this study are needed to validate these findings. abstract_id: PUBMED:18366556 What is your diagnosis? Particulate material in peritoneal fluid from a dog. 9-year-old castrated male Greyhound dog was presented for evaluation of vomiting and lethargy of 1-week duration. On physical examination, the dog was febrile and dehydrated with a tense abdomen and petechial hemorrhages. Clinicopathologic abnormalities included relative polycythemia, mild lymphopenia with reactive lymphocytes, hypoalbuminemia, hypocholesterolemia, hyperbilirubinemia, increased ALP, mild hypokalemia, hyperamylasemia, hyperlipasemia, increased D-dimer concentration, and hyperfibrinogenemia. Cytologic evaluation of peritoneal fluid revealed marked suppurative inflammation with intracellular barium sulfate particles. The day before presentation, the referring veterinarian had administered oral barium sulfate in an upper gastrointestinal contrast study. Radiographs revealed free contrast material in the peritoneal cavity, consistent with gastrointestinal perforation, and leakage of contrast material. Abdominal exploratory surgery revealed a mid-jejunal perforation and a hepatic nodule. Histopathologic diagnosis of the jejunal and liver lesions was T-cell lymphoma. The patient recovered well postoperatively and received chemotherapy for treatment of lymphoma. Most commercial barium sulfate preparations contain relatively uniform, weakly birefringent, pale yellow particles &lt;1 microm in diameter. Because barium sulfate is found occasionally in clinical specimens, cytopathologists should be familiar with its cytologic appearance. abstract_id: PUBMED:37256166 Neck Hair-Thread Tourniquet Syndrome by Co-sleeping With Family: A Case Report and Literature Review. Neck hair-thread tourniquet syndrome (NHTTS) is a rare condition that can be a pediatric emergency, occurring when a hair or thread becomes tightly wrapped around a body part, leading to vascular or tissue damage. NHTTS commonly affects infants and young children and can result in severe complications if not promptly diagnosed and treated. 
The unusual nature of this event, the diffuse petechial hemorrhage on the face, and the presentation of ligature marks extending around the neck led us to admit the child to the general pediatric ward for follow-up and further investigation of the possibility of non-accidental trauma. Co-sleeping is a common cultural practice in Japan where parents sleep in close proximity to their infants. This case report aims to raise awareness among pediatricians and parents about the possibility of NHTTS occurring in infants who co-sleep, particularly when a strand of hair becomes entangled around their neck, about the early detection and appropriate management of NHTTS. And we also summarize the reported NHTTS cases. Answer: The evaluation of febrile children with petechial rashes does not seem to have a consensus among pediatricians. A survey conducted among 833 pediatricians in various specialties, including community and academic general pediatrics, emergency medicine, and infectious diseases, revealed substantial variation in the evaluation of young non-toxic-appearing febrile children with petechial rashes. There were significant differences in practice trends, particularly in the rates of blood culture requests and hospital admission decisions. For instance, emergency medicine pediatricians were more likely to request blood cultures for a 1-year-old with petechiae compared to community and academic general pediatricians (91% vs. 76% and 73%, respectively). Similarly, community general pediatricians were less likely to request hospital admission than infectious disease pediatricians for the same scenario (22% vs. 40%). However, most of these differences did not persist after adjusting for practice setting, population size, and physician experience (PUBMED:9877362). In the context of febrile thrombocytopenia in pediatric populations, infections, particularly dengue fever, were found to be the most common cause. The majority of patients with thrombocytopenia were transient and asymptomatic, with petechiae/purpura being the most common bleeding manifestation. However, most patients did not require platelet transfusion, and good recovery was noted in the majority of cases (PUBMED:37868577). A European study confirmed that the combination of fever and petechial rash is an important warning sign for childhood sepsis and meningitis. The study found that children with a petechial rash were more likely to have sepsis or meningitis compared to other febrile children. However, ruling out coughing and/or vomiting was insufficient to safely identify low-risk patients (PUBMED:36866956). In summary, while there are general trends and guidelines for the evaluation of febrile children with petechial rashes, there is no strict consensus among pediatricians, and practice patterns may vary based on specialty, experience, and other factors. The presence of petechial rashes in febrile children remains an important clinical sign that warrants careful evaluation for serious conditions such as sepsis, meningitis, and specific infections like dengue fever.
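Several of the abstracts above summarize such group comparisons as a relative risk with a 95% confidence interval (for example, the RR of 4.36, 95% CI 2.67-7.10, reported for palmo-plantar rash in the spotted fever series). As a purely illustrative aside, the sketch below shows how a relative risk and its log-based confidence interval could be computed from a 2x2 table; the counts are invented and are not taken from any of the cited studies.

```python
# Sketch: relative risk with a 95% CI from a 2x2 table, the kind of figure
# quoted above (e.g. RR 4.36, 95% CI 2.67-7.10). Counts here are invented.
import math

def relative_risk(a, b, c, d):
    """a/b: outcome yes/no in the exposed group; c/d: yes/no in the unexposed group."""
    risk_exposed = a / (a + b)
    risk_unexposed = c / (c + d)
    rr = risk_exposed / risk_unexposed
    se_log_rr = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))  # log-scale standard error
    lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
    hi = math.exp(math.log(rr) + 1.96 * se_log_rr)
    return rr, (lo, hi)

# e.g. 30/70 children with the sign had the outcome vs 20/180 without it
rr, ci = relative_risk(30, 40, 20, 160)
print(f"RR = {rr:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")
```

A relative risk of 1 would mean the sign carries no extra risk; intervals that exclude 1 correspond to the statistically significant associations quoted above.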
Instruction: Preoperative risk stratification in infective endocarditis. Does the EuroSCORE model work? Abstracts: abstract_id: PUBMED:17548201 Preoperative risk stratification in infective endocarditis. Does the EuroSCORE model work? Preliminary results. Objective: There is an important role for risk prediction in cardiac surgery. Prediction models are useful in decision making and quality assurance. Patients with infective endocarditis (IE) have a particularly high risk of mortality. The aim was to assess the performance of European System for Cardiac Operative Risk Evaluation (EuroSCORE) in IE. Methods: The additive and logistic EuroSCORE models were applied to all patients undergoing surgery for IE (Duke criteria) between January 1995 and April 2006 within our prospective institutional database. Observed and predicted mortalities were compared. Model calibration was assessed with the Hosmer-Lemeshow test. Model discrimination was tested by determining the area under the receiver operating characteristic (ROC) curve. Results: One hundred and eighty-one consecutive patients undergoing 191 operations were analyzed. Observed mortality was 28.8%. For the entire cohort the mean additive score was 10.4 (additive predicted mortality of 14.2%). The mean logistic predicted mortality was 27.1%. Discriminative power was good for the additive and the logistic models for the entire series. Area under ROC curve were 0.83 (additive) and 0.84 (logistic) for the entire cohort, 0.81 and 0.81 for the aortic position, 0.91 and 0.92 for the mitral position, 0.81 and 0.81 for the native valve, 0.82 and 0.83 for the prosthetic valves, and 0.81 and 0.51 for the gram-positive microorganisms, respectively. Conclusions: This initial sample may be small; however, additive and logistic EuroSCORE adequately stratify risk in IE. Logistic EuroSCORE has been calibrated in IE, a special group of very high-risk patients. Further studies with larger sample sizes are required to confirm these initial results. abstract_id: PUBMED:29408350 A pragmatic approach for mortality prediction after surgery in infective endocarditis: optimizing and refining EuroSCORE. Objective: To simplify and optimize the ability of EuroSCORE I and II to predict early mortality after surgery for infective endocarditis (IE). Methods: Multicentre retrospective study (n = 775). Simplified scores, eliminating irrelevant variables, and new specific scores, adding specific IE variables, were created. The performance of the original, recalibrated and specific EuroSCOREs was assessed by Brier score, C-statistic and calibration plot in bootstrap samples. The Net Reclassification Index was quantified. Results: Recalibrated scores including age, previous cardiac surgery, critical preoperative state, New York Heart Association &gt;I, and emergent surgery (EuroSCORE I and II); renal failure and pulmonary hypertension (EuroSCORE I); and urgent surgery (EuroSCORE II) performed better than the original EuroSCOREs (Brier original and recalibrated: EuroSCORE I: 0.1770 and 0.1667; EuroSCORE II: 0.2307 and 0.1680). Performance improved with the addition of fistula, staphylococci and mitral location (EuroSCORE I and II) (Brier specific: EuroSCORE I 0.1587, EuroSCORE II 0.1592). Discrimination improved in specific models (C-statistic original, recalibrated and specific: EuroSCORE I: 0.7340, 0.7471 and 0.7728; EuroSCORE II: 0.7442, 0.7423 and 0.7700). 
Calibration improved in both EuroSCORE I models (intercept 0.295, slope 0.829 (original); intercept -0.094, slope 0.888 (recalibrated); intercept -0.059, slope 0.925 (specific)) but only in specific EuroSCORE II model (intercept 2.554, slope 1.114 (original); intercept -0.260, slope 0.703 (recalibrated); intercept -0.053, slope 0.930 (specific)). Net Reclassification Index was 5.1% and 20.3% for the specific EuroSCORE I and II. Conclusions: The use of simplified EuroSCORE I and EuroSCORE II models in IE with the addition of specific variables may lead to simpler and more accurate models. abstract_id: PUBMED:26547083 Assessment of perioperative mortality risk in patients with infective endocarditis undergoing cardiac surgery: performance of the EuroSCORE I and II logistic models. Objectives: The European System for Cardiac Operative Risk Evaluation (EuroSCORE) has been established as a tool for assisting decision-making in surgical patients and as a benchmark for quality assessment. Infective endocarditis often requires surgical treatment and is associated with high mortality. This study was undertaken to (i) validate both versions of the EuroSCORE, the older logistic EuroSCORE I and the recently developed EuroSCORE II and to compare their performances; (ii) identify predictors other than those included in the EuroSCORE models that might further improve their performance. Methods: We retrospectively studied 128 patients from a single-centre registry who underwent heart surgery for active infective endocarditis between January 2007 and November 2014. Binary logistic regression was used to find independent predictors of mortality and to create a new prediction model. Discrimination and calibration of models were assessed by receiver-operating characteristic curve analysis, calibration curves and the Hosmer-Lemeshow test. Results: The observed perioperative mortality was 16.4% (n = 21). The median EuroSCORE I and EuroSCORE II were 13.9% interquartile range (IQ) (7.0-35.0) and 6.6% IQ (3.5-18.2), respectively. Discriminative power was numerically higher for EuroSCORE II {area under the curve (AUC) of 0.83 [95% confidence interval (CI), 0.75-0.91]} than for EuroSCORE I [0.75 (95% CI, 0.66-0.85), P = 0.09]. The Hosmer-Lemeshow test showed good calibration for EuroSCORE II (P = 0.08) but not for EuroSCORE I (P = 0.04). EuroSCORE I tended to over-predict and EuroSCORE II to under-predict mortality. Among the variables known to be associated with greater infective endocarditis severity, only prosthetic valve infective endocarditis remained an independent predictor of mortality [odds ratio (OR) 6.6; 95% CI, 1.1-39.5; P = 0.04]. The new model including the EuroSCORE II variables and variables known to be associated with greater infective endocarditis severity showed an AUC of 0.87 (95% CI, 0.79-0.94) and differed significantly from EuroSCORE I (P = 0.03) but not from EuroSCORE II (P = 0.4). Conclusions: Both EuroSCORE I and II satisfactorily stratify risk in active infective endocarditis; however, EuroSCORE II performed better in the overall comparison. Specific endocarditis features will increase model complexity without an unequivocal improvement in predictive ability. abstract_id: PUBMED:26116921 EuroSCORE II underestimates mortality after cardiac surgery for infective endocarditis. 
Objectives: To better select for patients who most likely will benefit from cardiac surgery among those with infective endocarditis (IE), we aimed to identify preoperative markers associated with poor outcome after cardiac surgery for IE, and to evaluate the accuracy of European System for Cardiac Operative Risk Evaluation (EuroSCORE) II to predict mortality. Methods: We enrolled all adult patients who underwent cardiac surgery during the acute phase of definite IE (Duke Criteria) in two referral centres for cardiac surgery. Patients were identified through intensive care unit (ICU) electronic databases, and data were collected from medical charts on a standardized questionnaire. Results: Between 2002 and 2013, 149 patients (117 males), with a median age of 64 years [interquartile range 52-73], fulfilled the inclusion criteria. Main complications before surgery were left ventricular dysfunction (23%), central nervous system symptomatic events (34%) and septic shock (24%). Most patients (95%) presented with valve regurgitation, and 49% had perivalvular abscess. Surgery was performed with a median delay of 12 days [5-24] after IE diagnosis, and mean EuroSCORE II was 15.8 (13.4-18.1). In-hospital mortality was 21%. Preoperative variables associated with mortality in multivariate analysis were obesity [odds ratio (OR) 3.67 [1.10-12.19], P = 0.03], vegetation >15 mm (OR 6.72 [1.46-30.98], P = 0.01), septic shock (OR 4.87 [1.67-14.28], P = 0.004) and mechanical prosthetic valve IE (OR 4.99 [1.72-28.57], P = 0.007). EuroSCORE II underestimated mortality in patients with predicted mortality over 10%. Conclusion: Factors independently predictive of mortality after cardiac surgery for IE are obesity, septic shock, large vegetation and a mechanical prosthetic valve IE. EuroSCORE II underestimates post-cardiac surgery mortality in patients with IE. abstract_id: PUBMED:32553427 Prediction of surgical risk in patients with endocarditis: Comparison of logistic EuroSCORE, EuroSCORE II and APORTEI score. Objectives: APORTEI score is a new risk prediction model for patients with infective endocarditis. It has been recently validated on a Spanish multicentric national cohort of patients. The aim of the present study is to compare APORTEI performance with logistic EuroSCORE and EuroSCORE II by testing calibration and discrimination on a local sample of patients who underwent cardiac surgery because of endocarditis. Methods: We tested three prediction scores on 111 patients who underwent surgery from 2014 to 2020 at our Institution because of infective endocarditis. Areas under the curve and the Hosmer-Lemeshow test were used to analyze discrimination and calibration, respectively, of logistic EuroSCORE, EuroSCORE II and APORTEI score. Results: The overall observed one-month mortality rate was 21.6%. The observed-to-expected ratio was 1.27 for logistic EuroSCORE, 3.27 for EuroSCORE II and 0.94 for APORTEI. The area under the curve (AUC) value of APORTEI (0.88±0.05) was significantly higher than that of logistic EuroSCORE (AUC 0.77±0.05; p = 0.0001) and of EuroSCORE II (AUC 0.74±0.05; p = 0.0005). The Hosmer-Lemeshow test showed better calibration performance of the APORTEI (logistic EuroSCORE: p = 0.19; EuroSCORE II: p = 0.11; APORTEI: p = 0.56). Conclusion: APORTEI risk score shows significantly higher performance in terms of discrimination and calibration compared with both logistic EuroSCORE and EuroSCORE II.
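The comparisons reported above rest on two standard notions: discrimination, summarized by the area under the ROC curve, and calibration, summarized by the observed-to-expected mortality ratio and the Hosmer-Lemeshow statistic. The sketch below illustrates, on synthetic data, how those quantities could be computed for two competing risk scores; the variable names and numbers are assumptions for illustration only and are not data from the cited studies.

```python
# Sketch: comparing discrimination (AUC) and calibration (O/E ratio,
# Hosmer-Lemeshow-style grouping) of two surgical risk scores.
# Synthetic data only; names and numbers are illustrative assumptions.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 300
pred_es2 = rng.uniform(0.02, 0.45, n)        # hypothetical EuroSCORE II risks
pred_aportei = np.clip(pred_es2 * rng.normal(1.0, 0.2, n), 0.01, 0.9)
died = rng.binomial(1, pred_aportei)         # simulated 30-day mortality

def observed_to_expected(y, p):
    """Observed deaths divided by the sum of predicted risks."""
    return y.sum() / p.sum()

def hosmer_lemeshow(y, p, groups=10):
    """Chi-square statistic over risk deciles (df = groups - 2)."""
    order = np.argsort(p)
    chi2 = 0.0
    for idx in np.array_split(order, groups):
        obs, exp = y[idx].sum(), p[idx].sum()
        pbar = p[idx].mean()
        chi2 += (obs - exp) ** 2 / (len(idx) * pbar * (1 - pbar))
    return chi2

for name, p in [("EuroSCORE II", pred_es2), ("APORTEI", pred_aportei)]:
    print(name,
          "AUC %.2f" % roc_auc_score(died, p),
          "O/E %.2f" % observed_to_expected(died, p),
          "HL chi2 %.1f" % hosmer_lemeshow(died, p))
```

An observed-to-expected ratio near 1 (as reported for APORTEI) suggests adequate overall calibration, while ratios well above 1 (as reported for EuroSCORE II in that cohort) indicate that the score under-predicts the mortality actually observed.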
abstract_id: PUBMED:20329485 A question of clinical reliability: observed versus EuroSCORE-predicted mortality after aortic valve replacement. Background And Aim Of The Study: The study aim was to determine the clinical reliability of the EuroSCORE as a predictor of operative risk in aortic valve replacement (AVR). Methods: Between 2000 and 2007, a total of 1497 patients underwent isolated elective AVR (no endocarditis, aortic procedure or re-do) at the authors' institution. A fitting of the deviation of expected mortality (EM) from observed mortality (OM) was performed and studied. To identify the cause of deviation of EM, a multivariate analysis of the EuroSCORE variables (using SAS JMP software) was conducted on the available data, and the results were re-evaluated. Results: An overestimation of EM was observed, and this was found to increase systematically with the rise in expected risk (0.3 ± 1.0% at 5% OM versus 23.8 ± 1.9% at 35% OM; p < 0.0001). A multivariate analysis of the EuroSCORE variables showed only age and preoperative neurological dysfunction as significant risk factors (p < 0.003 and < 0.04, respectively). All other EuroSCORE variables were statistically insignificant. Conclusion: The EuroSCORE is a solid and practical concept, but is clinically unreliable as a predictor of operative risk for elective AVR; hence, it should no longer be used for this purpose in its present form. It is recommended that a statistical correction of the EuroSCORE deviation be used, and that an updated EuroSCORE or a new risk stratification tool be developed to predict operative risk for patients undergoing heart valve surgery. abstract_id: PUBMED:21342767 Validation of the EuroSCORE risk models in Turkish adult cardiac surgical population. Objective: The aim of this study was to validate additive and logistic European System for Cardiac Operative Risk Evaluation (EuroSCORE) models on Turkish adult cardiac surgical population. Methods: TurkoSCORE project involves a reliable web-based database to build up Turkish risk stratification models. Current patient population consisted of 9443 adult patients who underwent cardiac surgery between 2005 and 2010. However, the additive and logistic EuroSCORE models were applied to only 8018 patients whose EuroSCORE determinants were complete. Observed and predicted mortalities were compared for low-, medium-, and high-risk groups. Results: The mean patient age was 59.5 years (± 12.1 years) at the time of surgery, and 28.6% were female. There were significant differences (all p<0.001) in the prevalence of recent myocardial infarction (23.5% vs 9.7%), moderate left ventricular function (29.9% vs 25.6%), unstable angina (9.8% vs 8.0%), chronic pulmonary disease (13.4% vs 3.9%), active endocarditis (3.2% vs 1.1%), critical preoperative state (9.0% vs 4.1%), surgery on thoracic aorta (3.7% vs 2.4%), extracardiac arteriopathy (8.6% vs 11.3%), previous cardiac surgery (4.1% vs 7.3%), and other than isolated coronary artery bypass graft (CABG; 23.0% vs 36.4%) between Turkish and European cardiac surgical populations, respectively. For the entire cohort, actual hospital mortality was 1.96% (n=157; 95% confidence interval (CI), 1.70-2.32). However, additive predicted mortality was 2.98% (p<0.001 vs observed; 95%CI, 2.90-3.00), and logistic predicted mortality was 3.17% (p<0.001 vs observed; 95%CI, 3.03-3.21).
The predictive performance of EuroSCORE models for the entire cohort was fair with 0.757 (95%CI, 0.717-0.797) AUC value (area under the receiver operating characteristic, AUC) for additive EuroSCORE, and 0.760 (95%CI, 0.721-0.800) AUC value for logistic EuroSCORE. Observed hospital mortality for isolated CABG was 1.23% (n=75; 95%CI, 0.95-1.51) while additive and logistic predicted mortalities were 2.87% (95%CI, 2.82-2.93) and 2.89% (95%CI, 2.80-2.98), respectively. AUC values for the isolated CABG subset were 0.768 (95%CI, 0.707-0.830) and 0.766 (95%CI, 0.705-0.828) for additive and logistic EuroSCORE models. Conclusion: The original EuroSCORE risk models overestimated mortality at all risk subgroups in Turkish population. Remodeling strategies for EuroSCORE or creation of a new model is warranted for future studies in Turkey. abstract_id: PUBMED:10456395 European system for cardiac operative risk evaluation (EuroSCORE). Objective: To construct a scoring system for the prediction of early mortality in cardiac surgical patients in Europe on the basis of objective risk factors. Methods: The EuroSCORE database was divided into developmental and validation subsets. In the former, risk factors deemed to be objective, credible, obtainable and difficult to falsify were weighted on the basis of regression analysis. An additive score of predicted mortality was constructed. Its calibration and discrimination characteristics were assessed in the validation dataset. Thresholds were defined to distinguish low, moderate and high risk groups. Results: The developmental dataset had 13,302 patients; calibration by Hosmer-Lemeshow chi-square (8 df) was 8.26 (P < 0.40) and discrimination by area under ROC curve was 0.79. The validation dataset had 1479 patients; calibration chi-square (10 df) was 7.5, P < 0.68, and the area under the ROC curve was 0.76. The scoring system identified three groups of risk factors with their weights (additive % predicted mortality) in brackets. Patient-related factors were age over 60 (one per 5 years or part thereof), female (1), chronic pulmonary disease (1), extracardiac arteriopathy (2), neurological dysfunction (2), previous cardiac surgery (3), serum creatinine >200 micromol/l (2), active endocarditis (3) and critical preoperative state (3). Cardiac factors were unstable angina on intravenous nitrates (2), reduced left ventricular ejection fraction (30-50%: 1, <30%: 3), recent (<90 days) myocardial infarction (2) and pulmonary systolic pressure >60 mmHg (2). Operation-related factors were emergency (2), other than isolated coronary surgery (2), thoracic aorta surgery (3) and surgery for postinfarct septal rupture (4). The scoring system was then applied to three risk groups. The low risk group (EuroSCORE 1-2) had 4529 patients with 36 deaths (0.8%), 95% confidence limits for observed mortality (0.56-1.10) and for expected mortality (1.27-1.29). The medium risk group (EuroSCORE 3-5) had 5977 patients with 182 deaths (3%), observed mortality (2.62-3.51), predicted (2.90-2.94). The high risk group (EuroSCORE 6 plus) had 4293 patients with 480 deaths (11.2%), observed mortality (10.25-12.16), predicted (10.93-11.54). Overall, there were 698 deaths in 14,799 patients (4.7%), observed mortality (4.37-5.06), predicted (4.72-4.95). Conclusion: EuroSCORE is a simple, objective and up-to-date system for assessing heart surgery, soundly based on one of the largest, most complete and accurate databases in European cardiac surgical history. We recommend its widespread use.
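Because the abstract above lists the additive weights explicitly, the additive EuroSCORE can be computed in a few lines. The sketch below encodes those published weights; the field names, the helper function, and the example patient are illustrative assumptions, and the age rule follows the abstract's wording (one point per 5 years or part thereof over 60).

```python
# Sketch of the additive EuroSCORE described above (weights taken from the
# abstract); function and field names are illustrative assumptions.
import math

WEIGHTS = {
    "female": 1, "chronic_pulmonary_disease": 1, "extracardiac_arteriopathy": 2,
    "neurological_dysfunction": 2, "previous_cardiac_surgery": 3,
    "creatinine_over_200": 2, "active_endocarditis": 3,
    "critical_preoperative_state": 3, "unstable_angina_iv_nitrates": 2,
    "recent_myocardial_infarction": 2, "pulmonary_systolic_over_60": 2,
    "emergency": 2, "other_than_isolated_cabg": 2,
    "thoracic_aorta_surgery": 3, "postinfarct_septal_rupture": 4,
}

def additive_euroscore(patient):
    score = 0
    if patient["age"] > 60:                      # 1 point per 5 years or part thereof
        score += math.ceil((patient["age"] - 60) / 5)
    lvef = patient.get("lvef", 60)
    if lvef < 30:                                # <30%: 3 points; 30-50%: 1 point
        score += 3
    elif lvef <= 50:
        score += 1
    score += sum(w for k, w in WEIGHTS.items() if patient.get(k, False))
    return score                                 # approximate predicted mortality, %

# Example: 72-year-old, LVEF 45%, active endocarditis, non-isolated CABG, emergency
example = {"age": 72, "lvef": 45, "active_endocarditis": True,
           "other_than_isolated_cabg": True, "emergency": True}
print(additive_euroscore(example))               # -> 3 (age) + 1 + 3 + 2 + 2 = 11
```

The returned sum approximates predicted operative mortality in percent and maps onto the low (1-2), medium (3-5) and high (6 or more) risk groups described in the abstract.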
abstract_id: PUBMED:29694650 Value of EuroSCORE II to predict operative mortality in infectious endocarditis surgery. Background: Stratification of surgical risk is an important step in cardiac surgery, often based on the estimation of operative mortality. The EuroSCORE II (ES II) incorporates several factors in the calculation of mortality, but few are specific to infective endocarditis (IE). Aim: Our study aimed to evaluate the predictive power of the ES II in the surgery of IE and to test its discriminating power according to certain specific parameters of the IE. Methods: 55 surgical procedures were carried out between January 2000 and June 2012 (37 IE on native valves and 18 on prostheses). The mortality observed was compared with the mortality predicted by ES II. The discriminant capacity of the ES II model was tested using the receiver operating characteristic (ROC) curve model by comparing the areas under the curve (AUCs). Results: For our cohort, the observed mortality was 30.9%, while the mortality predicted by ES II was 10.5%. In general, the ES II discriminatory capacity for mortality was satisfactory: the ROC AUC was 0.76. By analyzing the subgroups of the endocarditis, ES II lost its discriminating power: the ROC AUCs were 0.64 for staphylococcal infection, 0.62 for the annular abscess and 0.56 for endocarditis on prosthesis. Conclusion: The EuroSCORE II model has a satisfactory discriminating power in the IE. However, analysis of subgroups leads to a decrease in this discriminating power. Thus, some specific parameters of the IE, which do not appear in the EuroSCORE II model, should be taken into account when analyzing mortality. abstract_id: PUBMED:11603595 Risk factors for early mortality after valve surgery in Europe in the 1990s: lessons from the EuroSCORE pilot program. Background And Aim Of The Study: The characteristics of valve surgery are evolving. The study aim was to explore its demographics and risk factors in Europe in the 1990s, using the EuroSCORE database. Methods: For the EuroSCORE program, information on 98 variables regarding risk factors, procedures and outcome were collected for 5,672 patients undergoing valve surgery under cardiopulmonary bypass in 128 European centers. Bivariate (i.e. Mann-Whitney test or chi-square when appropriate), then logistic regression analyses were carried out to identify risk factors for early mortality. The predictive value of EuroSCORE was analyzed using the Hosmer-Lemeshow test and by computing the area under the receiver operating characteristic (ROC) curve. Results: Aortic valve stenosis was the most common diagnosis (47.6%), whilst mitral valve surgery accounted for 42% of procedures. Coronary surgery was performed concomitantly in 21% of cases. Hospital mortality was 6.1%. Predictive factors for early mortality were: age (p = 0.0001), preoperative serum creatinine >200 micromol/l (p = 0.014), previous heart surgery (p = 0.0001), poor left ventricular function (p = 0.008), chronic congestive heart failure (p = 0.0001), pulmonary hypertension (p = 0.0001), active acute endocarditis (p = 0.0001), emergency procedure (p = 0.05), critical preoperative status (p = 0.0001), tricuspid surgery (p = 0.015), aortic and mitral surgery (p = 0.002), combined thoracic surgery (p = 0.0001), and combined coronary surgery (p = 0.0001). The predictive value of EuroSCORE for mortality was good (area under the ROC curve = 0.75).
Conclusion: The 'valve' subset of the EuroSCORE database provides an instant picture of European valve surgery in the 1990s that can be used either for individual assessment, or for country- or institution-based epidemiological studies of risk factors and practices. Answer: The EuroSCORE model has been used for preoperative risk stratification in infective endocarditis (IE), but its performance and accuracy have been a subject of investigation in various studies. One study found that both the additive and logistic EuroSCORE models adequately stratified risk in IE, with good discriminative power for the entire cohort, as well as for specific valve positions and types. However, the study noted that further research with larger sample sizes was needed to confirm these initial results (PUBMED:17548201). Another study aimed to optimize and refine the EuroSCORE models specifically for IE by simplifying the scores and adding specific IE variables. The recalibrated scores performed better than the original EuroSCOREs, and the addition of variables such as fistula, staphylococci, and mitral location improved performance further. The study concluded that simplified EuroSCORE models with the addition of specific variables could lead to more accurate models for IE (PUBMED:29408350). A comparison between EuroSCORE I and EuroSCORE II found that both models satisfactorily stratified risk in active IE, but EuroSCORE II performed better overall. The study also noted that specific endocarditis features increased model complexity without unequivocally improving predictive ability (PUBMED:26547083). However, another study indicated that EuroSCORE II underestimated mortality after cardiac surgery for IE, and identified obesity, septic shock, large vegetation, and mechanical prosthetic valve IE as independent predictors of mortality. This study suggested that EuroSCORE II may not accurately predict post-cardiac surgery mortality in IE patients (PUBMED:26116921). The APORTEI score, a new risk prediction model for IE, was compared with logistic EuroSCORE and EuroSCORE II and showed significantly higher performance in terms of discrimination and calibration. This suggests that the APORTEI score may be more suitable for predicting surgical risk in IE patients (PUBMED:32553427). In summary, while the EuroSCORE models have been used for risk stratification in IE, their performance varies, and there is potential for improvement by refining the models or using alternative scoring systems like the APORTEI score. Some studies have shown that EuroSCORE can underestimate or overestimate mortality, and the addition of IE-specific variables may enhance the accuracy of risk predictions.
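A recurring theme in the studies summarized above is recalibration: rather than rebuilding a score, the published risk is converted to its logit and a new intercept and slope are fitted to the local cohort, which is also how calibration intercepts and slopes such as those quoted for the EuroSCORE models are interpreted (0 and 1 indicating perfect calibration). The sketch below shows this generic logistic-recalibration idea on synthetic data; it is not the exact procedure used by any of the cited papers.

```python
# Sketch: logistic recalibration of a published risk score on a local cohort.
# Synthetic data; this illustrates the generic technique, not any cited study.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 400
published_risk = rng.uniform(0.02, 0.6, n)          # e.g. logistic EuroSCORE risks
# simulate a cohort whose true risk is a miscalibrated version of the published one
true_risk = 1 / (1 + np.exp(-(-0.8 + 0.9 * np.log(published_risk / (1 - published_risk)))))
died = rng.binomial(1, true_risk)

lp = np.log(published_risk / (1 - published_risk))  # linear predictor (logit of risk)
fit = sm.Logit(died, sm.add_constant(lp)).fit(disp=False)
intercept, slope = fit.params                       # calibration intercept and slope

def recalibrated_risk(p):
    z = intercept + slope * np.log(p / (1 - p))
    return 1 / (1 + np.exp(-z))

print("calibration intercept %.2f, slope %.2f" % (intercept, slope))
print("recalibrated risk for a published 30%%: %.1f%%" % (100 * recalibrated_risk(0.30)))
```

In this framing, an intercept near 0 and a slope near 1 mean the published score already fits the local cohort; large deviations motivate the kind of recalibrated or score-specific models described above.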
Instruction: Is autonomic function associated with left ventricular systolic function in Chagas heart disease patients undergoing treatment for heart failure? Abstracts: abstract_id: PUBMED:24861302 Is autonomic function associated with left ventricular systolic function in Chagas heart disease patients undergoing treatment for heart failure? Introduction: The association between cardiac autonomic and left ventricular (LV) dysfunction in Chagas disease (ChD) is controversial. Methods: A standardized protocol that includes the Valsalva maneuver, a respiratory sinus arrhythmia (RSA) test, and an echocardiographic examination was used. Spearman correlation coefficients (rho) were used to investigate associations. Results: The study population consisted of 118 ChD patients undergoing current medical treatment, with an average LV ejection fraction of 51.4±2.6%. The LV ejection fraction and diastolic dimension were correlated with the Valsalva index (rho=0.358, p<0.001 and rho=-0.266, p=0.004, respectively) and the RSA (rho=0.391, p<0.001 and rho=-0.311, p<0.001, respectively). Conclusions: The impairment of LV function is directly associated with a reduction of cardiac autonomic modulation in ChD. abstract_id: PUBMED:8561672 Relationship between ventricular arrhythmia and cardiac function in Chagas disease. Purpose: The association between heart failure and arrhythmias is well established in different cardiopathies. There are no studies in Chagas' myocardiopathy that analyze the relation between arrhythmias and left ventricular function. Methods: We studied 629 patients with Chagas' disease, divided into 3 groups, according to ejection fraction obtained through echocardiographic study (normal, between 0.64 and 0.45, and below 0.44). Results: At conventional ECG, the presence of ventricular arrhythmias in the 3 groups was, respectively, 15%, 36% and 64%, showing a higher incidence as left ventricular function worsened. Conclusion: Ventricular arrhythmias in Chagas' disease are frequent in patients with normal ejection fraction, and become more intense as ventricular dysfunction progresses. abstract_id: PUBMED:8546548 Congestive heart failure with normal left ventricular systolic function. Clinical approaches to the diagnosis and treatment of diastolic heart failure. The syndrome of congestive heart failure with preserved left ventricular systolic function is common in clinical practice. The signs and symptoms of the disorder are similar to those of congestive heart failure with left ventricular systolic dysfunction, underscoring a need for routine evaluation of left and right ventricular systolic function in patients with congestive heart failure. The syndrome may be related to anatomic abnormalities that increase the resistance to ventricular filling, or to physiologic abnormalities of myocardial relaxation or compliance. Advancing age, often in association with hypertension, coronary artery disease, tachycardia, and atrial fibrillation, is commonly associated with the disorder. Randomized controlled clinical trials are needed to evaluate the efficacy of various therapeutic agents in reducing the risks associated with diastolic heart failure. abstract_id: PUBMED:36228327 Improved Right Ventricular Systolic Function After Cardiac Resynchronization Therapy in Patients With Heart Failure.
Background: Since the introduction of cardiac resynchronization therapy (CRT) to improve left ventricular function, the effect of CRT on the right ventricle in patients with heart failure has not been well described. Methods: We evaluated the effect of CRT on right ventricular systolic function in 20 patients (80% men; mean [SD] age, 58.5 [9.8] y) with cardiomyopathy and right ventricular systolic dysfunction (New York Heart Association class III or IV, left ventricular ejection fraction ≤35%, and QRS interval ≥120 ms). The median follow-up time was 15 months. Right ventricular systolic dysfunction, defined as a tricuspid annular plane systolic excursion (TAPSE) index of 16 mm or less, was evaluated in patients before and after CRT. Results: Twelve (60%) patients had ischemic cardiomyopathy, and 12 (60%) patients had left bundle branch block detected using surface electrocardiogram. The mean (SD) QRS duration was 160.5 (24.4) ms. From before CRT to the time of follow-up after CRT, the mean (SD) ejection fraction increased significantly from 22.5% (5.6%) to 29.4% (7.4%) (P < .001). The mean (SD) TAPSE index also increased significantly from 13.70 (1.78) mm to 16.50 (4.77) mm (P = .018). Eleven (55%) patients showed improved right ventricular systolic function (TAPSE ≥16 mm) after CRT. Patients with a favorable right ventricular response to CRT were significantly older (64.6 [8.2] y vs 53.6 [8.4] y, respectively) and more likely to have nonischemic origin of cardiomyopathy than were patients with unimproved right ventricular function (66.7% vs 18.2%, respectively). Conclusion: Our findings indicate that CRT is associated with improved right ventricular systolic function in patients with heart failure and right ventricular systolic dysfunction. Patients with nonischemic heart disease more often show improved right ventricular function after CRT. abstract_id: PUBMED:9829322 Chagas' heart disease and the autonomic nervous system. The autonomic nervous system is abnormal in patients with advanced Chagas' heart disease. Most researchers consider these autonomic abnormalities as primary, specific and irreversible. However, when and why these abnormalities appear in the natural history of Chagas' disease is still the subject of intense controversy. Recent morphological and functional studies strongly suggest that the sympathetic and the parasympathetic abnormalities are preceded by myocardial damage and left ventricular dysfunction. Moreover, chagasic patients with cardiac failure benefit from drugs which antagonize neurohumoral activation. Consequently, the abnormalities of the autonomic nervous system of chagasic patients are very likely secondary and partially reversible. abstract_id: PUBMED:24553982 Echocardiographic parameters and survival in Chagas heart disease with severe systolic dysfunction. Background: Echocardiography provides important information on the cardiac evaluation of patients with heart failure. The identification of echocardiographic parameters in severe Chagas heart disease would help implement treatment and assess prognosis. Objective: To correlate echocardiographic parameters with the endpoint cardiovascular mortality in patients with ejection fraction < 35%.
Methods: Study with retrospective analysis of pre-specified echocardiographic parameters prospectively collected from 60 patients included in the Multicenter Randomized Trial of Cell Therapy in Patients with Heart Diseases (Estudo Multicêntrico Randomizado de Terapia Celular em Cardiopatias) - Chagas heart disease arm. The following parameters were collected: left ventricular systolic and diastolic diameters and volumes; ejection fraction; left atrial diameter; left atrial volume; indexed left atrial volume; systolic pulmonary artery pressure; integral of the aortic flow velocity; myocardial performance index; rate of increase of left ventricular pressure; isovolumic relaxation time; E, A, Em, Am and Sm wave velocities; E wave deceleration time; E/A and E/Em ratios; and mitral regurgitation. Results: In the mean 24.18-month follow-up, 27 patients died. The mean ejection fraction was 26.6 ± 5.34%. In the multivariate analysis, the parameters ejection fraction (HR = 1.114; p = 0.3704), indexed left atrial volume (HR = 1.033; p < 0.0001) and E/Em ratio (HR = 0.95; p = 0.1261) were excluded. The indexed left atrial volume was an independent predictor in relation to the endpoint, and values > 70.71 mL/m2 were associated with a significant increase in mortality (log rank p < 0.0001). Conclusion: The indexed left atrial volume was the only independent predictor of mortality in this population of Chagasic patients with severe systolic dysfunction. abstract_id: PUBMED:2148434 Disorders of diastolic function in chronic left ventricular insufficiency. Abnormalities in the diastolic function of the left ventricular pump are the common determinant and, above all, the earliest manifestation of all forms of chronic left ventricular failure, whether or not the left ventricular systolic function is abnormal. Congestive signs, in particular, are directly related to abnormalities of ventricular filling. Primary diastolic dysfunction is the cause of left ventricular failure in about 40% of cases, but it may also be observed in almost all cardiopathies. In myocardial ischaemia the pressure-volume relation is displaced upwards owing to a slowed down, inhomogeneous and incomplete relaxation. Left ventricular hypertrophy, whether it is due to excessive pressure (arterial hypertension, aortic stenosis) or reflects a primary hypertrophic cardiomyopathy, is associated with a slowing down of ventricular relaxation and a reduction of left ventricular diastolic distensibility, even though the ventricular pump systolic function remains normal for a long time. Apart from alterations in the distensibility of the ventricular muscle, ventricular dilatation alters ventricular filling by forcing the ventricle to function on the vertical part of its diastolic pressure-volume relation. Nowadays, the aged heart is the most frequent cause of heart failure with normal systolic function. In all cases dysrhythmias and atrioventricular desynchronization act as aggravating factors. Treatment is often difficult since positively inotropic drugs or arterial vasodilators frequently have a modest or even deleterious effect. abstract_id: PUBMED:28805026 Correlation of 6-min walk test with left ventricular function and quality of life in heart failure due to Chagas disease. Objectives: To evaluate the correlation of the total distance walked during the six-minute walk test (6MWT) with left ventricular function and quality of life in patients with Chagas Disease (ChD) complicated by heart failure.
Methods: This is a cross-sectional study of adult patients with ChD and heart failure diagnosed based on Framingham criteria. 6MWT was performed following international guidelines. New York Heart Association functional class, brain natriuretic peptide (BNP) serum levels, echocardiographic parameters and quality of life (SF-36 and MLHFQ questionnaires) were determined and their correlation with the distance covered at the 6MWT was tested. Results: Forty adult patients (19 male; 60 ± 12 years old) with ChD and heart failure were included in this study. The mean left ventricular ejection fraction was 35 ± 12%. Only two patients (5%) ceased walking before 6 min had elapsed. There were no cardiac events during the test. The average distance covered was 337 ± 105 metres. The distance covered presented a negative correlation with BNP (r = -0.37; P = 0.02), MLHFQ quality-of-life score (r = -0.54; P = 0.002), pulmonary artery systolic pressure (r = -0.42; P = 0.02) and the degree of diastolic dysfunction (r = -0.36; P = 0.03) and mitral regurgitation (r = -0.53; P = 0.0006) and positive correlation with several domains of the SF-36 questionnaire. Conclusions: The distance walked during the 6MWT correlates with BNP, quality of life and parameters of left ventricular diastolic function in ChD patients with heart failure. We propose this test to be adopted in endemic areas with limited resources to aid in the identification of patients who need referral for tertiary centres for further evaluation and treatment. abstract_id: PUBMED:25584097 Left ventricular diastolic function in hypertension: methodological considerations and clinical implications. The assessment of left ventricular (LV) diastolic function should be an integral part of a routine examination of hypertensive patient; indeed when LV diastolic function is impaired, it is possible to have heart failure even with preserved LV ejection fraction. Left ventricular diastolic dysfunction (LVDD) occurs frequently and is associated to heart disease. Doppler echocardiography is the best tool for early LVDD diagnosis. Hypertension affects LV relaxation and when left ventricular hypertrophy (LVH) occurs, it decreases compliance too, so it is important to calculate Doppler echocardiography parameters, for diastolic function evaluation, in all hypertensive patients. The purpose of our review was to discuss about the strong relationship between LVDD and hypertension, and their relationship with LV systolic function. Furthermore, we aimed to assess the relationship between the arterial stiffness and LV structure and function in hypertensive patients. abstract_id: PUBMED:27744513 Effects of Obesity on Cardiovascular Hemodynamics, Cardiac Morphology, and Ventricular Function. Obesity produces a variety of hemodynamic alterations that may cause changes in cardiac morphology which predispose to left and right ventricular dysfunction. Various neurohormonal and metabolic alterations commonly associated with obesity may contribute to these abnormalities of cardiac structure and function. These changes in cardiovascular hemodynamics, cardiac morphology, and ventricular function may, in severely obese patients, predispose to heart failure, even in the absence of other forms of heart disease (obesity cardiomyopathy). In normotensive obese patients, cardiac involvement is commonly characterized by elevated cardiac output, low peripheral vascular resistance, and increased left ventricular (LV) end-diastolic pressure. 
Sleep-disordered breathing may lead to pulmonary arterial hypertension and, in association with left heart failure, may contribute to elevation of right heart pressures. These alterations, in association with various neurohormonal and metabolic abnormalities, may produce LV hypertrophy; impaired LV diastolic function; and less commonly, LV systolic dysfunction. Many of these alterations are reversible with substantial voluntary weight loss. Answer: Yes, autonomic function is associated with left ventricular systolic function in Chagas heart disease patients undergoing treatment for heart failure. A study that included a standardized protocol with the Valsalva maneuver and a respiratory sinus arrhythmia (RSA) test, along with an echocardiographic examination, found that the impairment of left ventricular (LV) function is directly associated with a reduction of cardiac autonomic modulation in Chagas disease (ChD) patients. Specifically, the LV ejection fraction and diastolic dimension were correlated with the Valsalva index and the RSA, indicating a direct relationship between autonomic function and LV systolic function in these patients (PUBMED:24861302).
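The association described in this answer is a Spearman rank correlation between autonomic indices (Valsalva index, RSA) and echocardiographic measures. For readers unfamiliar with how such a coefficient is obtained, a minimal sketch on paired measurements is shown below; the values are invented for illustration and are not the published data.

```python
# Sketch: Spearman correlation between an autonomic index and LV ejection fraction.
# Values are invented for illustration; they are not the data of the cited study.
from scipy.stats import spearmanr

valsalva_index = [1.1, 1.3, 1.6, 1.2, 1.8, 1.4, 1.7, 1.5, 1.9, 1.25]
lvef_percent   = [32,  44,  55,  35,  62,  40,  58,  50,  60,  38]

rho, p_value = spearmanr(valsalva_index, lvef_percent)
print(f"rho = {rho:.3f}, p = {p_value:.4f}")   # positive rho: more preserved autonomic
                                               # modulation accompanies higher LVEF
```

Because Spearman's rho works on ranks, it captures the monotonic relationship reported in the study without assuming that the autonomic indices and ejection fraction are linearly related.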
Instruction: Does guideline knowledge affect treatment compliance among emergency doctors? Abstracts: abstract_id: PUBMED:25118657 Does guideline knowledge affect treatment compliance among emergency doctors? Background: The insufficient adoption of internationally accepted clinical guidelines may lead to less than adequate care of patients with asthma. Objective: To evaluate the knowledge and treatment compliance with Global Initiative of Asthma (GINA, 2011) asthma treatment guidelines among emergency physicians (EPs) at a referral hospital in northern Malaysia. Methods: A cross-sectional study was designed in the territory-level referral hospital in northern Malaysia. Twenty-seven EPs were asked to complete an asthma guideline questionnaire to assess their knowledge regarding GINA 2011 asthma treatment guidelines. A total of 810 patients were enrolled, and 30 patients were selected per physician. The authors evaluated the physicians' compliance with GINA 2011 asthma treatment guidelines. Results: Of 27 EPs, 20 (74.1%) had adequate knowledge of GINA 2011 asthma treatment guidelines. A total of 615 (75.9%) patients received guideline-recommended emergency treatment. Shortness of breath (n = 436, 53.8%) was the most frequently reported chief complaint. Furthermore, there was a significant but weak association between knowledge of the guideline and treatment compliance among emergency doctors (P = 0.003, φ = 0.110). Moreover, there was no significant change in therapy for patients with comorbid conditions. The mean age of respondents was 27.3 years. Conclusions: Overall, a fair level of guideline knowledge and treatment compliance was noted among EPs. Doctors with adequate guideline knowledge were more likely to comply with GINA 2011 asthma treatment guidelines. abstract_id: PUBMED:34322429 A study to assess the knowledge and awareness among young doctors about emergency contraception. Background: Emergency contraception (EC) is the contraception on demand which can prevent millions of unintended pregnancies. The knowledge and awareness of young doctors towards EC, who may be the first contact physicians of the society, has not been well studied. This study aims to assess the knowledge and awareness of young doctors in a teaching institute in northeast India. Methodology: This study was carried out among 200 young doctors and included 100 interns and 100 postgraduate trainees (PGT) and senior resident doctors (SRD) from January 2020 to March 2020 to compare their knowledge and awareness about EC. A predesigned self-administered 22-item questionnaire was used to collect data. Observation: In our study, the majority of the doctors in both groups were aware of levonorgestrel 1.5 mg tablet as EC (93% and 95%) and more interns than PG SRD were aware of its easy availability (86%, 35%, P value < 0.0001), government supply (77%, 30%, P value < 0.0001), and that copper intrauterine contraceptive device (IUCD) can be used as EC up to 120 h (89%, 60%, P value < 0.0001). Most doctors were unaware of ulipristal acetate. Most PGT SRDs believe that EC promotes irresponsible behavior, sexually transmitted diseases, and promiscuity but most interns did not agree (P value < 0.0001 for each). More than 65% of doctors in both groups were aware of the mechanism of action of EC. PGT SRD were more aware of the effectiveness of EC (62%, 80%, P value 0.0078). More interns were aware that EC affects the next period (53%, 25%, P value < 0.0001).
Conclusion: Interns were more aware about contraception than PGT and SRD, especially about government supply of EC, about IUCD, and behavioral aspect like promoting irresponsible behavior, sexually transmitted disease, and promiscuity. abstract_id: PUBMED:31089025 Knowledge and practice assessment, and self reported barriers to guideline based asthma management among doctors in Nigeria. Background And Objective: Doctors' knowledge contributes to practice and quality of care rendered to patients. To assess the knowledge and practice assessment and self reported barriers to guideline-based management among doctors. Subjects And Methods: This was a cross-sectional study among doctors from various part of the country attending a continuing medical education (CME) program in Lagos, Nigeria. We used a self-administered, pretested, semistructured, validated questionnaire based on the Global Initiative for Asthma (GINA) guideline. Results: Of the 98 participants, 41 (42%) and 18 (18.4%) had good level of asthma knowledge and practice, respectively. There was no relationship between level of knowledge and practice and the level of knowledge was not associated with the practice (X2 = 6.56, P = 0.16). The most reported barriers to good guideline-based practice were the unavailability of diagnostic and treatment facilities (44.3%), poor medication adherence (25.7%), and high cost of asthma medications (18.6%). Conclusion: The level of asthma knowledge and practice, respectively, among doctors in Nigeria is low and there is no relationship between level of knowledge and practice. Unavailability of diagnostic and treatment facilities, poor medication adherence, and high cost of medications are important barriers to good practice. There is a need to improve asthma education among doctors in Nigeria. Addressing barriers to good practice is essential for the translation of knowledge into practice. abstract_id: PUBMED:25097852 Knowledge of anaphylaxis among Emergency Department staff. Background: Anaphylaxis is an emergency condition that requires immediate, accurate diagnosis and appropriate management. However, little is known about the level of knowledge of doctors and nurses treating these patients in the Emergency Department. Objective: To determine the knowledge of doctors and nurses in the Emergency Department on the recent definition and treatment recommendations of anaphylaxis. Methods: We surveyed doctors and nurses of all grades in a tertiary Hospital Emergency Department using a standardized anonymous questionnaire. Results: We had a total of 190 respondents-47 doctors and 143 nurses. The response rate was 79.7% for doctors and 75.3% for nurses. Ninety-seven point eight percent of the doctors and 83.7% of the nurses chose the accepted definition of anaphylaxis. High proportions of doctors (89-94%) and nurses (65-72%) diagnose anaphylaxis in the three scenarios demonstrating anaphylaxis and anaphylactic shock. Forty-two point six percent of the doctors and 76.9% of the nurses incorrectly diagnosed single organ involvement without hypotension as anaphylaxis. As for treatment, 89.4% of the doctors indicated adrenaline as the drug of choice and 85.1% chose intramuscular route for adrenaline administration. Among the nurses, 40.3% indicated adrenaline as the drug of choice and 47.4% chose the intramuscular route for adrenaline. Conclusion: High proportion of doctors and nurses are able to recognize the signs and symptoms of anaphylaxis, although there is a trend towards over diagnosis. 
There is good knowledge on drug of choice and the accepted route of adrenaline among the doctors. However, knowledge of treatment of anaphylaxis among nurses was moderate and can be improved. abstract_id: PUBMED:25364598 Knowledge of Asthma among Doctors Practicing in Three South Eastern States of Nigeria. Background: Asthma is a chronic airway disease that has a significant impact on patients with substantial global socioeconomic burden. Appropriate knowledge by health care practitioners is important in the management of asthma. Aim: The aim was to assess the knowledge of asthma among doctors practicing in health care facilities in three South-Eastern states of Nigeria. Subjects And Methods: This was a descriptive cross-sectional study. The participants were selected using multi-staged sampling method and interviewed with structured, self-administered questionnaires. Comparison of the different outcome variables using the Chi-square (categorical) and Student's t-test (noncategorical) with the characteristics of the participants were done. Result: A total of 283 doctors were interviewed. Eighty-eight percent of them identified asthma as a common disease in our environment, (P = 0.04) but unrelated to socioeconomic status. Knowledge of epidemiology was poor among medical officers and registrars (P = 0.04). Most of the doctors (80%)(226/283) recognized the pathogenic significance of bronchospasm in exacerbation, while 58.6% (166/283) of them considered chronic inflammation as a significant factor in asthma pathogenesis P &lt; 0.001. Majority of the doctors (84.1%) (238/283) were aware of the use of steroids in acute exacerbation, while 59.4% (168/283) considered aminophylline as the first line medication in exacerbation (P = 0.02). Knowledge about the use of steroids as controller medication was noted in 1.7% (5/283) of the respondents. Only 47.3% (134/283) of the participants were aware of the Global Initiative on Asthma guideline, (P = 0.03). Conclusion: There was good knowledge of epidemiology and clinical features of asthma, but a small number of the doctors had knowledge of pathophysiology and treatment of the disease. For best practices in asthma management, there is a need for further education. abstract_id: PUBMED:31625792 Insufficient knowledge and inappropriate practices of emergency doctors towards tetanus prevention in trauma patients: a pilot survey. China has a shocking number of tetanus cases in the world, but little research has investigated doctors' knowledge of and practices in tetanus prophylaxis, especially tetanus vaccination. To this end, we conducted a pilot study on 197 emergency doctors using a mixed method of web-based (163; 82.8%) and paper-based (34; 17.2%) surveys. There was no difference between the two groups except for the percentage of doctors receiving a tetanus booster in the past 10 years and the responses to question 11. Surprisingly, only 28.9% of doctors had received formal training on tetanus immunization and only 21.3% had themselves received a tetanus vaccine booster in the past 10 years. Furthermore, only 14.2% of the respondents confirmed the availability of the tetanus vaccine in their respective institutions. Finally, the correct rates and Tetanus-immune-globulin (TIG)-only option rates for questions 11-15 were unsatisfactory. 
Our results showed that most emergency doctors' knowledge and practices strayed from the recommendations of Advisory Committee on Immunization Practices (ACIP): 1) TIG alone for most trauma patients instead of vaccine was an overused treatment approach. 2) Most of the emergency doctors lacked formal training on and knowledge of tetanus vaccination. 3) Even the emergency doctors themselves were not properly vaccinated. 4) The tetanus vaccine was only available in a small number of the respondents' institutions. The findings of this study suggest an urgent need to improve this dire situation. abstract_id: PUBMED:34769813 Knowledge, Attitude, and Practice of Evidence-Based Medicine among Emergency Doctors in Kelantan, Malaysia. This study aimed to determine the prevalence of high levels of knowledge, positive attitude, and good practice on evidence-based medicine (EBM) and identify the associated factors for practice score on EBM among emergency medicine doctors in Kelantan, Malaysia. This cross-sectional study was conducted in government hospitals in Kelantan. The data were collected from 200 emergency physicians and medical officers in the emergency department using the Noor Evidence-Based Medicine Questionnaire. Simple and general linear regressions analyses using SPSS were performed. A total of 183 responded, making a response rate of 91.5%. Of them, 49.7% had a high level of knowledge, 39.9% had a positive attitude and 2.1% had good practice. Sex, race, the average number of patients seen per day, internet access in workplace, having online quick reference application, and attitude towards EBM were significantly associated with EBM practice scores. It is recommended that appropriate authorities provide emergency doctors with broader access to evidence resources. EBM skill training should be enhanced in the current medical school curriculums. abstract_id: PUBMED:31609070 Medical doctors' knowledge of dental trauma management: A review. Education in dental trauma is extremely important to promote knowledge on the assessment and management of a traumatized tooth. Medical doctors are normally only required to manage the emergency phase of traumatic dental injury (TDI) treatment before referring to a dentist, endodontist or oral and maxillofacial surgeon for continuing care. Medical doctors who possess sufficient theoretical knowledge and are competent enough clinically to handle TDI can provide a higher standard of treatment care and ultimately achieve a better patient outcome. The aim of this literature review was to assess the extent of medical doctors' knowledge of dental trauma management for injuries in the following four areas: (a) tooth structure; (b) to the supporting bone; (c) to the periodontal tissues; and (d) to the soft tissues. Based on the findings from this literature review, an overall deficiency in knowledge and confidence in managing dental trauma has been identified. Knowledge and understanding to categorize TDI using the same classification of dental injuries commonly used amongst dentists would allow medical doctors to better manage and communicate with dental colleagues concerning referral for further care. If the medical education curriculum provided medical doctors with more information and skills for the management of dental trauma and an understanding of the importance of early management, then more favourable outcomes may prevail for dental trauma patients. 
abstract_id: PUBMED:31122050 Physician Compliance With Bronchiolitis Guidelines in Pediatric Emergency Departments. An online survey was administered through the American Academy of Pediatrics (AAP) Section of Emergency Medicine Survey Listserv in Fall 2017. Overall compliance was measured as never using chest X-rays, viral testing, bronchodilators, or systemic steroids. Practice compliance was measured as never using those modalities in a clinical vignette. Chi-square tests assessed differences in compliance between modalities. t-tests assessed differences in agreement with each AAP statement. Multivariate logistic regression determined factors associated with overall compliance. The response rate was 47%. A third (35%) agreed with all 7 AAP statements. There was less compliance with ordering a bronchodilator compared with chest X-ray, viral testing, or systemic steroid. There was no association between compliance and either knowledge or agreement with the guideline. Physicians with institutional bronchiolitis guidelines were more likely to be practice compliant. Few physicians were compliant with the AAP bronchiolitis guideline, with bronchodilator misuse being most pronounced. Institutional bronchiolitis guidelines were associated with physician compliance. abstract_id: PUBMED:28031617 Knowledge, awareness and practice of ethics among doctors in tertiary care hospital. Introduction: With the advancement of healthcare and medical research, doctors need to be aware of the basic ethical principles. This cross-sectional study is an attempt to assess the knowledge, awareness, and practice of health-care ethics among health-care professionals. Materials And Methods: After taking written informed consent, a standard questionnaire was administered to 117 doctors. No personal information was recorded on the questionnaire so as to ensure the confidentiality and anonymity of participants. Data analysis was done using SPSS version 21 (IBM Corp., Armonk, NY, USA). Results: A statistically significant difference was observed between the opinions of consultants and senior residents (SRs) on issues such as adherence to confidentiality; the paternalistic attitude of doctors (doctors should do their best for the patient irrespective of the patient's opinion); whether the doctor's decision should be final in case of disagreement; and interest in learning ethics (P < 0.05). However, no difference was reported among them with respect to patient wishes, informing the patient regarding wrongdoing, informing close relatives, seeking consent for children, and patients' consent for procedures. Furthermore, no significant difference was observed between the two with respect to the practice of health-care ethics. Surprisingly, the responses of clinical and nonclinical faculty did not differ as far as awareness and practice of ethics were concerned. Conclusion: A significant difference was observed in the knowledge, awareness, and practice of ethics between consultants and SRs. Conferences, symposia, and workshops on health-care ethics may act as a means of sensitizing doctors and thus will help to bridge this gap and protect the well-being and confidentiality of the patients. Such an effort may bring about harmonious change in the doctor-patient relationship. Answer: Yes, guideline knowledge does affect treatment compliance among emergency doctors.
The study conducted among emergency physicians (EPs) at a referral hospital in northern Malaysia found that doctors with adequate knowledge of the Global Initiative of Asthma (GINA, 2011) asthma treatment guidelines were more likely to comply with those guidelines. There was a significant, albeit weak, association between knowledge of the guideline and treatment compliance among emergency doctors (P = 0.003, φ = 0.110) (PUBMED:25118657). This finding is consistent with other studies that have assessed the knowledge and practices of emergency doctors in various contexts. For instance, a study on emergency contraception knowledge among young doctors in northeast India showed that interns were more aware of contraception than postgraduate trainees and senior resident doctors, especially regarding government supply of emergency contraception, intrauterine contraceptive devices as emergency contraception, and behavioral aspects such as promoting irresponsible behavior, sexually transmitted disease, and promiscuity (PUBMED:34322429). Similarly, a study in Nigeria found that the level of asthma knowledge and practice among doctors was low, and there was no relationship between the level of knowledge and practice. The study identified barriers to good guideline-based practice, such as the unavailability of diagnostic and treatment facilities, poor medication adherence, and high cost of asthma medications (PUBMED:31089025). In Malaysia, a study aimed to determine the prevalence of high levels of knowledge, positive attitude, and good practice on evidence-based medicine (EBM) among emergency medicine doctors found that only a small percentage had good practice, and factors such as sex, race, the average number of patients seen per day, internet access in the workplace, having an online quick reference application, and attitude towards EBM were significantly associated with EBM practice scores (PUBMED:34769813). These studies collectively suggest that knowledge of guidelines is an important factor that can influence the compliance of emergency doctors with treatment protocols, although other factors such as access to resources, training, and attitudes towards evidence-based medicine also play a significant role.
Instruction: Does net energy cost of swimming affect time to exhaustion at the individual's maximal oxygen consumption velocity? Abstracts: abstract_id: PUBMED:16998440 Does net energy cost of swimming affect time to exhaustion at the individual's maximal oxygen consumption velocity? Aim: The purpose of the present study was to examine the relationship between time limit at the minimum velocity that elicits the individual's maximal oxygen consumption (TLim-v VO2max) and three swimming economy related parameters: the net energy cost corresponding to v VO2max (Cv VO2max), the slope of the regression line obtained from the energy expenditure (E) and corresponding velocities during an incremental test (C(slope)) and the ratio between the mean E value and the velocity mean value of the incremental test (C(inc)). Complementarily, we analysed the influence of Cv VO2max, C(slope) and C(inc) on TLim-v VO2max by swimming level. Methods: Thirty swimmers divided into 10 low-level (LLS) (4 male and 6 female) and 20 highly trained swimmers (HTS) (10 of each gender) performed an incremental test for v VO2max assessment and an all-out TLim-v VO2max test. Results: TLim-v VO2max, v VO2max, Cv VO2max, C(slope) and C(inc) averaged, respectively, 313.8 ± 63 s, 1.16 ± 0.1 m·s(-1), 13.2 ± 1.9 J·kg(-1)·m(-1), 28 ± 3.2 J·kg(-1)·m(-1) and 10.9 ± 1.8 J·kg(-1)·m(-1) in the LLS and 237.3 ± 54.6 s, 1.4 ± 0.1 m·s(-1), 15.6 ± 2.2 J·kg(-1)·m(-1), 36.8 ± 4.5 J·kg(-1)·m(-1) and 13 ± 2.3 J·kg(-1)·m(-1) in the HTS. TLim-v VO2max was inversely related to C(slope) (r = -0.77, P < 0.001), and to v VO2max (r = -0.35, P = 0.05), although no relationships with the Cv VO2max and the C(inc) were observed. Conclusions: The findings of this study confirmed exercise economy as an important factor for swimming performance. The data demonstrated that the swimmers with higher C(slope) and v VO2max performed shorter times in TLim-v VO2max efforts. abstract_id: PUBMED:28245246 Oxygen uptake kinetics and energy system's contribution around maximal lactate steady state swimming intensity. The purpose of this study was to examine the oxygen uptake (V̇O2) kinetics and the energy systems' contribution at 97.5, 100 and 102.5% of the maximal lactate steady state (MLSS) swimming intensity. Ten elite female swimmers performed three-to-five 30 min submaximal constant swimming bouts at imposed paces for the determination of the swimming velocity (v) at 100%MLSS based on a 7 x 200 m intermittent incremental protocol until voluntary exhaustion to find the v associated with the individual anaerobic threshold. V̇O2 kinetics (cardiodynamic, primary and slow component phases) and the aerobic and anaerobic energy contributions were assessed during the continuous exercises, with the former studied for the beginning and the second phase of exercise. Subjects showed similar time delay (TD) (mean = 11.5-14.3 s) and time constant (τp) (mean = 13.8-16.3 s) as a function of v, but reduced amplitude of the primary component for 97.5% (35.7 ± 7.3 mL.kg.min-1) compared to 100 and 102.5%MLSS (41.0 ± 7.0 and 41.3 ± 5.4 mL.kg.min-1, respectively), and τp decreased (mean = 9.6-10.8 s) during the second phase of exercise. Although the slow component did not occur for all swimmers at all swim intensities, when observed it tended to increase as a function of v. Moreover, the total energy contribution was almost exclusively aerobic (98-99%) at 97.5, 100 and 102.5%MLSS.
We suggest that well-trained endurance swimmers with fast TD and τp values may be able to adjust the physiological requirements more quickly, minimizing the amplitude of the slow component, a parameter associated with delayed fatigue and increased time to exhaustion during performance; however, these fast adjustments were not able to control the progressive fatigue that occurred slightly above MLSS, and most swimmers reached exhaustion before completing the 30 min swim. abstract_id: PUBMED:36246138 Time limit and V̇O2 kinetics at maximal aerobic velocity: Continuous vs. intermittent swimming trials. The time sustained during exercise with oxygen uptake (V̇O2) reaching maximal rates (V̇O2peak) or near peak responses (i.e., above the second ventilatory threshold (t@VT2) or 90% V̇O2peak (t@90%V̇O2peak)) is recognized as the training pace required to enhance aerobic power and exercise tolerance in the severe domain (time-limit, tLim). This study compared physiological and performance indexes during continuous and intermittent trials at maximal aerobic velocity (MAV) to analyze each exercise schedule, supporting their roles in conditioning planning. Twenty-two well-trained swimmers completed a discontinuous incremental step-test for V̇O2peak, VT2, and MAV assessments. Two other tests were performed in randomized order, to compare continuous (CT) vs. intermittent trials (IT100) at MAV until exhaustion, to determine peak oxygen uptake (Peak-V̇O2) and V̇O2 kinetics (V̇O2K). Distance and time variables were registered to determine the tLim, t@VT2, and t@90%V̇O2peak. Blood lactate concentration ([La-]) was analyzed, and rate of perceived exertion (RPE) was recorded. The tests were conducted using a breath-by-breath apparatus connected to a snorkel for pulmonary gas sampling, with pacing controlled by an underwater visual pacer. V̇O2peak (55.2 ± 5.6 ml·kg·min-1) was only reached in CT (100.7 ± 3.1 %V̇O2peak). In addition, high V̇O2 values were reached at IT100 (96.4 ± 4.2 %V̇O2peak). V̇O2peak was highly correlated with Peak-V̇O2 during CT (r = 0.95, p < 0.01) and IT100 (r = 0.91, p < 0.01). Compared with CT, the IT100 presented significantly higher values for tLim (1,013.6 ± 496.6 vs. 256.2 ± 60.3 s), distance (1,277.3 ± 638.1 vs. 315.9 ± 63.3 m), t@VT2 (448.1 ± 211.1 vs. 144.1 ± 78.8 s), and t@90%V̇O2peak (321.9 ± 208.7 vs. 127.5 ± 77.1 s). V̇O2K time constants (IT100: 25.9 ± 9.4 vs. CT: 26.5 ± 7.5 s) were correlated between tests (r = 0.76, p < 0.01). Between CT and IT100, tLim were not related, and RPE (8.9 ± 0.9 vs. 9.4 ± 0.8) and [La-] (7.8 ± 2.7 vs. 7.8 ± 2.8 mmol·l-1) did not differ between tests. MAV is suitable for planning swimming intensities requiring V̇O2peak rates, whatever the exercise schedule (continuous or intermittent). Therefore, the results suggest IT100 as a preferable training schedule over CT for aerobic capacity training, since IT100 presented a significantly higher tLim, t@VT2, and t@90%V̇O2peak (∼757, ∼304, and ∼194 s more, respectively), without differing with regard to [La-] and RPE. The V̇O2K seemed not to influence tLim and the times spent near V̇O2peak in either workout mode. abstract_id: PUBMED:12909706 Excess post-exercise oxygen consumption in adult sockeye (Oncorhynchus nerka) and coho (O. kisutch) salmon following critical speed swimming. The present study measured the excess post-exercise oxygen cost (EPOC) following tests at critical swimming speed (Ucrit) in three stocks of adult, wild, Pacific salmon (Oncorhynchus sp.)
and used EPOC to estimate the time required to return to their routine level of oxygen consumption (recovery time) and the total oxygen cost of swimming to Ucrit. Following exhaustion at Ucrit, recovery time was 42-78 min, depending upon the fish stock. The recovery times are several-fold shorter than previously reported for juvenile, hatchery-raised salmonids. EPOC varied fivefold among the fish stocks, being greatest for Gates Creek sockeye salmon (O. nerka), which was the salmon stock that had the longest in-river migration, experienced the warmest temperature and achieved the highest maximum oxygen consumption compared with the other salmon stocks that were studied. EPOC was related to Ucrit, which in turn was directly influenced by ambient test temperature. The non-aerobic cost of swimming to Ucrit was estimated to add an additional 21.4-50.5% to the oxygen consumption measured at Ucrit. While these non-aerobic contributions to swimming did not affect the minimum cost of transport, they were up to three times higher than the value used previously for an energetic model of salmon migration in the Fraser River, BC, Canada. As such, the underestimate of non-aerobic swimming costs may require a reevaluation of the importance of how in-river barriers like rapids and bypass facilities at dams, and year-to-year changes in river flows and temperatures, affect energy use and hence migration success. abstract_id: PUBMED:1555562 Determination and validity of critical velocity as an index of swimming performance in the competitive swimmer. The purpose of this investigation was to test whether the concept of critical power used in previous studies could be applied to the field of competitive swimming as critical swimming velocity (vcrit). The vcrit, defined as the swimming velocity that can be maintained over a very long period of time without exhaustion, was expressed as the slope of a straight line between swimming distance (dlim) at each speed (with six predetermined speeds) and the duration (tlim). Nine trained college swimmers underwent tests in a swimming flume to measure vcrit at those velocities until the onset of fatigue. A regression analysis of dlim on tlim calculated for each swimmer showed linear relationships (r2 > 0.998, P < 0.01), and the slope coefficient signifying vcrit ranged from 1.062 to 1.262 m.s-1 with a mean of 1.166 (SD 0.052) m.s-1. Maximal oxygen consumption (VO2max), oxygen consumption (VO2) at the anaerobic threshold, and the swimming velocity at the onset of blood lactate accumulation (vOBLA) were also determined during the incremental swimming test. The vcrit showed significant positive correlations with VO2 at the anaerobic threshold (r = 0.818, P < 0.01), vOBLA (r = 0.949, P < 0.01) and the mean velocity of the 400 m freestyle (r = 0.864, P < 0.01). These data suggested that vcrit could be adopted as an index of endurance performance in competitive swimmers. abstract_id: PUBMED:8425518 Does critical swimming velocity represent exercise intensity at maximal lactate steady state? The purpose of this investigation was to determine whether the critical swimming velocity (vcrit), which is employed in competitive swimming, corresponds to the exercise intensity at maximal lactate steady state. vcrit is defined as the swimming velocity which could theoretically be maintained forever without exhaustion and is expressed as the slope of a regression line between swimming distances covered and the corresponding times.
A total of eight swimmers were instructed to swim two different distances (200 m and 400 m) at maximal effort and the time taken to swim each distance was measured. In the present study, vcrit is calculated as the slope of the line connecting the two times required to swim 200 m and 400 m. vcrit determined by this new simple method was correlated significantly with the swimming velocity at 4 mmol.l-1 of blood lactate concentration (r = 0.914, P < 0.01) and the mean velocity in the 400 m freestyle (r = 0.977, P < 0.01). In the maximal lactate steady-state test, the subjects were instructed to swim 1600 m (4 x 400 m) freestyle at three constant velocities (98%, 100% and 102% of vcrit). At 100% of vcrit, blood lactate concentration showed a steady-state level of approximately 3.2 mmol.l-1 from the first to the third stage, and at 98% of vcrit, lactate concentration had a tendency to decrease significantly at the fourth stage. On the other hand, at 102% of vcrit, blood lactate concentration increased progressively, and the values of the third and fourth stages were significantly higher than those at 100% of vcrit (P < 0.05). These data suggest that vcrit, which can be calculated by performing two timed, maximal effort swimming tests, may correspond to the exercise intensity at maximal lactate steady state. abstract_id: PUBMED:17901978 The critical velocity in swimming. In supra-maximal exercise to exhaustion, the critical velocity (cv) is conventionally calculated from the slope of the distance (d) versus time (t) relationship: d = I + St. I is assumed to be the distance covered at the expense of the anaerobic capacity, S the speed maintained on the basis of the subject's maximal O(2) uptake (VO2max). This approach is based on two assumptions: (1) the energy cost of locomotion per unit distance (C) is constant and (2) VO2max is attained at the onset of exercise. Here we show that cv and the anaerobic distance (d (anaer)) can be calculated also in swimming, where C increases with the velocity, provided that VO2max, its on-response, and the C versus v relationship are known. d (anaer) and cv were calculated from published data on maximal swims for the four strokes over 45.7, 91.4 and 182.9 m, on 20 elite male swimmers (18.9 ± 0.9 years, 75.9 ± 6.4 kg), whose VO2max and C versus speed relationship were determined, and compared to I and S obtained from the conventional approach. cv was lower than S (4, 16, 7 and 11% in butterfly, backstroke, breaststroke and front crawl) and I (= 11.6 m on average in the four strokes) was lower than d (anaer). The latter increased with the distance: average, for all strokes: 38.1, 60.6 and 81.3 m over 45.7, 91.4 and 182.9 m. It is concluded that the d versus t relationship should be utilised with some caution when evaluating performance in swimmers. abstract_id: PUBMED:17990207 Time limit at VO2max velocity in elite crawl swimmers. The purpose of this study is to assess, with elite crawl swimmers, the time limit at the minimum velocity corresponding to maximal oxygen consumption (TLim-vVO2max), and to characterize its main determinants. Eight subjects performed an incremental test for vVO2max assessment and, forty-eight hours later, an all-out swim at vVO2max until exhaustion. VO2 was directly measured using a telemetric portable gas analyzer, and a visual pacer was used to help the swimmers keep the predetermined velocities. Blood lactate concentrations, heart rate and stroke parameter values were also measured.
TLim-vVO2max and vVO2max averaged, respectively, 243.2 ± 30.5 s and 1.45 ± 0.08 m·s(-1). TLim-vVO2max correlated positively with the VO2 slow component (r = 0.76, p < 0.05). Negative correlations were found between TLim-vVO2max and body surface area (r = -0.80) and delta lactate (r = -0.69) (p < 0.05), and with vVO2max (r = -0.63), the v corresponding to the anaerobic threshold (r = -0.78) and the energy cost corresponding to vVO2max (r = -0.62) (p < 0.10). No correlations were observed between TLim-vVO2max and stroking parameters. This study confirmed the tendency for TLim-vVO2max to be lower in the swimmers who presented higher vVO2max and vAnT, possibly explained by their higher surface area, energy cost and anaerobic rate. Additionally, O2SC seems to be a determinant of TLim-vVO2max. abstract_id: PUBMED:16293903 VO2 responses to intermittent swimming sets at velocity associated with VO2max. While the physiological adaptations following endurance training are relatively well understood, in swimming there is a dearth of knowledge regarding the metabolic responses to interval training (IT). The hypothesis tested predicted that two different endurance swimming IT sets would induce differences in the total time the subjects swam at a high percentage of maximal oxygen consumption (VO(2)max). Ten trained triathletes underwent an incremental test to exhaustion in swimming so that the swimming velocity associated with VO(2)max (vVO(2)max) could be determined. This was followed by a maximal 400-m test and two intermittent sets at vVO(2)max: (a) 16 x 50 m with 15-s rest (IT(50)); (b) 8 x 100 m with 30-s rest (IT(100)). The times sustained above 95% VO(2)max (68.50 ± 62.69 vs. 145.01 ± 165.91 sec) and 95% HRmax (146.67 ± 131.99 vs. 169.78 ± 203.45 sec, p = 0.54) did not differ between IT(50) and IT(100) (values are mean ± SD). In conclusion, swimming IT sets of equal time duration at vVO(2)max but of differing work-interval durations led to slightly different VO(2) and HR responses. The time spent above 95% of VO2max was twice as long in IT(100) as in IT(50), and a large variability between mean VO(2) and HR values was also observed. abstract_id: PUBMED:8857705 Significance of the velocity at VO2max and time to exhaustion at this velocity. In 1923, Hill and Lupton pointed out that for Hill himself, 'the rate of oxygen intake due to exercise increases as speed increases, reaching a maximum for the speeds beyond about 256 m/min. At this particular speed, for which no further increases in O2 intake can occur, the heart, lungs, circulation, and the diffusion of oxygen to the active muscle-fibres have attained their maximum activity. At higher speeds the requirement of the body for oxygen is far higher but cannot be satisfied, and the oxygen debt continuously increases'. In 1975, this minimal velocity which elicits maximal oxygen uptake (VO2max) was called 'critical speed' and was used to measure the maximal aerobic capacity (max Eox), i.e. the total oxygen consumed at VO2max. This should not be confused with the term 'critical power', which is closer to the power output at the 'lactate threshold'. In 1984, the term 'velocity at VO2max' and the abbreviation 'vVO2max' were introduced. It was reported that vVO2max is a useful variable that combines VO2max and economy into a single factor which can identify aerobic differences between various runners or categories of runners. vVO2max explained individual differences in performance that VO2max or running economy alone did not.
Following that, the concept of a maximal aerobic running velocity (Vamax in m/sec) was formulated. This was a running velocity at which VO2max occurred and was calculated as the ratio between VO2max (ml/kg/min) minus oxygen consumption at rest, and the energy cost of running (ml/kg/sec). There are many ways to determine the velocity associated with VO2max, making it difficult to compare maintenance times. In fact, the time to exhaustion (tlim) at vVO2max is reproducible in an individual; however, there is great variability among individuals, with a low coefficient of variation for vVO2max. For an average value of about 6 minutes, the coefficient of variation is about 25%. It seems that the lactate threshold, which is correlated with the tlim at vVO2max, can explain this difference among individuals, the role of the anaerobic contribution being significant. An inverse relationship has been found between tlim at vVO2max and VO2max, and a positive one between vVO2max and the velocity at the lactate threshold expressed as a fraction of vVO2max. These results are similar for different sports (e.g. running, cycling, kayaking, swimming). It seems that the real time spent at VO2max is significantly different from an exhaustive run at a velocity close to vVO2max (105% vVO2max). However, the minimal velocity which elicits VO2max, and the tlim at this velocity, appear to convey valuable information when analysing a runner's performance over 1500 m to a marathon. Answer: Yes, the net energy cost of swimming does affect the time to exhaustion at the individual's maximal oxygen consumption velocity. The study by Fernandes et al. (PUBMED:16998440) found that the time limit at the minimum velocity that elicits the individual's maximal oxygen consumption (TLim-v VO2max) was inversely related to the slope of the regression line obtained from the energy expenditure and corresponding velocities during an incremental test (C(slope)). This indicates that swimmers with a higher net energy cost at v VO2max performed for a shorter time in TLim-v VO2max efforts, confirming exercise economy as an important factor for swimming performance. Additionally, the study by Sousa et al. (PUBMED:17990207) observed that TLim-v VO2max tended to be lower in swimmers who presented higher v VO2max and vAnT, possibly explained by their higher surface area, energy cost, and anaerobic rate. This suggests that a higher net energy cost of swimming is associated with a shorter time to exhaustion at maximal oxygen consumption velocity. Furthermore, the study by Billat et al. (PUBMED:8857705) highlighted that the time to exhaustion (tlim) at vVO2max is reproducible in an individual, but there is a great variability among individuals. The study suggested that the lactate threshold, which is correlated with tlim at vVO2max, can explain differences among individuals, with the role of the anaerobic contribution being significant. This implies that the net energy cost, which includes both aerobic and anaerobic contributions, is a determinant of the time to exhaustion at v VO2max. In summary, the net energy cost of swimming, which encompasses both aerobic and anaerobic energy contributions, is an important determinant of the time to exhaustion at an individual's maximal oxygen consumption velocity.
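As a purely illustrative aside on the arithmetic behind two quantities that recur in the swimming abstracts above, the sketch below computes a critical velocity from the two-distance method (the slope of the line joining the 200 m and 400 m performance times), from a least-squares fit of the d = I + St relationship, and a velocity associated with VO2max from VO2max, resting VO2 and the energy cost of locomotion. All input values are invented for demonstration, and expressing the energy cost per metre is our own assumption; nothing here reproduces data from the cited studies.

```python
# Minimal sketch of the critical-velocity and vVO2max arithmetic described
# in the abstracts above. All numbers are invented for illustration only.

def vcrit_two_point(d1_m, t1_s, d2_m, t2_s):
    """Two-distance method: slope of the line joining two maximal-effort
    trials, e.g. 200 m and 400 m, i.e. (d2 - d1) / (t2 - t1) in m/s."""
    return (d2_m - d1_m) / (t2_s - t1_s)

def vcrit_regression(distances_m, times_s):
    """Least-squares fit of d = I + S*t over several maximal trials.
    Returns (I, S): I approximates the distance covered at the expense of
    the anaerobic capacity, S the critical velocity."""
    n = len(times_s)
    mean_t = sum(times_s) / n
    mean_d = sum(distances_m) / n
    sxx = sum((t - mean_t) ** 2 for t in times_s)
    sxy = sum((t - mean_t) * (d - mean_d) for t, d in zip(times_s, distances_m))
    slope = sxy / sxx
    return mean_d - slope * mean_t, slope

def maximal_aerobic_velocity(vo2max, vo2_rest, energy_cost_per_m):
    """Velocity associated with VO2max, assuming VO2 values in ml/kg/min and
    an energy cost expressed in ml/kg/m, so the result is in m/min."""
    return (vo2max - vo2_rest) / energy_cost_per_m

if __name__ == "__main__":
    # Hypothetical 200 m and 400 m maximal times (150 s and 310 s).
    print(vcrit_two_point(200, 150.0, 400, 310.0))            # 1.25 m/s

    # Hypothetical trials over 200, 400 and 800 m.
    intercept, slope = vcrit_regression([200, 400, 800], [150.0, 310.0, 650.0])
    print(round(intercept, 1), round(slope, 3))               # ~24.0 m, ~1.196 m/s

    # Hypothetical running-type values: (60 - 5) ml/kg/min over 0.2 ml/kg/m.
    print(maximal_aerobic_velocity(60.0, 5.0, 0.2) / 60.0)    # ~4.58 m/s
```

The slope-and-intercept reading is the same whether the fit uses two trials or several; adding trials only makes the estimate more robust.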
Instruction: Can depression treatment in primary care reduce disability? Abstracts: abstract_id: PUBMED:30205353 Domain-specific associations between disability and depression, anxiety, and somatization in primary care patients. This study explores the associations between different disability domains and the most prevalent symptoms of mental disorders in primary care patients (i.e. depression, anxiety, and somatization). A total of 1241 participants from 28 primary care centres completed self-report measures of depression, anxiety, and somatization. This same sample also completed the Sheehan Disability Scale (SDS) to assess functional impairment in work, social life, and family life domains. Associations between the symptoms and each disability domain were examined using hierarchical regression analyses. Depression emerged as the strongest predictor of all three disability domains. Somatization was associated only with the work domain, and anxiety was associated only with the family life domain. Clinical symptoms explained a greater proportion of the variance than sociodemographic variables. In primary care patients, depression, anxiety and somatizations were associated with distinct domains of disability. Early provision of effective treatments in the primary care setting may be crucial to reduce the societal burden of common mental disorders. abstract_id: PUBMED:19023696 Disability from depression: the public health challenge to primary care. Epidemiologists have identified that depression will soon be the leading cause of disability throughout the world. To inform public health campaigns to reduce this problem, this paper summarizes current scientific knowledge about optimizing the potential of primary care settings to reduce disability by providing effective treatment for depression. To meet this challenge, primary care practices need to be re-engineered: 1) to conduct systematic screening programs to identify depressed patients, 2) to provide depressed patients initial evidence-based treatment, and 3) to monitor treatment adherence and symptom response in treated patients over 2 years. While additional research is needed in developing countries, preliminary evidence indicates that primary care practices re-engineered to improve depression management can make a substantial contribution to reducing depression-associated disability. abstract_id: PUBMED:9626725 Impact of improved depression treatment in primary care on daily functioning and disability. Background: Few data are available regarding the impact of improved depression treatment on daily functioning and disability. Methods: In two studies of more intensive depression treatment in primary care, patients initiating antidepressant treatment were randomly assigned to either usual care or to a collaborative management programme including patient education, on-site mental health treatment, adjustment of antidepressant medication, behavioural activation and monitoring of medication adherence. Assessments at baseline as well as 4 and 7 months included several measures of impairment, daily functioning and disability: self-rated overall health, number of bodily pains, number of somatization symptoms, changes in work due to health, reduction in leisure activities due to health, number of disability days and number of restricted activity days. 
Results: Averaging data from the 4- and 7-month assessments in both studies, intervention patients reported fewer somatic symptoms (OR 0.68, 95% CI 0.46, 0.99) and more favourable overall health (OR 0.50, 95% CI 0.28, 0.91). While intervention patients fared better on other measures of functional impairment and disability, none of these differences reached statistical significance. Conclusions: More effective acute-phase depression treatment reduced somatic distress and improved self-rated overall health. The absence of a significant intervention effect on other disability measures may reflect the brief treatment and follow-up period and the influence of other individual and environmental factors on disability. abstract_id: PUBMED:11115207 Can depression treatment in primary care reduce disability? A stepped care approach. Objective: To assess the effects of a stepped collaborative care depression intervention on disability. Design: Randomized controlled trial. Setting: Four primary care clinics of a large health maintenance organization. Patients: Two hundred twenty-eight patients with either 4 or more persistent major depressive symptoms or a score of 1.5 or greater on the Hopkins Symptom Checklist depression items were randomized to stepped care intervention or usual care 6 to 8 weeks after initiating antidepressant medication. Intervention: Augmented treatment of persistently depressed patients by an on-site psychiatrist collaborating with primary care physicians. Treatment included patient education, adjustment of pharmacotherapy, and proactive monitoring of outcomes. Main Outcome Measures: Baseline, 1-, 3-, and 6-month assessments of the Sheehan Disability Scale and the social function and role limitation subscales of the Medical Outcomes Study 36-Item Short-Form Health Survey (SF-36). Results: Patients who received the depression intervention experienced less interference in their family, work, and social activities than patients receiving usual primary care (Sheehan Disability Scale, z = 2.23; P = .025). Patients receiving the intervention also reported a trend toward more improvement in SF-36-defined social functioning than patients receiving usual care (z = 1.63, P = .10), but there was no significant difference in role performance (z = 0.07, P = .94). Conclusions: Significant disability accompanied depression in this persistently depressed group. The stepped care intervention resulted in small to moderate functional improvements for these primary care patients. Arch Fam Med. 2000;9:1052-1058 abstract_id: PUBMED:17117336 Applicability of the ICF in measuring functioning and disability in unipolar depression in Primary Care settings. Introduction: We use the biopsychosocial model of the International Classification of Functioning, Disability, and Health (ICF): a) to analyze functioning and disability patterns in unipolar depression cases attended in primary care settings; b) to study predictive and mediator variables related to disability in depression, and c) to determine the impact of traditional interventions in depression cases using functional remission as an outcome measure. Design: Naturalistic, prospective, longitudinal. Setting: Multicenter study in primary care. Health Area 2. Region of Madrid. Participants: Adult patients with a diagnosis of unipolar depression who initiate psychopharmacological treatment with a selective serotonin reuptake inhibitor (SSRI) in primary care sites.
Patients with a history of bipolar disorders, psychotic disorders, dementias, or dependence on toxic substances will be excluded. Main Measurements: Level of functioning and disability in different domains of well-being, assessed through ICF-related instruments. Stressful life events, social support and cognitive schemes will be analyzed as mediator variables. Socio-demographic and clinical characteristics, psychopharmacological treatment and treatment compliance are considered independent factors. Discussion And Practical Use: Selection bias may affect the generalization of the results. The biopsychosocial model underlying the ICF and its methodology are applied to the study of depression in primary care settings for the first time in Spain. Improving our understanding of disability-related factors in depressive patients is expected. This study is one of the main research priorities of the EU (MHADIE project). abstract_id: PUBMED:32751039 Assessing Depression in the Primary Care Setting. Depression affects almost 10% of the adult population in the United States but often goes unrecognized and untreated. The World Health Organization predicts that depression will soon be the second leading cause of disability. Recognizing the signs and symptoms of depression, and then feeling confident enough to treat them, are limitations many primary care providers acknowledge. In this study, significantly more patients were identified as moderately to severely depressed using the Patient Health Questionnaire-9 (PHQ-9) screening tool as compared to the clinic's usual care practice of patient self-report. This study examines the PHQ-9, an evidence-based screening tool, to assist primary care providers in identifying depression. It also offers evidence-based algorithms and websites to assist primary care providers with treatment protocols. The purpose of this article is to evaluate whether screening patients for depression using the PHQ-9 questionnaire is an effective way of identifying patients with depression compared to the clinic's usual care practice of self-report. Implementing an evidence-based screening tool in the primary care setting assisted in identifying those at risk for depression. This study of 200 patients in the primary care setting demonstrated the effectiveness of using the PHQ-9 as an efficient and accurate depression screening tool. Chi-square analysis revealed that a significantly higher proportion of patients were newly diagnosed with depression in the study group than in the comparison group, χ2(1, N = 200) = 9.96, p < .01. abstract_id: PUBMED:24996484 Late-life depression in the primary care setting: challenges, collaborative care, and prevention. Late-life depression is highly prevalent worldwide. In addition to being a debilitating illness, it is a risk factor for excess morbidity and mortality. Older adults with depression are at risk for dementia, coronary heart disease, stroke, cancer and suicide. Individuals with late-life depression often have significant medical comorbidity and poor treatment adherence. Furthermore, psychosocial considerations such as gender, ethnicity, stigma and bereavement are necessary to understand the full context of late-life depression. The fact that most older adults seek treatment for depression in primary care settings led to the development of collaborative care interventions for depression. These interventions have consistently demonstrated clinically meaningful effectiveness in the treatment of late-life depression.
We describe three pivotal studies detailing the management of depression in primary care settings in both high- and low-income countries. Beyond effectively treating depression, collaborative care models address additional challenges associated with late-life depression. Although depression treatment interventions are effective compared to usual care, they exhibit relatively low remission rates and small to medium effect sizes. Several studies have demonstrated that depression prevention is possible and most effective in at-risk older adults. Given the relatively modest effects of treatment in averting years lived with disability, preventing late-life depression at the primary care level should be highly prioritized as a matter of health policy. abstract_id: PUBMED:17877061 Depression in primary care in Israel. Depression is a leading cause of morbidity, disability and health care utilization. It is commonly encountered in primary care settings yet is often missed or suboptimally managed. We summarize studies conducted in Israel on the prevalence of depression in primary care settings, its correlates, and predictors of treatment and outcome, and discuss their implications for clinical practice and public health policy. An electronic search was conducted using the MEDLINE and PsycINFO databases. The inclusion criteria were original studies that assessed aspects of depression in a population aged 18 or older, were conducted in primary care settings in Israel, and had a sufficiently detailed description of depression-related measures, study sample and outcome measures. Twelve articles reporting results from seven studies met these criteria. The prevalence of current depression in primary care varied considerably across studies: 1.6-5.9% for major depression, 1.1-5.4% for minor depression, and 14.3-24% for depressive symptoms. Depression was consistently related to female gender and fewer years of education, and was associated with disability, decreased quality of life, and increased health-related expenditure. Many cases of depression were undiagnosed and most patients had persistent depression or achieved only partial remission. Depression represents a serious challenge for the primary health care system in Israel. Greater efforts should be focused on screening and treating depression in primary care. However, the studies reviewed here used different methodologies and assessed different aspects of depression and, therefore, should be generalized cautiously. Systematic research on the prevalence, correlates and management of depression in primary care, with emphasis on collaborative care models, is strongly needed to inform research, clinicians and health care policy makers. abstract_id: PUBMED:32060635 Treatment of Depression in Primary Care with Computerized Psychological Therapies: Systematic Reviews. Depression is one of the most important causes of disability due to illness in our environment. The primary care health system receives a high percentage of these consultations for psychological distress. This often ends in pharmacological overtreatment of patients with mild depression, due to a lack of access to alternative management tools. To analyze the existing evidence on the effectiveness of computerized psychological therapies in people with depression in the primary care setting.
The search process was mainly done through MEDLINE and Cochrane using keywords such as "depression", "treatment", "primary care", "online", "internet", "computerized" and "Cognitive Behavioral Therapy", and delimiting the search by years and types of studies. The Oxman quality scale was used to analyze the quality of the systematic reviews (SRs). Eleven previous SRs were analyzed. Almost all research is experimental and has not been implemented in the public health network except in the United Kingdom, where there is a tradition of using the Beating the Blues program. Research is needed in our country, together with the development of programs in Spanish, or the adaptation of those from other countries, to test their effectiveness in our health system and, in turn, to study their cost-efficiency. However, this approach has proven effective in reducing depressive symptoms and must be studied as a possible tool to be introduced in the management of depression in non-specialized care. abstract_id: PUBMED:11351783 The evaluation and treatment of depression in primary care. The evaluation and treatment of depressive disorders are vital functions for practicing primary care physicians. Depression is a prevalent, recurrent, highly treatable disorder that is debilitating and leads to significant psychosocial impairment. In view of the broadly available armamentarium of safe, newer medications, primary care physicians should be proficient in the treatment of these disorders. The following review will provide a synopsis of the current state of diagnosis, evaluation, and treatment of depression in the primary care setting. Appropriate treatment of depression can result in improvement in emotional, cognitive, and behavioral symptoms of depression and reduce psychosocial impairment, disability, and associated medical morbidity. Answer: Yes, depression treatment in primary care can reduce disability. Studies have shown that effective treatment for depression in primary care settings can lead to less interference in family, work, and social activities, as well as improvements in self-rated overall health and functioning (PUBMED:11115207; PUBMED:9626725). Depression has been identified as a leading cause of disability, and primary care practices that are re-engineered to improve depression management can make a substantial contribution to reducing depression-associated disability (PUBMED:19023696). Furthermore, collaborative care models for depression treatment in primary care have demonstrated clinically meaningful effectiveness in reducing symptoms and improving outcomes (PUBMED:24996484). Depression treatment interventions are effective compared to usual care, although they may exhibit relatively low remission rates and small to medium effect sizes. Nonetheless, preventing late-life depression at the primary care level should be highly prioritized as a matter of health policy (PUBMED:24996484). Additionally, computerized psychological therapies have been shown to be effective in reducing depressive symptoms and could be considered as a tool to be introduced in the management of depression in non-specialized care (PUBMED:32060635).
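As an aside on the statistics mentioned in the PHQ-9 screening abstract above, the snippet below shows how a chi-square comparison of newly diagnosed depression between a screening group and a comparison group can be computed with scipy. The 2 x 2 counts are hypothetical, chosen only to demonstrate the mechanics; they are not the study's data.

```python
# Hypothetical 2x2 chi-square comparison (counts invented for illustration).
from scipy.stats import chi2_contingency

observed = [
    [24, 76],  # hypothetical screening group: newly diagnosed / not diagnosed
    [8, 92],   # hypothetical comparison group: newly diagnosed / not diagnosed
]

chi2, p, dof, expected = chi2_contingency(observed, correction=False)
print(f"chi2({dof}, N = {sum(map(sum, observed))}) = {chi2:.2f}, p = {p:.4f}")
```

With one degree of freedom, a chi-square statistic of this size corresponds to p below .01, which is the general pattern the abstract reports; the exact value depends entirely on the observed counts.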
Instruction: Does a patent accessory pancreatic duct prevent acute pancreatitis? Abstracts: abstract_id: PUBMED:14696497 Clinical significance of the accessory pancreatic duct. Background/aims: The accessory pancreatic duct is the smaller and less constant pancreatic duct in comparison with the main pancreatic duct. We investigated the patency of the accessory pancreatic duct and its role in pancreatic pathophysiology. Methodology: Dye-injection endoscopic retrograde pancreatography was performed in 411 patients. In patients in whom the main pancreatic duct could be selectively cannulated, contrast medium with indigo carmine was injected through the catheter. Excretion of the dye from the minor duodenal papilla was observed endoscopically. Results: Patency of the accessory pancreatic duct was found in 43% of the 291 control cases. In the 46 patients with acute pancreatitis, 8 (17%) had a patent accessory pancreatic duct. The difference in patency between this group and the normal group was significant (p < 0.01). In particular, patency of the accessory pancreatic duct was only 8% in the 13 patients with acute biliary pancreatitis. In the patients with pancreaticobiliary maljunction, biliary carcinoma occurred in 72% of patients with a nonpatent accessory pancreatic duct but, in contrast, in only 30% of those with a patent accessory pancreatic duct. This difference was significant (p < 0.05). A lower amylase level in the bile was observed more frequently in patients with pancreaticobiliary maljunction with a patent accessory pancreatic duct than in those with a nonpatent accessory pancreatic duct. Conclusions: A patent accessory pancreatic duct may prevent acute pancreatitis by lowering the pressure in the main pancreatic duct. In cases of pancreaticobiliary maljunction with a patent accessory pancreatic duct, the incidence of carcinogenesis of the bile duct might be lower, as the reflux of the pancreatic juice to the bile duct might be reduced by the flow of the pancreatic juice into the duodenum through the accessory pancreatic duct. abstract_id: PUBMED:21175482 Does a patent accessory pancreatic duct prevent acute pancreatitis? Background And Aim: The role of the accessory pancreatic duct (APD) in pancreatic pathophysiology has been unclear. We previously examined the patency of the APD in 291 control cases who had a normal pancreatogram in the head of the pancreas by dye-injection endoscopic retrograde pancreatography (ERP). APD patency was 43% and was closely related to the shape of the terminal portion of the APD. The present study aimed to clarify the clinical implications of a patent APD. Methods: Based on the underlying data, the patency rate of the APD was estimated from the terminal shape of the APD on ERP in 167 patients with acute pancreatitis. Results: In patients with acute pancreatitis, stick-type APD, spindle-type APD, and cudgel-type APD, which showed a high patency, were rare, and branch-type APD and halfway-type or no APD, which showed quite low patency, were frequent. Accordingly, the estimated patency of the APD in acute pancreatitis patients was only 21%. There was no significant relationship between the estimated APD patency and the etiology or severity of acute pancreatitis. Conclusions: The terminal shapes of the APD with low patency were frequent in acute pancreatitis patients, and the estimated APD patency was only 21% in acute pancreatitis.
A patent APD may function as a second drainage system to reduce the pressure in the main pancreatic duct and prevent acute pancreatitis. abstract_id: PUBMED:15293129 Clinical significance of the minor duodenal papilla and accessory pancreatic duct. The accessory pancreatic duct (APD) is the main drainage duct of the dorsal pancreatic bud in the embryo, entering the duodenum at the minor duodenal papilla (MIP). As development progresses, the duct of the dorsal bud undergoes varying degrees of atrophy at the duodenal end. In cases of patent APD, smooth-muscle fiber bundles derived from the duodenal proper muscular tunics surround the APD. The APD shows long and short patterns on pancreatography, and ductal fusion in the two types appears to differ embryologically. Patency of the APD in control cases, as determined by dye-injection endoscopic retrograde pancreatography, was 43%. Patency of the APD may depend on the caliber, course, and terminal shape of the APD. A patent APD may prevent acute pancreatitis by reducing the pressure in the main pancreatic duct. Pancreas divisum is a common anatomical anomaly in which the ventral and dorsal pancreatic ducts do not unite embryologically. As the majority of exocrine flow is routed through the MIP in individuals with pancreas divisum, interrelationships between poor function of the MIP and increased flow of pancreatic juice caused by alcohol or diet may increase dorsal pancreatic duct pressure and lead to the development of pancreatitis. Wire-guided minor sphincterotomy, followed by dorsal duct stenting, is recommended for acute recurrent pancreatitis associated with pancreas divisum. abstract_id: PUBMED:20857518 Clinical implications of accessory pancreatic duct. The accessory pancreatic duct (APD) is the main drainage duct of the dorsal pancreatic bud in the embryo, entering the duodenum at the minor duodenal papilla (MIP). With growth, the duct of the dorsal bud undergoes varying degrees of atrophy at the duodenal end. Patency of the APD in 291 control cases was 43% as determined by dye-injection endoscopic retrograde pancreatography. Patency of the APD in 46 patients with acute pancreatitis was only 17%, which was significantly lower than in control cases (P < 0.01). The terminal shape of the APD was correlated with APD patency. Based on the data about the correlation between the terminal shape of the APD and its patency, the estimated APD patency in 167 patients with acute pancreatitis was 21%, which was significantly lower than in control cases (P < 0.01). A patent APD may function as a second drainage system for the main pancreatic duct to reduce the pressure in the main pancreatic duct and prevent acute pancreatitis. Pancreatographic findings of 91 patients with pancreaticobiliary maljunction (PBM) were divided into a normal duct group (80 patients) and a dorsal pancreatic duct (DPD) dominant group (11 patients). While 48 patients (60%) with biliary carcinoma (gallbladder carcinoma, n = 42; bile duct carcinoma, n = 6) were identified in PBM with a normal pancreatic duct system, only two cases of gallbladder carcinoma (18%) occurred in DPD-dominant patients (P < 0.05). The concentration of amylase in the bile in DPD dominance was significantly lower than that in the normal pancreatic duct system (75 403.5 ± 82 015.4 IU/L vs 278 157.0 ± 207 395.0 IU/L, P < 0.05).
In PBM with DPD dominance, most pancreatic juice in the upper DPD is drained into the duodenum via the MIP, and reflux of pancreatic juice to the biliary tract might be reduced, resulting in a lower frequency of associated biliary carcinoma. abstract_id: PUBMED:9018014 Patency of the human accessory pancreatic duct as determined by dye-injection endoscopic retrograde pancreatography. The accessory pancreatic duct (APD) is the smaller and less constant pancreatic duct. The patency of the APD was investigated clinically in an effort to determine its role in pancreatic pathophysiology. Dye-injection endoscopic retrograde pancreatography (ERP) was performed in 190 cases. In the patients who exhibited filling of the fine branches of the ducts on ERP, contrast medium with indigo carmine was injected into the major duodenal papilla. The patency of the APD was determined by observing the excretion of the dye from the minor duodenal papilla. Of the 123 control cases studied, 41% had a patent APD. According to the shape of the terminal portion of the APD on the accessory pancreatogram, it was classified as either the stick type (n = 63), branch type (n = 15), saccular type (n = 15), spindle type (n = 11), or cudgel type (n = 8). In these groups, 49%, 0%, 27%, 82%, and 87% of the APDs were patent, respectively. The patency of the APD in the patients with acute pancreatitis was 6% (1 of 17). The difference in patency between this group and the control group was significant (p < 0.01). The patency of the APD varies with the shape of the terminal portion of the APD. A patent APD may prevent acute pancreatitis by lowering the pressure in the main pancreatic ducts. abstract_id: PUBMED:20551660 A patent accessory pancreatic duct prevents pancreatitis following endoscopic retrograde cholangiopancreatography. Background/aim: Pancreatitis is the most common and feared complication of endoscopic retrograde cholangiopancreatography (ERCP). We previously examined patency of the accessory pancreatic duct (APD) by dye-injection endoscopic retrograde pancreatography (ERP). APD patency was found in 43% of 291 control cases who had no particular changes in the head of the pancreas, compared to only 6% in patients with acute pancreatitis. APD patency was closely related to the shape of the terminal portion of the APD. This study aimed to clarify whether patency of the APD prevents post-ERCP pancreatitis. Methods: We retrospectively examined the terminal shape of the APD by ERP in 34 patients with post-ERCP pancreatitis. Results: The stick-type APD (p < 0.01), which indicated high patency, was less frequent, and the branch-type APD (p < 0.01) and halfway-type APD, or no APD (p < 0.01), which showed quite low patency, were more frequent in patients with post-ERCP pancreatitis compared with controls. Accordingly, the estimated patency of the APD in post-ERCP pancreatitis patients was only 16%, which was significantly lower than the 43% in controls. There was no significant relationship between the estimated APD patency and the severity of post-ERCP pancreatitis. Conclusions: The estimated APD patency was significantly lower in patients with post-ERCP pancreatitis. A patent APD may function as a second drainage system to reduce the pressure in the main pancreatic duct and prevent post-ERCP pancreatitis. abstract_id: PUBMED:29511398 A Review of Double Common Bile Duct and Its Sequelae.
A double or accessory common bile duct (ACBD) is a rare congenital anomaly. We report the case of a 60-year-old American Asian male, who was found to have a double or duplicated common bile duct after being admitted for evaluation of a pancreatic mass. A duplicated bile duct has the same mucosa histologically as a single bile duct. However, the opening of a duplicated bile duct lacks a sphincter, allowing retrograde flow of gut contents, which results in a higher probability of intraductal calculus formation. On rare occasions, it can predispose to liver abscesses, pancreatitis, pancreatic cancer, gallbladder cancer, gastric cancer, and ampullary cancer, depending on the location of the opening of the ACBD. We present an integrative review of the limited cases of ACBD with correlation to the current case and a discussion of the aspects of diagnosis and management. abstract_id: PUBMED:30880936 Endoscopic dissection of refractory pancreatic duct stricture via accessory pancreatic duct approach for concurrent treatment of anomalous pancreaticobiliary junction in aging patients. Background: Although endoscopic management of pancreatic strictures by dilation and stenting is well established, some high-grade strictures are refractory to conventional methods. Here, we report a novel technique via an accessory pancreatic duct (APD) approach to simultaneously release a chronic pancreatitis-associated pancreatic stricture and correct an anomalous pancreaticobiliary junction (APBJ). Due to the APBJ and the stricture of the proximal main pancreatic duct, the APD showed compensatory expansion. The stiff stenosis was dissected along the axis of the APD using needle-knife electrocautery or holmium laser ablation, and then a supporting stent was placed into the pancreatic body duct. By doing so, the outflow channels of the pancreatic and biliary ducts were exquisitely separated. Patients And Methods: Two patients aged 69 and 71 years underwent stricture dissection and stent insertion for fluent drainage of pancreatic juice. The postoperative course was marked by complete abdominal pain relief and normal blood amylase recovery. In the first patient, wire-guided needle-knife electrocautery under fluoroscopic control was applied to release the refractory stricture. The second patient was treated by SpyGlass pancreatoscopy-guided holmium laser ablation to release the pancreatic stricture. Results: Plastic stents in the APD were removed 3 months after surgery, and magnetic resonance imaging at 6 months showed a strictly normal aspect of the pancreatic duct. Conclusion: Although both cases were successful without severe complications, we recommend this approach only for selected patients with short refractory pancreatic strictures due to chronic pancreatitis. In order to prevent severe complications (bleeding, perforation or pancreatitis), direct-view endoscopy-guided electrotomy needs to be developed. abstract_id: PUBMED:6492507 A duodenal duplication cyst communicated with an accessory pancreatic duct. We treated a 49-year-old woman with acute pancreatitis, in whom there was an accessory pancreatic duct which opened into a duodenal duplication cyst. Epigastric pain associated with vomiting and fever were present. Laboratory data showed leukocytosis and hyperamylasemia. An upper G-I series revealed a stricture at the pyloric region. At operation, a spherical mass 6 cm in diameter was present between the greater curvature of the pyloric region and the head of the pancreas.
The cyst was removed by hemigastrectomy and partial resection of the head of the pancreas. The accessory pancreatic duct opened into the cyst. Taking particular note of the communication with the pancreatic duct, this cyst was diagnosed as one of duodenal duplication. A duodenal duplication cyst communicating with the accessory pancreatic duct is rare and is dissimilar to any of the 20 cases of duodenal duplication reported in Japan since 1960. abstract_id: PUBMED:35986117 Pancreatic ductal anatomy: more than meets the eye. Purpose: Anatomical variations of the pancreas are relatively frequent and often understudied. The ductal system of the pancreas has multiple variations, which are not frequently reported in the literature. Materials And Methods: The anatomy of the pancreas was studied through macroscopic anatomical dissection on 50 organ complexes (the pancreas, spleen, and duodenum) donated to the department of human anatomy, from patients, who died of causes not related to pancreatic diseases. Results: In type I, the main pancreatic duct (Wirsung's duct, MPD) and the accessory pancreatic duct (Santorini's duct, APD) were merged but most of the head was drained by the MPD (10% of cases). In type II, the MPD and APD were merged but most of the head was drained by the APD (4% of cases). In type III, the APD was absent and the head was drained by the MPD (14% of cases). In type IV, there was an inverted pancreas divisum where the ducts did not merge but each drained a part of the head (6% of cases). Classical pancreas divisum where the ducts did not merge but each drained a part of the head was considered as type V (4% of cases). In type VI, the MPD and APD merged and each drained a part of the head (48% of cases). In type VII, the MPD and APD merged but the upper part of the head was drained by the main pancreatic duct (4% of cases). In type VIII, the MPD and APD merged but the lower part of the head was drained by the main pancreatic duct (4% of cases). In the IX type, the MPD and APD merged but the head was drained by the branches of the MPD (6% of cases). Conclusions: There are several drainage patterns of the pancreas. In some cases, one of the ducts provides more drainage of the gland than the other. This is clinically relevant since blockage of the main source of drainage leads to pancreatic juice stasis. It also explains cases when partial or total blockage of the duct results in the pancreatitis of an isolated zone. Answer: The research suggests that a patent accessory pancreatic duct (APD) may play a role in preventing acute pancreatitis. Several studies have found that the patency of the APD is significantly lower in patients with acute pancreatitis compared to control groups without pancreatitis. For instance, one study found that the patency of the APD was only 17% in patients with acute pancreatitis, which was significantly lower than the 43% patency observed in control cases (PUBMED:14696497). Another study estimated the APD patency in acute pancreatitis patients to be only 21%, based on the terminal shape of the APD on endoscopic retrograde pancreatography (ERP), which was again significantly lower than in control cases (PUBMED:21175482). The APD is thought to function as a second drainage system that can reduce the pressure in the main pancreatic duct, thereby potentially preventing acute pancreatitis (PUBMED:15293129). 
The terminal shapes of the APD with low patency were found to be more frequent in acute pancreatitis patients, supporting the idea that a patent APD may help prevent the condition (PUBMED:20857518). Furthermore, a study on post-endoscopic retrograde cholangiopancreatography (ERCP) pancreatitis found that the estimated APD patency was only 16% in patients with post-ERCP pancreatitis, significantly lower than the 43% in controls, suggesting that a patent APD may also help prevent post-ERCP pancreatitis (PUBMED:20551660). In summary, the available evidence indicates that a patent accessory pancreatic duct may have a protective effect against the development of acute pancreatitis by serving as an alternative drainage route for pancreatic secretions, thereby reducing ductal pressure and the risk of pancreatitis.
Instruction: Are we all doing it wrong? Abstracts: abstract_id: PUBMED:31934519 An Update on Wrong-Site Spine Surgery. Study Design: Broad narrative review of current literature and adverse event databases. Objective: The aim of this review is to report the current state of wrong-site spine surgery (WSSS), whether the Universal Protocol has affected the rate, and the current trends regarding WSSS. Methods: An updated review of the current literature on WSSS, the Joint Commission sentinel event statistics database, and other state adverse event statistics databases was performed. Results: WSSS is an adverse event that remains a potentially devastating problem, and although the incidence is difficult to determine, the rate is low. However, given the potential consequences for the patient as well as the surgeon, WSSS remains an event that continues to be reported alarmingly as often as before the implementation of the Universal Protocol. Conclusions: A systems-based approach like the Universal Protocol should be effective in preventing wrong-patient, wrong-procedure, and wrong-sided surgeries if the established protocol is implemented and followed consistently within a given institution. However, wrong-level surgery can still occur after successful completion of the Universal Protocol. The surgeon is the sole provider who can establish the correct vertebral level during the operation, and therefore, it is imperative that the surgeon design and implement a patient-specific protocol to ensure that the appropriate level is identified during the operation. abstract_id: PUBMED:34088279 Analysis and outcomes of wrong site thyroid surgery. Background: In thyroid surgery, wrong-site surgery (WSS) is considered a rare event and seldom reported in the literature. Case Presentation: This report presents 5 WSS cases following thyroid surgery in a 20-year period. We stratified the subtypes of WSS into wrong target, wrong side, wrong procedure and wrong patient. Only planned and elective thyroid surgeries present WSS cases. The interventions were performed in low-volume hospitals, and subsequently, the patients were referred to our centres. Four cases of wrong-target procedures (thymectomies [n = 3] and lymph node excision [n = 1] performed instead of thyroidectomies) and one case of wrong-side procedure were observed in this study. Two wrong target cases resulting additionally in wrong procedure were noted. Wrong patient cases were not detected in the review. Patients experienced benign, malignant, or suspicious pathology and underwent traditional surgery (no endoscopic or robotic surgery). 40% of WSS led to legal action against the surgeon or a monetary settlement. Conclusion: WSS is also observed in thyroid surgery. Considering that reports regarding the serious complications of WSS are not yet available, these complications should be discussed with the surgical community. Etiologic causes, outcomes, preventive strategies of WSS and expert opinion are presented. abstract_id: PUBMED:25673117 Wrong site surgery: Incidence, risk factors and prevention. Background: Wrong site surgery defines a category of rare but totally preventable complications in surgery and other invasive disciplines. Such complications could be associated with severe morbidity or even death. As such complications are entirely preventable, wrong site surgery has been declared by the World Health Organization to be a "never event".
Material And Methods: A selective search of the PubMed database using the MeSH terms "wrong site surgery", "wrong site procedure", "wrong side surgery" and "wrong side procedure" was performed. Results: The incidence of wrong site surgery has been estimated at 1 out of 112,994 procedures; however, the number of unreported cases is estimated to be higher. Although wrong site surgery occurs in all surgical specialities, the majority of cases have been recorded in orthopedic surgery. Breakdown in communication has been identified as the primary cause of wrong site surgery. Risk factors for wrong site surgery include time pressure, emergency procedures, multiple procedures on the same patient by different surgeons and obesity. Checklists have the potential to reduce or prevent the occurrence of wrong site surgery. Conclusion: The awareness that to err is human and the individual willingness to recognize and prevent errors are the prerequisites for reducing and preventing wrong site surgery. abstract_id: PUBMED:21660270 Wrong-level surgery: A unique problem in spine surgery. Background: Even though a lot of effort has gone into preventing operating at the wrong site and on the wrong patient, wrong-level surgery is a unique problem in spine surgery. Methods: The current method to prevent wrong-level spine surgery relies mainly on intra-operative X-ray. Unfortunately, because of the unique features and anatomy of the spinal column, wrong-level spine surgery still happens. There are situations in which, even with intraoperative X-ray, the correct level still cannot be reliably identified. Results: Examples of patients whose surgery can easily be performed on the wrong level are illustrated. A protocol to prevent wrong-level spine surgery from being performed is developed. Conclusion: The consequence of wrong-level spine surgery is not only another surgery at the intended level; it is usually also associated with a lawsuit. Strictly following this protocol can prevent wrong-level spine surgery. abstract_id: PUBMED:34221617 A perspective on wrong level, wrong side, and wrong site spine surgery. Background: Four of the most common "errors" in spine surgery include: operating on the wrong patient, doing the wrong procedure, performing wrong-level surgery (WLS), and/or performing wrong-sided surgery (WSS). Although preoperative verification protocols (i.e. Universal Protocol, routine Time-Outs, and using the 3 R's (i.e. right patient, right procedure, right level/side)) have largely limited the first two "errors," WLS and WSS still occur with an unacceptably high frequency. Methods: In 20 studies, we identified the predominant factors contributing to WLS/WSS: unusual/anatomical anomalies/variants (i.e. sacralized lumbar vertebrae, lumbarized sacral vertebra, Klippel-Feil vertebrae, block vertebrae, butterfly vertebrae, obesity/morbid obesity), inadequate/poor interpretation of X-rays/fluoroscopic intraoperative images, and failure to follow different verification protocols. Results: "Human error" was another major risk factor contributing to the failure to operate at the correct level/side (WLS/WSS). Factors comprising "human error" included: surgeon/staff fatigue, rushing, emergency circumstances, lack of communication, hierarchical behavior in the operating room, and failure to "speak up". Conclusion: Utilizing the Universal Protocol, routine Time-Outs, and the 3 R's largely avoids operating on the wrong spine patient and performing the wrong procedure.
However, these guidelines have not yet sufficiently reduced the frequency of WLS and WSS. Greater recognition of the potential pitfalls contributing to WLS/WSS, as reviewed in this perspective, should better equip spine surgeons to avert/limit such "errors" in the future. abstract_id: PUBMED:37996298 Wrong-level spine surgery: A multicenter retrospective study. Background: Wrong-level spine surgery is a rare but serious complication of spinal surgery that increases patient harm and legal risks. Although such surgeries have been reported by many spine surgeons, they have not been adequately investigated. Therefore, this study aimed to examine the causes and preventive measures for wrong-level spine surgeries. Methods: This study analyzed cases of wrong-level spine surgeries from 10 medical centers. Factors such as age, sex, body mass index, preoperative diagnosis, surgical details, surgeon's experience, anatomical variations, responses, and causes of the wrong-level spine surgeries were studied. The methods used by the surgeons to confirm the surgical level were also surveyed using a questionnaire for each surgical procedure and site. Results: Eighteen cases (13 men and 5 women; mean age, 61.2 years; mean body mass index, 24.5 kg/m2) of wrong-level spine surgeries were evaluated in the study. Two cases involved emergency surgeries, three involved newly introduced procedures, and five showed anatomical variations. Wrong-level spine surgeries occurred more frequently in patients who underwent posterior thoracic surgery than in those who underwent other techniques (p < 0.01). Twenty-two spinal surgeons described the methods used to confirm the levels preoperatively and intraoperatively. In posterior thoracic laminectomies, half of the surgeons used preoperative markers to confirm the surgical level and did not perform intraoperative fluoroscopy. In posterior thoracic fusion, all surgeons confirmed the level using fluoroscopy preoperatively and intraoperatively. Conclusions: Wrong-level spine surgeries occurred more frequently in posterior thoracic surgeries. The thoracic spine lacks the anatomical characteristics observed in the cervical and lumbar spine. The large drop in the spinous process can make it challenging for surgeons to determine the positional relationship between the spinous process and the vertebral body. Moreover, unfamiliarity with the technique and anatomical variations were also risk factors for wrong-level spine surgeries. abstract_id: PUBMED:28195826 Wrong-Site Surgery in California, 2007-2014. Objective: The implementation of a universal surgical safety protocol in 2004 was intended to minimize the prevalence of wrong-site surgery (WSS). However, complete elimination of WSS in the operating room continues to be a challenge. The purpose of this study is to evaluate the prevalence and etiology of WSS in the state of California. Study Design: A retrospective study of all WSS reports investigated by the California Department of Public Health between 2007 and 2014. Methods: Prevalence of overall and specialty-specific WSS, causative factors, and recommendations on further improvement are discussed. Results: A total of 95 cases resulted in incident reports to the California Department of Public Health and were included in our study. The most common errors were operating on the wrong side of the patient's body (n = 60, 62%), performing the wrong procedure (n = 21, 21%), operating on the wrong body part (n = 12, 12%), and operating on the wrong patient (n = 2, 2%).
WSS was most prevalent in orthopedic surgery (n = 33, 35%), followed by general surgery (n = 26, 27%) and neurosurgery (n = 16, 17%). All 3 otolaryngology WSS cases in California were associated with the ear. Conclusion: WSS continues to surface despite national efforts to decrease its prevalence. Future research could establish best practices to avoid these "never events" in otolaryngology and other surgical specialties. abstract_id: PUBMED:23960510 Experience of wrong-site tooth extraction among Nigerian dentists. Objective: To report the experience of wrong-site tooth extraction among Nigerian dentists. Study Design: A self-administered questionnaire was distributed among a cross-section of Nigerian dentists. Information requested included personal experience on wrong-site tooth/teeth extraction and its after-effect, possible reasons for wrong-site tooth extraction and documentation of the event in patients' case records. Respondents were also asked if they were aware of any colleagues who had previously experienced wrong-site tooth extraction, the possible legal implications of the event, and if they were aware of the universal protocol for preventing wrong site, wrong procedure, and wrong person surgery. Results: Twenty-two (13%) of the respondents reported having extracted a wrong tooth. The event occurred within 5 years after graduation in most cases. Most respondents (53.6%) informed the patient immediately after the event. Only 68% of the respondents documented the event in the patient's case record. The most common reasons for wrong-site tooth extraction were heavy workload, presence of multiple condemned teeth and miscommunication between dentists. Fifty-five percent of respondents were aware of a colleague who had extracted a wrong tooth. The most probable legal implication of wrong-site tooth extraction according to the respondents was litigation by the patient. Only 25% of dentists were aware of a universal protocol for preventing wrong-site surgery. Conclusions: Wrong tooth/teeth extraction is not an uncommon event in the studied environment. The need to be familiar with the universal protocol on wrong-site surgery and its legal implications is highlighted. abstract_id: PUBMED:23730251 Wrong site surgery-where are we and what is the next step? Background: Wrong site surgery is estimated to occur 40 times per week in hospitals and clinics in the USA. The universal protocol was implemented by the joint commission board of commissioners to address wrong site, wrong procedure, and wrong person surgery. Discussion: The universal protocol has three principal components: preoperative verification, marking of the operative site, and a time-out. Despite this organized approach to this problem, current data do not demonstrate any progress. In fact some data suggest that the problem may be getting worse. It is apparent that a process relying on surgeon and surgical team memory is doomed to ultimate failure. Recommendations are made for a more in-depth checklist process based on the recommendations of the World Health Organization, reports in the literature of known areas of weakness in the current process, and personal experience in hopes of establishing a more bulletproof system to avoid wrong site procedures. abstract_id: PUBMED:23508946 Patient safety in spine surgery: regarding the wrong-site surgery. Patient safety regarding wrong site surgery has been one of the priority issues in surgical fields including that of spine care.
Since the wrong-side surgery in the DM foot patient was reported in the public mass media in 1996, the wrong-site surgery issue has attracted wide public interest regarding patient safety. Despite the many wrong-site surgery prevention campaigns in spine care such as the 'operate through your initials' program by the Canadian Orthopaedic Association, the 'sign your site' program by the American Academy of Orthopedic Surgeons, the 'sign, mark and X-ray' program by the North American Spine Society, and the Universal Protocol program by the Joint Commission, the incidence of wrong-site surgery has not decreased. To prevent wrong-site surgery in spine surgeries, spine surgeons must put patient safety first, complying with the hospital policies regarding patient safety. In the operating rooms, the surgeons need to do their best to level the hierarchy, enabling all to speak up if any patient safety concerns are noted. Changing the operating room culture is an essential part of patient safety in spine surgery. Answer: The question "Are we all doing it wrong?" seems to be a rhetorical one, possibly referring to the prevalence and prevention of wrong-site surgeries (WSS), including wrong-level spine surgeries. The provided abstracts collectively suggest that despite the implementation of protocols and safety measures, such as the Universal Protocol, wrong-site surgeries, including wrong-level spine surgeries, continue to occur at a concerning rate. The abstracts indicate that wrong-site surgeries are rare but serious and preventable events that can lead to severe morbidity or even death (PUBMED:25673117). Wrong-site spine surgery (WSSS) remains a potentially devastating problem, and the rate of occurrence has not significantly decreased after the implementation of the Universal Protocol (PUBMED:31934519). Wrong-level surgery is a unique problem in spine surgery, and despite efforts to prevent it, such as intraoperative X-ray, it still happens due to the unique features and anatomy of the spinal column (PUBMED:21660270).
This suggests that current strategies may not be sufficient, and there is a need for continued efforts to improve compliance, education, and perhaps the development of more robust systems to prevent these errors.
Instruction: Childhood catatonia, autism and psychosis past and present: is there an 'iron triangle'? Abstracts: abstract_id: PUBMED:23350770 Childhood catatonia, autism and psychosis past and present: is there an 'iron triangle'? Objective: To explore the possibility that autism, catatonia and psychoses in children are different manifestations of a single underlying form of brain pathology - a kind of 'Iron Triangle' of symptomatology - rather than three separate illnesses. Method: Systematic evaluation of historical case literature on autism to determine if catatonic and psychotic symptoms accompanied the diagnosis, as is found in some challenging present-day cases. Results: It is clear from the historical literature that by the 1920s all three diagnoses in the Iron Triangle - catatonia, autism and childhood schizophrenia - were being routinely applied to children and adolescents. Furthermore, it is apparent that children diagnosed with one of these conditions often qualified for the other two as well. Although conventional thinking today regards these diagnoses as separate entities, the presence of catatonia in a variety of conditions is being increasingly recognized, and there is also growing evidence of connections between childhood-onset psychoses and autism. Conclusion: Recognition of a mixed form of catatonia, autism and psychosis has important implications for both diagnosis and treatment. None of the separate diagnoses provides an accurate picture in these complex cases, and when given single diagnoses such as 'schizophrenia', the standard treatment options may prove markedly ineffective. abstract_id: PUBMED:36719136 The Triad of Childhood-Onset Schizophrenia, Autism Spectrum Disorder, and Catatonia: A Case Report. Childhood-onset schizophrenia (COS) is a rare and severe form of schizophrenia with an estimated prevalence of 1/10,000. Schizophrenia and Autism spectrum disorder (ASD) have shared phenotypic features and shared genetic etiology. There is growing research surrounding the co-occurrence of psychomotor syndromes like catatonia with neurodevelopmental disorders like ASD or psychiatric disorders like schizophrenia. In 2013, Shorter and Wachtel described a phenomenon of the 'Iron Triangle' where COS, ASD, and catatonia often co-occur. The Iron Triangle theory is based on observation of historical case literature, which showed that all three diagnoses in the Iron Triangle were routinely assigned to children and adolescents. The pattern of this "Iron Triangle" suggests there may be a single underlying pathology resulting in a unique mixed form of catatonia, autism, and psychosis. We describe the case of a boy with sequential development of COS, ASD, and catatonia who also has syndromic facial and musculoskeletal features. This case highlights overlapping diagnostic features of these three disorders and can help us better understand how "hidden" features of catatonia may occur in patients with COS or ASD but go unrecognized, because they are grouped as features under autism/schizophrenia rather than a distinct diagnosis of catatonia. Further study is warranted to elucidate if this phenotypic pattern constitutes a new single diagnosis that is not well understood, an endophenotype of schizophrenia, or if this is the result of phenomenological overlap between catatonia, ASD, and COS. abstract_id: PUBMED:16355606 Schizophrenia and related disorders in children and adolescents. 
This paper reviews the concept and recent studies on childhood and adolescent psychoses with special reference to schizophrenia. After a short historical introduction, the definition, classification, and epidemiology of child- and adolescent-onset psychoses are described, pointing out that some early-onset psychotic states seem to be related to schizophrenia (such as infantile catatonia) and others not (such as disintegrative disorder). The frequency of childhood schizophrenia is less than 1 in 10,000 children, but there is a remarkable increase in frequency between 13 and 18 years of age. Currently, schizophrenia is diagnosed according to ICD-10 and DSM-IV criteria. The differential diagnosis includes autism, disintegrative disorder, multiplex complex developmental disorder (MCDD) or multiple developmental impairment (MDI), affective psychoses, Asperger syndrome, drug-induced psychosis and psychotic states caused by organic disorders. With regard to etiology, there is strong evidence for the importance of genetic factors and for neurointegrative deficits preceding the onset of the disorder. Treatment is based upon a multimodal approach including antipsychotic medication (mainly by atypical neuroleptics), psychotherapeutic measures, family-oriented measures, and specific measures of rehabilitation applied in about 30% of the patients after completion of inpatient treatment. The long-term course of childhood- and adolescent-onset schizophrenia is worse than in adulthood schizophrenia, and the patients with manifestation of the disorder below the age of 14 have a very poor prognosis. abstract_id: PUBMED:33602769 Catatonia in a 10-year-old boy with early childhood neglect and disruptive behaviours in psychiatric residential treatment. Catatonia is a rare medical condition that can be fatal in paediatric patients if left untreated. It is often misdiagnosed or underdiagnosed. There are no published cases of catatonia in traumatised children living in long-term psychiatric care. However, there is some evidence that childhood maltreatment in its variant forms may be a risk for the development of catatonia in children and adolescents. In this case, a 10-year-old boy with intrauterine exposure to alcohol and multiple drugs and early childhood deprivation developed neuroleptic-induced catatonia in an intensive psychiatric residential treatment centre approximately 24 hours after receiving a first-time intramuscular injection of haloperidol 5 mg for acute agitation. He had no known predisposing factors for catatonia such as psychosis, autism, neurological or general medical problems. This 10-year-old child's early childhood trauma should be considered as a predisposing factor for catatonia. abstract_id: PUBMED:31994515 Thrombodynamic correlates of catatonia severity in children with autism. Aim: To study a correlation between the values of thrombodynamics parameters of hypercoagulation measured by the thrombodynamics test and the severity of catatonia in children with infantile psychosis in childhood autism (F84.02). Material And Methods: Twenty-four patients (22 boys and 2 girls) aged from 3 to 13 years were studied. The severity of catatonia was determined by BFCRS. A thrombodynamic test was performed in platelet-free plasma using the analyzer T-2 Thrombodynamics Device (Hemacore LLC, Russia). Results: Thrombodynamic (TD) parameters of clot growth rates from the activator (V, Vi and Vst) were statistically significantly higher than normal values.
Similar results were obtained for Clot Size at 30 min (CS, μm): Tlag and D values were within normal limits. The values of Time of appearance of spontaneous clots (Tsp min) were less than the lower limit values for the norm (30 min). Correlation analysis showed that the severity of catatonia is positively correlated with the initial clot growth rate (Vi) (p=0.009) and negatively with Tsp (p=0.002). With an increase in the time of appearance of spontaneous clots (due to a decrease in the procoagulant activity of platelet microparticles in the plasma of patients), the severity of catatonia in children with ASD decreases. Conclusion: The results suggest that normalizing plasma and platelet hemostasis is important for increasing the effectiveness of treatment of patients with ASD with catatonia. abstract_id: PUBMED:15002313 Clinical aspects of childhood autism with endogenous manifesting psychosis and mental retardation. Sixty-eight patients with childhood autism manifesting with psychoses of different psychopathological structure at the age up to 3 years have been observed. Four psychotic types--catatonic-regressive (19%), polymorphic-regressive (25%), catatonic with mental retardation during psychosis (32%), polymorphic with retardation of mental development during psychosis (24%)--are described. Correlations between a structure of deficit state and these psychoses were determined: 1) development of severe autistic state with pronounced mental development retardation and a lack of speech in most cases of catatonic-regressive psychoses; 2) autistic state of moderate severity with mental development retardation of different severity and phrase speech formation in less than half of the patients in polymorphic-regressive psychosis; 3) development of autistic state of moderate severity with mild mental development retardation and phrase speech formation in the majority of the cases in catatonic psychosis; 4) formation of moderately pronounced autistic state in the absence of mental development retardation or its mild degree and phrase speech acquirement after polymorphic psychosis. abstract_id: PUBMED:23773168 The potential role of electroconvulsive therapy in the 'Iron Triangle' of pediatric catatonia, autism, and psychosis. N/A abstract_id: PUBMED:37122292 Case Report: Clinical delineation of CACNA1D mutation: New cases and literature review. Background: Calcium ions are involved in several human cellular processes; nevertheless, the relationship between calcium channelopathies (CCs) and autism spectrum disorder (ASD) or intellectual disability (ID) has been previously investigated. We delineate the spectrum of clinical phenotypes and the symptoms associated with a syndrome caused by an inherited gain-of-function mutation in CACNA1D in a family with a history of neuropsychiatric disorders. We also review the clinical and molecular phenotype of previously reported variants of CACNA1D. Case Presentation: We report the case of a 9-year-old female patient, diagnosed with ASD, severe ID, hyperactivity, and aggressive impulsive behaviors. The father, who was 65 years old at the time of his death, had ID and developed major depressive disorder with catatonic features and nihilistic delusion, followed by rapidly progressive dementia. He died after experiencing prolonged seizures followed by post-cardiac arrest. The patient's sister was a 30-year-old woman, known to have severe ID with aggressive behaviors and sleep disorders. The sister has been diagnosed with bipolar disorder and psychosis.
Through whole exome sequencing, a heterozygous previously identified and functionally characterized missense likely pathogenic variant was identified in the CACNA1D gene NM_001128840.3: c.2015C > T (p.Ser672Leu). These findings are consistent with the genetic diagnosis of autosomal dominant primary aldosteronism, seizures, and neurological abnormalities. This variant was found in the heterozygous status in the patient, her father, and her affected sister. Conclusion: This case report will help to determine the key clinical features of this syndrome, which exhibits variable clinical presentations. abstract_id: PUBMED:16697288 Classification matters for catatonia and autism in children. Despite its chequered history, Kahlbaum's 1874 description of catatonia (tension insanity) and its categorization as a clinical illness is in outline still valid. Kahlbaum also acknowledged the existence of catatonia in children. Corresponding case studies have also been analyzed. The originators and disciples of the Wernicke-Kleist-Leonhard school proved catatonia in early childhood as a discrete entity with specific psychopathology. This does not mean that catatonic symptoms do not occur in other illnesses and in particular in organic psychoses. These are, however, of a totally different nature. Autism, as first described in connection with schizophrenic negativism by Bleuler in 1910, is one of the key symptoms of schizophrenia. As identified by Kanner in 1943, abnormal social interaction and communication, together with retarded development, are the main characteristics of autism in early childhood. Asperger's concept of autistic disorder (1944), although based on psychopathological theory, did not include retardation in development as an aspect. Consequently, autistic behavior can occur in a variety of mental disorders. Research into possible etiological and pathogenetic factors has been undertaken, but no clear link found as yet. abstract_id: PUBMED:19190507 Catatonia and autism: a historical review, with implications for electroconvulsive therapy. Current autism research is historically separated from catatonia and other childhood psychotic disorders, although catatonia and autism share several common symptoms (mutism, echolalia, stereotypic speech and repetitive behaviors, posturing, grimacing, rigidity, mannerisms, and purposeless agitation). Electroconvulsive therapy (ECT) effectively treats catatonia and catatonia-related conditions of intractable compulsions, tics, and self-injury in people with autism. We assess the incidence of catatonic symptoms in autism, examine emerging ECT indications in people with autism and related developmental disorders, and encourage ethical debate and legal-administrative action to assure equal access to ECT for people with autism. Answer: The concept of an 'Iron Triangle' suggests that childhood catatonia, autism, and psychosis may not be three separate illnesses but could represent different manifestations of a single underlying form of brain pathology. Historical case literature indicates that by the 1920s, diagnoses of catatonia, autism, and childhood schizophrenia were routinely applied to children and adolescents, with many qualifying for more than one of these conditions (PUBMED:23350770). This pattern of co-occurrence has led to the hypothesis that there may be a single underlying pathology resulting in a mixed form of catatonia, autism, and psychosis (PUBMED:36719136).
The recognition of this mixed form has important implications for diagnosis and treatment, as none of the separate diagnoses may provide an accurate picture in complex cases. For instance, when given a single diagnosis such as 'schizophrenia', standard treatment options may prove markedly ineffective (PUBMED:23350770). The case of a boy with sequential development of childhood-onset schizophrenia (COS), autism spectrum disorder (ASD), and catatonia, who also exhibited syndromic facial and musculoskeletal features, highlights the overlapping diagnostic features of these disorders. This case suggests that "hidden" features of catatonia may occur in patients with COS or ASD but go unrecognized because they are grouped under autism/schizophrenia rather than as a distinct diagnosis of catatonia (PUBMED:36719136). Further study is warranted to elucidate whether this phenotypic pattern constitutes a new single diagnosis that is not well understood, an endophenotype of schizophrenia, or if it is the result of phenomenological overlap between catatonia, ASD, and COS (PUBMED:36719136). The historical and current perspectives on the relationship between childhood catatonia, autism, and psychosis support the notion of an 'Iron Triangle', indicating a need for a more integrated approach to diagnosis and treatment of these complex cases.
Instruction: Measuring the appropriateness of prescribing in primary care: are current measures complete? Abstracts: abstract_id: PUBMED:16336285 Measuring the appropriateness of prescribing in primary care: are current measures complete? Background And Objectives: Appropriateness of prescribing is often assessed by standard instruments. We wished to establish whether judgements of appropriateness that included patients' perspectives and contextual factors could lead to different conclusions when compared with commonly used instruments. We also wished to explore the predictive accuracy of these instruments. Methods: The design was interviews of patients, audio recordings of the consultation and interviews of the doctors, in varied primary care practices in England. Participants were patients who were likely to discuss a medication issue. The outcome measures were judgements of appropriateness made by the researchers and by two instruments: the Prescribing Appropriateness Index and the Medication Appropriateness Index. Implications for the predictive accuracy of the measures were also investigated. Results: From 35 cases there was agreement between the judges and the instruments in 22 cases: 16 were appropriate and 6 inappropriate. Of 10 cases classified as inappropriate by the instruments, the judges thought four were appropriate. Of 18 cases classified as appropriate by the instruments, two were considered inappropriate by the judges. In seven cases the prescribing decisions could not be classified by the instruments because the decision was to not prescribe. Conclusions: Current measures of appropriateness of prescribing depend predominantly on pharmacological criteria, and so do not represent cases that would be judged appropriate when including the patient's views and contextual factors. If most prescribing is appropriate then use of these measures may lead to more false negatives than real negatives. The instruments should be renamed as measures of 'pharmacological appropriateness' and are useful where the incidence of this type of inappropriate prescribing is relatively high. abstract_id: PUBMED:29490061 Defining the appropriateness and inappropriateness of antibiotic prescribing in primary care. Objectives: To assess the appropriateness of prescribing systemic antibiotics for different clinical conditions in primary care, and to quantify 'ideal' antibiotic prescribing proportions in conditions for which antibiotic treatment is sometimes but not always indicated. Methods: Prescribing guidelines were consulted to define the appropriateness of antibiotic therapy for the conditions that resulted in antibiotic prescriptions between 2013 and 2015 in The Health Improvement Network (THIN) primary care database. The opinions of subject experts were then formally elicited to quantify ideal antibiotic prescribing proportions for 10 common conditions. Results: Of the antibiotic prescriptions in THIN, 52.5% were for conditions that could be assessed using prescribing guidelines. Among these, the vast majority of prescriptions (91.4%) were for conditions where antibiotic appropriateness is conditional on patient-specific indicators. Experts estimated low ideal prescribing proportions in acute, non-comorbid presentations of many of these conditions, such as cough (10% of patients), rhinosinusitis (11%), bronchitis (13%) and sore throat (13%). Conversely, antibiotics were believed to be appropriate in 75% of non-pregnant women with non-recurrent urinary tract infection.
In impetigo and acute exacerbation of chronic obstructive pulmonary disease, experts clustered into distinct groups that believed in either high or low prescribing. Conclusions: In English primary care, most antibiotics are prescribed for conditions that only sometimes require antibiotic treatment, depending on patient-specific indicators. Experts estimated low ideal prescribing proportions in many of these conditions. Incomplete prescribing guidelines and disagreement about prescribing in some conditions highlight further research needs. abstract_id: PUBMED:26297239 Impact of an enhanced pharmacy discharge service on prescribing appropriateness criteria: a randomised controlled trial. Background: Older people are at increased risk of drug-related problems (DRPs) caused by inappropriate use or underuse of medications which may be increased during care transitions. Objective: To examine the effects of applying a validated prescribing appropriateness criteria-set during medication review in a cohort of older (≥65 years) Australians at the time of discharge from hospital. Setting: Private hospital and homes of older patients in Sydney, Australia. Methods: Cognitively well English speaking patients aged 65 years or over taking five or more medications were recruited. A prescribing appropriateness criteria-set and SF-36 health-related quality of life health (HRQoL) survey were applied to all patients at discharge. Patients were then randomly assigned to receive either usual care (control, n = 91) or discharge medication counselling and a medication review by a clinical pharmacist (intervention, n = 92). Medication review recommendations were sent to the general practitioners of intervention group patients. All patients were followed up at 3 months post discharge, where the prescribing appropriateness criteria-set was reapplied and HRQoL survey repeated. MAIN OUTCOME MEASURES change in the number of prescribing appropriateness criteria met; change in HRQoL; number and causes of DRPS identified by medication review; intervention patient medication recommendation implementation rates. Results: There was no significant difference in the number of criteria applicable and met in intervention patients, compared to control patients, between follow-up and discharge (0.09 ≤ p ≤ 0.97). While the difference between groups was positive at follow-up for SF-36 scores, the only domain that reached statistical significance was that for vitality (p = 0.04). Eighty-eight intervention patient medication reviews identified 750 causes of DRPs (8.5 ± 2.7 per patient). No causes of DRPs were identified in four patients. Of these causes, 76.4 % (573/750) were identified by application of the prescribing appropriateness criteria-set. GPs implemented a relatively low number (42.4 %, 318/750) of recommendations. Conclusion: Application of a prescribing appropriateness criteria-set during medication review in intervention patients did not increase the number of criteria met, nor result in a significant improvement in HRQoL. Higher recommendation implementation rates may require additional facilitators, including a higher quality of collaboration. abstract_id: PUBMED:34356764 Antimicrobial Prescribing in the Emergency Department; Who Is Calling the Shots? Objective: Inappropriate antimicrobial prescribing in the emergency department (ED) can lead to poor outcomes. It is unknown how often the prescribing clinician is guided by others, and whether prescriber factors affect appropriateness of prescribing. 
This study aims to describe decision making, confidence in, and appropriateness of antimicrobial prescribing in the ED. Methods: Descriptive study in two Australian EDs using both questionnaire and medical record review. Participants were clinicians who prescribed antimicrobials to patients in the ED. Outcomes of interest were level of decision-making (self or directed), confidence in indication for prescribing and appropriateness (5-point Likert scale, 5 most confident). Appropriateness assessment of the prescribing event was by blinded review using the National Antibiotic Prescribing Survey appropriateness assessment tool. All analyses were descriptive. Results: Data on 88 prescribers were included, with 61% making prescribing decisions themselves. The 39% directed by other clinicians were primarily guided by more senior ED and surgical subspecialty clinicians. Confidence that antibiotics were indicated (Likert score: 4.20, 4.35 and 4.35) and appropriate (Likert score: 4.07, 4.23 and 4.29) was similar for juniors, mid-level and senior prescribers, respectively. Eighty-five percent of prescriptions were assessed as appropriate, with no differences in appropriateness by seniority, decision-making or confidence. Conclusions: Over one-third of prescribing was guided by senior ED clinicians or based on specialty advice, primarily surgical specialties. Prescriber confidence was high regardless of seniority or decision-maker. Overall appropriateness of prescribing was good, but with room for improvement. Future qualitative research may provide further insight into the intricacies of prescribing decision-making. abstract_id: PUBMED:36753656 Comparing the appropriateness of antimicrobial prescribing among medical patients in two tertiary hospitals in Malaysia. Introduction: Malaysia is an upper-middle-income country with national antimicrobial stewardship programs in place. However, hospitals in this country are faced with a high incidence of multidrug-resistant organisms and high usage of broad-spectrum antibiotics. Therefore, this study aimed to use a standardized audit tool to assess clinical appropriateness, guideline compliance, and prescribing patterns of antimicrobial use among medical patients in two tertiary hospitals in Malaysia to benchmark practice. Methodology: A prospective hospital-wide point prevalence survey was carried out by a multidisciplinary team in April 2019 at the University Malaya Medical Centre (UMMC) and the Hospital Canselor Tuanku Muhriz (HCTM), Kuala Lumpur, Malaysia. Data was collected from the patient's electronic medical records and recorded using the Hospital National Antimicrobial Prescribing Survey toolkit developed by the National Centre for Antimicrobial Stewardship, Australia. Results: The appropriateness of prescriptions was 60.1% (UMMC) and 67% (HCTM), with no significant difference between the two hospitals. Compliance with guidelines was 60.0% (UMMC) and 61.5% (HCTM). Amoxicillin-clavulanic acid was the most commonly prescribed antimicrobial (UMMC = 16.9%; HCTM = 11.9%). Conclusions: The appropriateness of antimicrobial prescribing in medical wards, compliance with guidelines, and prescribing patterns were similar between the two hospitals in Malaysia. The survey identified several areas of prescribing that would need targeted AMS interventions. abstract_id: PUBMED:24372683 Using periodic point-prevalence surveys to assess appropriateness of antimicrobial prescribing in Australian private hospitals. 
Background And Aims: Appropriateness of antimicrobial use is a measure of key importance in evaluating safety and quality of prescribing but has been difficult to define and assess on a wide scale. Published work is limited and has generally focused on tertiary public hospitals, whereas the private sector provides a significant proportion of care in many countries. Information on prescribing in the private hospital context is needed to identify where intervention might be required. An antimicrobial prescribing survey tool was utilised to assess the appropriateness of antimicrobial prescribing among large private hospitals in Australia. Methods: 'Appropriateness' of antimicrobial therapy was evaluated by a team consisting of an infectious diseases physician and specialist infectious diseases pharmacist based on clear criteria. Results: Thirteen hospital-wide point-prevalence surveys were conducted. Three thousand, four hundred and seventy-two inpatient medication charts were reviewed to identify 1125 (32.4%) inpatients on 1444 antimicrobials. An indication was documented in 911 (63.1%) of surveyed prescriptions, and overall, 757 (52.4%) of antimicrobials were assessed as appropriate. Antimicrobials prescribed for treatment had a higher proportion of appropriateness when compared with antimicrobials prescribed for surgical prophylaxis (80.4% vs 40.6%). The main reason for a treatment prescription to be considered inappropriate was incorrect selection, while prolonged duration (>24 h) was the main reason for inappropriate surgical prophylaxis prescriptions. Conclusions: This study provides important data on antimicrobial prescribing patterns in Australian private hospitals. Results can be used to target areas for improvement, with documentation of indication and surgical antibiotic prophylaxis requiring initial attention. abstract_id: PUBMED:29895310 Developing a measure of polypharmacy appropriateness in primary care: systematic review and expert consensus study. Background: Polypharmacy is an increasing challenge for primary care. Although sometimes clinically justified, polypharmacy can be inappropriate, leading to undesirable outcomes. Optimising care for polypharmacy necessitates effective targeting and monitoring of interventions. This requires a valid, reliable measure of polypharmacy, relevant for all patients, that considers clinical appropriateness and generic prescribing issues applicable across all medications. Whilst there are several existing measures of potentially inappropriate prescribing, these are not specifically designed with polypharmacy in mind, can require extensive clinical input to complete, and often cover a limited number of drugs. The aim of this study was to identify what experts consider to be the key elements of a measure of prescribing appropriateness in the context of polypharmacy. Methods: Firstly, we conducted a systematic review to identify generic (not drug specific) prescribing indicators relevant to polypharmacy appropriateness. Indicators were subject to content analysis to enable categorisation. Secondly, we convened a panel of 10 clinical experts to review the identified indicators and assess their relative clinical importance. For each indicator category, a brief evidence summary was developed, based on relevant clinical and indicator literature, clinical guidance, and opinions obtained from a separate patient discussion panel.
A two-stage RAND/UCLA Appropriateness Method was used to reach consensus amongst the panel on a core set of indicators of polypharmacy appropriateness. Results: We identified 20,879 papers for title/abstract screening, obtaining 273 full papers. We extracted 189 generic indicators, and presented 160 to the panel grouped into 18 classifications (e.g. adherence, dosage, clinical efficacy). After two stages, during which the panel introduced 18 additional indicators, there was consensus that 134 indicators were of clinical importance. Following the application of decision rules and further panel consultation, 12 indicators were placed into the final selection. Panel members particularly valued indicators concerned with adverse drug reactions, contraindications, drug-drug interactions, and the conduct of medication reviews. Conclusions: We have identified a set of 12 indicators of clinical importance considered relevant to polypharmacy appropriateness. Use of these indicators in clinical practice and informatics systems is dependent on their operationalisation, and their utility (e.g. risk stratification, targeting and monitoring polypharmacy interventions) requires subsequent evaluation. Trial Registration: Registration number: PROSPERO (CRD42016049176). abstract_id: PUBMED:37362384 Development and Validation of Quality Measures for Testosterone Prescribing. Context: Accurate measures to assess appropriateness of testosterone prescribing are needed to improve prescribing practices. Objective: This work aimed to develop and validate quality measures around the initiation and monitoring of testosterone prescribing. Methods: This retrospective cohort study comprised a national cohort of male patients receiving care in the Veterans Health Administration who initiated testosterone during January or February 2020. Using laboratory data and diagnostic codes, we developed 9 initiation and 7 monitoring measures. These were based on the current Endocrine Society guidelines supplemented by expert opinion and prior work. We chose measures that could be operationalized using national VA electronic health record (EHR) data. We assessed criterion validity for these 16 measures by manual review of 142 charts. Main outcome measures included positive and negative predictive values (PPVs, NPVs), overall accuracy (OA), and Matthews Correlation Coefficients (MCCs). Results: We found high PPVs (>78%), NPVs (>98%), OA (≥94%), and MCCs (>0.85) for the 10 measures based on laboratory data (5 initiation and 5 monitoring). For the 6 measures relying on diagnostic codes, we similarly found high NPVs (100%) and OAs (≥98%). However, PPVs for measures of acute conditions occurring before testosterone initiation (ie, acute myocardial infarction or stroke) or new conditions occurring after initiation (ie, prostate or breast cancer) were much lower (0% to 50%) due to few or no cases. Conclusion: We developed several valid EHR-based quality measures for assessing testosterone-prescribing practices. Deployment of these measures in health care systems can facilitate identification of quality gaps in testosterone prescribing and improve care of men with hypogonadism. abstract_id: PUBMED:37627692 Comments by Microbiologists for Interpreting Antimicrobial Susceptibility Testing and Improving the Appropriateness of Antibiotic Therapy in Community-Acquired Urinary Tract Infections: A Randomized Double-Blind Digital Case-Vignette Controlled Superiority Trial.
In primary care, urinary tract infections (UTIs) account for the majority of antibiotic prescriptions. Comments from microbiologists on interpreting the antimicrobial susceptibility testing (AST) profile for urinalysis were made to improve the prescription of antibiotics. We aimed to explore the added value of these comments on the quality of antibiotic prescribing by a superiority double-blind digital randomized case-vignette trial among French general practitioners (GPs). One case vignette with (intervention) or without (control) a 'comment' after AST was randomly assigned to GPs. Among 815 participating GPs, 64.7% were women, with an average age of 37 years. Most (90.1%) used a computerized decision support system for prescribing antibiotics. Empirical antibiotic therapy was appropriate in 71.9% (95% CI, 68.8-75.0) of the cases, without differences between arms. The overall appropriateness of targeted antibiotic therapy (primary outcome) was not significantly increased when providing 'comments': 83.4% vs. 79.9% (OR = 1.26, 95% CI, 0.86-1.85). With the multivariate analysis, the appropriateness was improved 2-fold (OR = 2.38, 95% CI, 1.02-6.16) among physicians working in healthcare facilities. Among digital-affine young general practitioners, the adjunction of a 'comment' by a microbiologist to interpret urinalysis in community-acquired UTIs did not improve the overall level of appropriateness of the targeted antibiotic. abstract_id: PUBMED:23748124 Appropriateness of medication prescribing using the STOPP/START criteria in veterans receiving home-based primary care. Objective: To evaluate the appropriateness/inappropriateness of medication prescribing using the Screening Tool of Older Person's Prescriptions/Screening Tool to Alert doctors to Right Treatment (STOPP/START) criteria in elderly home-based primary care (HBPC) veterans and to understand the potential impact of the HBPC team on the STOPP/START criteria. Design: Retrospective chart review between September 1, 2007, and September 30, 2009. Setting: Edward Hines, Jr. VA Hospital, Hines, Illinois, a tertiary care referral center. Participants: Two hundred patients 65 years of age and older who had an initial interdisciplinary evaluation conducted by the HBPC team. Exclusion criteria included admission to the hospital between the initial and follow-up medication review, admission to the HBPC program for palliative or hospice care, or death before completion of a follow-up evaluation. Intervention: Appropriateness/inappropriateness of medication prescribing was assessed by comparing the medication list at the initial pharmacist medication review and follow-up review. Initial and follow-up STOPP and START scores were calculated for each patient. Main Outcome Measures: The primary outcome was to evaluate the appropriateness/inappropriateness of medication prescribing using the STOPP/START criteria. The secondary outcome was to understand the potential impact of the HBPC team on the STOPP/START criteria. Results: There was a statistically significant decrease in the STOPP score between the initial and follow-up medication review (P = 0.0014). There was not a statistically significant increase in the START score between the initial and follow-up medication review (P = 0.5720). Conclusions: A statistically significant decrease in the STOPP score was found. Although the START score increased, the change was not found to be statistically significant.
Answer: The current measures for assessing the appropriateness of prescribing in primary care are not considered complete. Research has indicated that standard instruments used to measure prescribing appropriateness, such as the Prescribing Appropriateness Index and the Medication Appropriateness Index, rely predominantly on pharmacological criteria and may not fully capture patient perspectives and contextual factors. This can lead to different conclusions about the appropriateness of prescribing decisions when compared with assessments that include these additional considerations. For instance, in a study involving varied primary care practices in England, there were discrepancies between the judgements made by researchers, which included patient views, and the conclusions drawn by the standard instruments. The study suggested that these instruments should be renamed as measures of 'pharmacological appropriateness' and noted that they might be more useful in settings with a higher incidence of pharmacologically inappropriate prescribing (PUBMED:16336285). Moreover, in the context of antibiotic prescribing, most prescriptions in primary care are for conditions that only sometimes require antibiotics, depending on patient-specific indicators. Experts have estimated low ideal prescribing proportions for many of these conditions, indicating that guidelines are incomplete and there is disagreement about prescribing in some conditions, highlighting the need for further research (PUBMED:29490061). Additionally, an enhanced pharmacy discharge service applying a validated prescribing appropriateness criteria-set did not significantly increase the number of criteria met or result in a significant improvement in health-related quality of life. This suggests that higher recommendation implementation rates may require additional facilitators, including better collaboration (PUBMED:26297239). In summary, while current measures provide some insight into pharmacological appropriateness, they may not fully represent the complexity of prescribing decisions in primary care, which should also consider patient preferences, clinical judgement, and the broader context of each patient's situation. Therefore, there is a need for more comprehensive measures that encompass these factors to more accurately assess the appropriateness of prescribing in primary care settings.
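As a companion note to the quality-measure abstracts above: the testosterone-prescribing study (PUBMED:37362384) validates its measures with positive and negative predictive values, overall accuracy and the Matthews correlation coefficient. The hedged sketch below shows how those four statistics follow from a single 2x2 confusion matrix; the chart-review counts are invented so that the outputs fall in the ranges the abstract reports (PPV > 78%, NPV > 98%, OA ≥ 94%, MCC > 0.85), since the study's actual contingency tables are not given in the abstract.

```python
import math

def validation_metrics(tp, fp, tn, fn):
    """PPV, NPV, overall accuracy (OA) and Matthews correlation coefficient
    (MCC) for one quality measure, from hypothetical chart-review counts."""
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    oa = (tp + tn) / (tp + fp + tn + fn)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = ((tp * tn) - (fp * fn)) / denom if denom else 0.0
    return {"PPV": ppv, "NPV": npv, "OA": oa, "MCC": mcc}

# Hypothetical review of 142 charts for a single laboratory-based measure
print(validation_metrics(tp=38, fp=4, tn=99, fn=1))
```

Low positive predictive values like the 0-50% reported for the diagnosis-code measures arise when true cases are rare, because even a handful of false positives then dominates the small pool of flagged charts.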
Instruction: Does surveillance for hepatocellular carcinoma in HCV cirrhotic patients improve treatment outcome mainly due to better clinical status at diagnosis? Abstracts: abstract_id: PUBMED:11100360 Does surveillance for hepatocellular carcinoma in HCV cirrhotic patients improve treatment outcome mainly due to better clinical status at diagnosis? Background/aims: Cirrhotic patients with hepatitis C virus infection are a group at higher risk for hepatocellular carcinoma. Conventional screening programs detect only few early hepatocellular carcinomas that are eligible for radical treatment. Our aim was to compare characteristics of patients, modality of treatment, and outcome in anti-HCV positive cirrhotics with hepatocellular carcinoma diagnosed during follow-up, or incidentally. Methodology: Sixty-one hepatocellular carcinomas were consecutively diagnosed in cirrhotic anti-HCV patients from 1993-1998 among which 34 during biannual ultrasonographic-biochemical follow-up and the others incidentally. Child-Pugh's score, alpha-fetoprotein levels, uni- or multifocality of the tumor, and treatment and survival of the patients were then analyzed on the basis of modality of diagnosis. Results: Surgical treatment was feasible only in a minority of patients. Radical and palliative treatment was more frequent among patients with HCC diagnosed during follow-up. Child-Pugh's score was lower in these patients, moreover their survival rate was better. Analysis of survival of patients treated with the same procedure and grouped by modality of diagnosis did not demonstrate any differences. Regression analysis showed that patients with a lower Child-Pugh's score, one nodule, with a tumor diagnosed during follow-up and who were treated had a better survival rate. Conclusions: In our population surveillance did not detect a higher percentage of curable HCC. Nevertheless the results of palliative treatment and of curative treatment overlapped. Overall better outcome was observed in patients with preserved liver function whatever the treatment. Surveillance allowed us to diagnose HCC in patients with these characteristics thus leading to an improved survival rate. abstract_id: PUBMED:20060710 Hepatocellular carcinoma surveillance and appropriate treatment options improve survival for patients with liver cirrhosis. Objective/aim: Hepatocellular carcinoma (HCC) surveillance is a common practice for patients with liver cirrhosis. The aims of the study were to assess impacts of surveillance and therapeutic options on survival of patients with HCC. Methods: A total of 1436 cirrhotic patients with newly diagnosed HCC were enrolled between January 2002 and December 2004. Patients with HCC detected within periodic surveillance were the surveillance group (n=318, 22.1%). The other patients with HCC incidentally detected were the non-surveillance group (n=1118, 77.95%). Initial treatment options were recorded and overall survival was analysed. Results: Compared with patients in the non-surveillance group, larger proportions of patients in the surveillance group possessed small tumours, at an early stage without vascular invasion or metastases, and afforded more curative treatment options including surgical resection, radiofrequency ablation and percutaneous ethanol injection. The overall survival was better for patients in surveillance (3-year survival rate: 59.1% versus 29.3%, p&lt;0.001), early stages by Barcelona Clinic Liver Cancer (BCLC) staging or curative treatment options. 
Multivariate analysis demonstrated surveillance, hepatitis aetiology, alpha-fetoprotein, tumour gross type, tumour stage and treatment options were associated factors for patients' survival. Moreover, surveillance patients in curative BCLC stage following the treatment guideline for HCC proposed by the American association for the study of liver disease (AASLD) had a significantly better 3-year survival rate (77.1% versus 55.2%, p&lt;0.001). Conclusions: HCC surveillance for cirrhotic patients could detect HCC at early and curative stages. However, appropriate treatment options following AASLD guideline further improve the survival for patients in early stage. abstract_id: PUBMED:31124041 Hepatocellular Carcinoma Surveillance in a Cohort of Chronic Hepatitis C Virus-Infected Patients with Cirrhosis. Background: Six-monthly hepatocellular carcinoma (HCC) screening in cirrhotic patients has been recommended since 2011. HCC prognosis is associated with diagnosis at an early stage. We examined the prevalence and correlates of 6-monthly HCC surveillance in a cohort of HCV-infected cirrhotic patients. Methods: Data were obtained from the medical records of patients receiving care from four hospitals between January 2011 and December 2016. Frequencies and logistic regression were conducted. Results: Of 2,933 HCV-infected cirrhotic patients, most were ≥ 60 years old (68.5%), male (62.2%), White (65.8%), and had compensated cirrhosis (74.2%). The median follow-up period was 3.5 years. Among these patients, 10.9% were consistently screened 6 monthly and 21.4% were never screened. Patients with a longer history of cirrhosis (AOR = 0.86, 95% CI = 0.80-0.93) were less likely to be screened 6 monthly while decompensated cirrhotic patients (AOR = 1.39, 95% CI = 1.06-1.81) and cirrhotic patients between 18 and 44 years (AOR = 2.01, 95% CI = 1.07-3.74) were more likely to be screened 6 monthly compared to compensated cirrhotic patients and patients 60 years and older respectively. There were no significant differences by race, gender, or insurance type. Conclusion: The prevalence of consistent HCC surveillance remains low despite formalized recommendations. One in five patients was never surveilled. Patients with a longer history of cirrhosis were less likely to be surveilled consistently despite their greater HCC risk. Improving providers' knowledge about current HCC surveillance guidelines, educating patients about the benefits of consistent HCC surveillance, and systemic interventions like clinical reminders and standing HCC surveillance protocols can improve guideline-concordant surveillance in clinical practice. abstract_id: PUBMED:31323163 Surveillance factors change outcomes in patients with hepatocellular carcinoma due to chronic hepatitis C virus infection in New Zealand. Although surveillance for Hepatocellular Carcinoma (HCC) with 6 monthly imaging is recommended for patients with cirrhosis secondary to chronic hepatitis C virus (HCV) infection, international studies report poor adherence and there is paucity of data on its effect on patient outcomes. The primary aim of this study was to review cases of HCC secondary to HCV to determine the impact of adherence with HCC surveillance on survival. A total of 520 patients with confirmed HCC secondary to chronic HCV from 31 January 2001 to 31 May 2018 were identified from a prospective national HCC database. 
Computerized clinical records, general practitioner referral letters and secondary care clinic letters were subsequently retrospectively analysed for methods of HCC detection. HCC was detected through routine surveillance in only 224 patients (44%). HCC was detected either incidentally or following the onset of symptoms in nonadherent (12%), suboptimally surveyed (3%), undiagnosed cirrhotic (12%) or recently diagnosed HCV patients (21%), or in patients never offered surveillance (2%). Routine surveillance improved overall survival, OR 0.41 (95% CI [0.32, 0.53], P < .0001), with an overall mean survival of 91.5 months (95% CI 76.4, 106.6) compared to 43.0 (95% CI 34.2, 51.9) for those patients not receiving regular surveillance. Outcome following diagnosis of HCC secondary to chronic HCV is determined by early detection when curative intervention is possible. Lack of diagnosis of HCV and nonadherence to HCC surveillance results in late diagnosis and poor outcomes. Under-diagnosis of HCV infection and lack of diagnosis of cirrhosis in patients known to have HCV infection reduce the benefit of current HCC surveillance strategies. abstract_id: PUBMED:31615295 Impact of Toll-like Receptor 2 (TLR2) and TLR4 Gene Variations on HCV Susceptibility, Response to Treatment and Development of Hepatocellular Carcinoma in Cirrhotic HCV Patients. Background and Aims: Genetic polymorphisms of Toll-like receptors (TLRs) have been proposed to affect susceptibility to HCV infection and progression to end-stage liver disease. This study was conducted to clarify the association of SNPs of TLR2 and TLR4 with clinical outcome of hepatitis C, response to treatment and development of HCC. Methods: The current study examined 3295 individuals from 725 families that were categorized into groups comprising chronic HCV (CH), spontaneous viral clearance (SC) and control subjects. Treated patients were classified into responders (RT) and non-responders (NRT). In addition, patients with liver cirrhosis (LC) and hepatocellular carcinoma (HCC) were also included. All subjects were genotyped for five single nucleotide polymorphisms (SNPs) of TLR2 and four SNPs of TLR4 and their haplotypes using allelic discrimination real-time PCR. Results: Results demonstrated a strong association of allele A of rs13105517 of TLR2 and allele C of rs10116253 of TLR4 with CH in comparison to the SC group. However, the peak of HCC risk was observed with allele C of rs3804099 of TLR2 and the C allele of rs10116253 of TLR4 (p < 0.001). A strong association was found with allele T of rs1816702 of TLR2 and allele A of rs5030728 of TLR4 in the non-responder group in comparison to responders (p < 0.001). Haplotypes CAGT of TLR4 and ATAC of TLR2 showed significant association with the CH and HCC groups in comparison to other groups. Conclusions: This study shows an association of minor alleles of TLR2 and TLR4 with outcome of HCV infection, response to therapy and development of HCC in cirrhotic patients. abstract_id: PUBMED:37599312 Prognostic value of S100A4 and Glypican-3 in hepatocellular carcinoma in cirrhotic HCV patients. Aims: Both S100A4 and Glypican-3 have been known to be engaged in HCC development and progression. This study aimed to evaluate S100A4 and GPC3 expression in HCC tissues as prognostic markers. Methods: Tissues from 70 patients with HCC and HCV-related cirrhosis were evaluated by immunohistochemistry using antibodies against S100A4 and GPC3 and compared with tumor-adjacent tissue (controls). All cases were followed for 40 months.
Results: GPC3 was more expressed in HCC (79%) than S100A4 (21%). S100A4 was more significantly expressed in cases showing metastasis, microscopic vascular emboli, necrosis, and grade III tumors. There was no relationship between overall survival and both S100A4 and GPC3. The only significant independent predictor for recurrence was decompensation (OR 3.037), while metastasis was significantly predicted by S100A4 expression (OR 9.63) and necrosis (OR 8.33). Conclusion: S100A4 might be used as a prognostic marker for HCC, while GPC3 is a reliable marker of HCC diagnosis. abstract_id: PUBMED:26020028 Surveillance, diagnosis, treatment, and outcome of liver cancer in Japan. Background: Hepatocellular carcinoma (HCC) is the fifth most common type of cancer and the third leading cause of cancer-related death worldwide. HCC is most common in Asia, but its prevalence is rapidly increasing in Western countries; consequently, HCC is a global medical issue that urgently needs to be addressed. Japan is the only developed country that has experienced both hepatitis B-related and hepatitis C-related HCC and has a long history of innovation when it comes to new diagnostic and therapeutic modalities, such as computed tomography angiography, anatomical resection, ablation, and transarterial chemoembolization. Among these innovations, a nationwide surveillance program was well established by the 1980s, and such a long-term national program does not exist anywhere else in the world. Summary: More than 60% of the initially detected HCCs in Japan are Barcelona Clinic Liver Cancer stage 0 or A, which can undergo curative therapies such as resection, ablation, or transplantation. The recent 5-year survival rate of HCC patients in Japan was 43% and the median survival time was 50 months. In addition, both incidence and mortality rates are drastically declining as a result of the successful surveillance program, careful diagnostic flow, and extensive repeated treatments. Key Message: Japan's successful model in the surveillance, diagnosis, and treatment of HCC should be adopted as widely as possible to improve the survival of HCC patients worldwide. abstract_id: PUBMED:12431721 Malnutrition is a risk factor in cirrhotic patients undergoing surgery. Cirrhotic patients may become candidates for elective and emergency surgery. This may be due to conditions requiring operations such as cholecystectomy, herniotomy, or gastrointestinal malignancies, more common in cirrhotics when compared with the general population, or to complications of the liver disease such as resectable hepatocellular carcinomas or surgical portosystemic shunts to treat portal hypertension. It has been estimated that 10% of cirrhotics undergo at least one operative procedure during the final 2 y of their lives. Many studies have documented a high risk of morbidity and mortality associated with surgical procedures in these patients, and several factors influencing the postoperative outcome have been identified. Malnutrition, which is frequently encountered in cirrhotic patients, has been shown to have an important impact on the surgical risk. A poor nutrition status also has been associated with a higher risk of complications and mortality in patients undergoing liver transplantation. Few data are available concerning the perioperative nutrition support in surgical cirrhotic patients. 
The results of these studies are sometimes encouraging in reporting that the nutrition therapy may improve the clinical outcome in cirrhotic patients undergoing general surgery and/or liver transplantation. The limited number of patients and their heterogeneity, however, do not allow definitive conclusions, and more research on this issue is needed. abstract_id: PUBMED:31531321 Surveillance and diagnosis of hepatocellular carcinoma: A systematic review. Background: Hepatocellular carcinoma (HCC) appears in most of cases in patients with advanced liver disease and is currently the primary cause of death in this population. Surveillance of HCC has been proposed and recommended in clinical guidelines to obtain earlier diagnosis, but it is still controversial and is not accepted worldwide. Aim: To review the actual evidence to support the surveillance programs in patients with cirrhosis as well as the diagnosis procedure. Methods: Systematic review of recent literature of surveillance (tools, interval, cost-benefit, target population) and the role of imaging diagnosis (radiological non-invasive diagnosis, optimal modality and agents) of HCC. Results: The benefits of surveillance of HCC, mainly with ultrasonography, have been assessed in several prospective and retrospective analysis, although the percentage of patients diagnosed in surveillance programs is still low. Surveillance of HCC permits diagnosis in early stages allows better access to curative treatment and increases life expectancy in patients with cirrhosis. HCC is a tumor with special radiological characteristics in computed tomography and magnetic resonance imaging, which allows highly accurate diagnosis without routine biopsy confirmation. The actual recommendation is to perform biopsy only in indeterminate nodules. Conclusion: The evidence supports the recommendation of performing surveillance of HCC in patients with cirrhosis susceptible of treatment, using ultrasonography every 6 mo. The diagnosis evaluation of HCC can be established based on noninvasive imaging criteria in patients with cirrhosis. abstract_id: PUBMED:32615276 Personalized surveillance for hepatocellular carcinoma in cirrhosis - using machine learning adapted to HCV status. Background & Aims: Refining hepatocellular carcinoma (HCC) surveillance programs requires improved individual risk prediction. Thus, we aimed to develop algorithms based on machine learning approaches to predict the risk of HCC more accurately in patients with HCV-related cirrhosis, according to their virological status. Methods: Patients with compensated biopsy-proven HCV-related cirrhosis from the French ANRS CO12 CirVir cohort were included in a semi-annual HCC surveillance program. Three prognostic models for HCC occurrence were built, using (i) Fine-Gray regression as a benchmark, (ii) single decision tree (DT), and (iii) random survival forest for competing risks survival (RSF). Model performance was evaluated from C-indexes validated externally in the ANRS CO22 Hepather cohort (n = 668 enrolled between 08/2012-01/2014). Results: Out of 836 patients analyzed, 156 (19%) developed HCC and 434 (52%) achieved sustained virological response (SVR) (median follow-up 63 months). Fine-Gray regression models identified 6 independent predictors of HCC occurrence in patients before SVR (past excessive alcohol intake, genotype 1, elevated AFP and GGT, low platelet count and albuminemia) and 3 in patients after SVR (elevated AST, low platelet count and shorter prothrombin time). 
DT analysis confirmed these associations but revealed more complex interactions, yielding 8 patient groups with varying cancer risks and predictors depending on SVR achievement. On RSF analysis, the most important predictors of HCC varied by SVR status (non-SVR: platelet count, GGT, AFP and albuminemia; SVR: prothrombin time, ALT, age and platelet count). Externally validated C-indexes before/after SVR were 0.64/0.64 [Fine-Gray], 0.60/62 [DT] and 0.71/0.70 [RSF]. Conclusions: Risk factors for hepatocarcinogenesis differ according to SVR status. Machine learning algorithms can refine HCC risk assessment by revealing complex interactions between cancer predictors. Such approaches could be used to develop more cost-effective tailored surveillance programs. Lay Summary: Patients with HCV-related cirrhosis must be included in liver cancer surveillance programs, which rely on ultrasound examination every 6 months. Hepatocellular carcinoma (HCC) screening is hampered by sensitivity issues, leading to late cancer diagnoses in a substantial number of patients. Refining surveillance periodicity and modality using more sophisticated imaging techniques such as MRI may only be cost-effective in patients with the highest HCC incidence. Herein, we demonstrate how machine learning algorithms (i.e. data-driven mathematical models to make predictions or decisions), can refine individualized risk prediction. Answer: Yes, surveillance for hepatocellular carcinoma (HCC) in HCV cirrhotic patients does improve treatment outcomes, mainly due to better clinical status at diagnosis. Studies have shown that surveillance programs lead to the detection of HCC at earlier and potentially curable stages, which allows for more effective treatment options and improved survival rates (PUBMED:20060710). Patients diagnosed with HCC during follow-up surveillance tend to have a lower Child-Pugh's score, indicating better liver function, and consequently, they have a better survival rate compared to those diagnosed incidentally (PUBMED:11100360). Additionally, surveillance has been associated with the detection of smaller tumors, without vascular invasion or metastases, and a higher likelihood of receiving curative treatments such as surgical resection, radiofrequency ablation, and percutaneous ethanol injection (PUBMED:20060710). Moreover, adherence to surveillance guidelines, such as six-monthly imaging, has been shown to improve overall survival, with routine surveillance leading to significantly better outcomes compared to patients not receiving regular surveillance (PUBMED:31323163). The benefits of surveillance, including early diagnosis and access to curative treatment, have been supported by evidence from systematic reviews and are recommended in clinical guidelines (PUBMED:31531321). However, despite the formalized recommendations, the prevalence of consistent HCC surveillance remains low, and efforts to improve guideline-concordant surveillance in clinical practice are needed (PUBMED:31124041). In summary, surveillance for HCC in patients with HCV-related cirrhosis improves treatment outcomes primarily due to the detection of HCC at an earlier stage, which is associated with better liver function and a higher eligibility for curative treatments, leading to improved survival rates.
Instruction: Do NSAIDs cause dyspepsia? Abstracts: abstract_id: PUBMED:34769120 Effect of NSAIDs Supplementation on the PACAP-, SP- and GAL-Immunoreactive Neurons in the Porcine Jejunum. Side effects associated with nonsteroidal anti-inflammatory drugs (NSAIDs) treatment are a serious limitation of their use in anti-inflammatory therapy. The negative effects of taking NSAIDs include abdominal pain, indigestion nausea as well as serious complications such as bleeding and perforation. The enteric nervous system is involved in regulation of gastrointestinal functions through the release of neurotransmitters. The present study was designed to determine, for the first time, the changes in pituitary adenylate cyclase-activating polypeptide (PACAP), substance P (SP) and galanin (GAL) expression in porcine jejunum after long-term treatment with aspirin, indomethacin and naproxen. The study was performed on 16 immature pigs. The animals were randomly divided into four experimental groups: control, aspirin, indomethacin and naproxen. Control animals were given empty gelatin capsules, while animals in the test groups received selected NSAIDs for 28 days. Next, animals from each group were euthanized. Frozen sections were prepared from collected jejunum and subjected to double immunofluorescence staining. NSAIDs supplementation caused a significant increase in the population of PACAP-, SP- and GAL-containing enteric neurons in the porcine jejunum. Our results suggest the participation of the selected neurotransmitters in regulatory processes of the gastrointestinal function and may indicate the direct toxic effect of NSAIDs on the ENS neurons. abstract_id: PUBMED:33390114 Synthetic Strategies Towards Safer NSAIDs Through Prodrug Approach: A Review. Non-steroidal anti-inflammatory drugs (NSAIDs) are agents that are used for various properties such as analgesic activity, anti-inflammatory activity, antipyretic activity, etc. But this class of drugs is associated with different side effects such as dyspepsia, gastroduodenal ulcers, gastrointestinal (GI) bleeding and perforation. The prodrug approach is quite beneficial to curb these side effects. Prodrugs are the inactive compounds that, on metabolism get converted into an active metabolite exhibiting desired activities. Commonly used approaches for synthesizing prodrugs are amide, esters, and mutual prodrugs by suppressing the free carboxylic groups responsible for these side effects. In this review, different schemes reported for the synthesis of NSAIDs that are devoid of undesired side effects such as irritation to gastric mucosa, gastrotoxicity, and ulcerogenicity have been compiled. Docking studies and the structure-activity relationship of some compounds are also discussed. The paper shall help the researchers to understand the methods to expedite the synthesis by carrying out substitutions of various groups of the parent compound and establish the mechanism of action of these derivatives of masking the unwanted side effects. abstract_id: PUBMED:16971869 NSAIDs-induced gastrointestinal damage. Review. Non-steroidal anti-inflammatory drugs (NSAIDs), along with analgesics, are the most widely prescribed medication in the world. However, NSAIDs cause numerous side effects, being the gastrointestinal (GI) events the most common ones with an increase of risk of serious GI complications between 2.5- and 5-fold, as compared with individuals not taking NSAIDs. 
Factors that increase the risk for serious events in NSAID-using patients include a history of ulcer or ulcer complications, advanced age (≥65 years), the use of high-dose NSAIDs, more than one NSAID, anticoagulants or corticosteroid therapy. If NSAID therapy is required, we must balance GI and cardiovascular risk and the therapy should be prescribed at the lowest possible dose and for the shortest period of time. The use of NSAIDs without gastroprotective co-therapy is considered appropriate in patients <65 years, not taking aspirin and having no GI history. In patients with GI risk factors, but no cardiovascular risk, coxibs or NSAIDs plus a proton pump inhibitor (PPI) or misoprostol are valid options. Patients with a history of ulcer bleeding should receive coxib plus PPI therapy and should be tested and treated for Helicobacter pylori infection. Coxib therapy has better GI tolerance than traditional NSAIDs, and PPI therapy is effective both in treatment and prevention of NSAID-induced dyspepsia and should be considered in patients who develop dyspepsia during NSAID or coxib therapy. abstract_id: PUBMED:23226825 Patient's Knowledge and Perception Towards the use of Non-steroidal Anti-Inflammatory Drugs in Rheumatology Clinic Northern Malaysia. Objective: In rheumatology, non-steroidal anti-inflammatory drugs (NSAIDs) have been widely prescribed and used. However, despite their clinical benefits in the management of inflammatory and degenerative joint disease, NSAIDs have considerable side effects, mostly affecting the upper gastrointestinal system, which therefore limit their use. This study was conducted to determine the patients' knowledge and perception regarding the use of NSAIDs. Methods: A total of 120 patients who attended the rheumatology clinic of Hospital Raja Permaisuri Bainun, Malaysia, and received NSAIDs for more than 3 months were interviewed irrespective of their rheumatological conditions. Patients' knowledge and perception of the side effects of NSAIDs were recorded. Result: Fifty-four percent of the patients obtained information regarding the side effects of NSAIDs, mostly from the rheumatologist, rheumatology staff nurse or other medical staff (75.4%). The remaining 45.8% were naive of such knowledge. Fifteen percent obtained the information by surfing the internet and 9.2% from printed media. Twenty-four (24.2%) patients experienced indigestion and/or stomach discomfort attributed to NSAID use. Two patients (1.7%) had hematemesis and melena once. Conclusion: This study shows that half of the patients who attended the rheumatology clinic were unaware of the side effects of NSAIDs. Available data showed that most of the knowledgeable patients are more conscientious and self-educated. This study also reveals the important roles of clinicians, trained staff nurses and pharmacists in providing guidance and knowledge about any medication taken by patients. abstract_id: PUBMED:18516356 Review of the cardiovascular safety of COXIBs compared to NSAIDs. There is no doubt that NSAIDs and COXIBs are the mainstay for managing pain and inflammation in arthritis. Overall, at therapeutically equivalent doses, both NSAIDs and COXIBs provide equivalent analgesic and anti-inflammatory efficacy. However, the gastrointestinal risk associated with NSAIDs is considerable. More recently, the cardiovascular risk associated with NSAIDs and COXIBs has become a concern.
Most patients, particularly the young, can benefit from NSAIDs without the risk of serious adverse gastrointestinal or cardiovascular events. However, patients with a previous history of serious gastrointestinal complications and the elderly, who could be at risk, do require alternatives. COXIBs have significant benefits over NSAIDs in reducing the incidence of serious gastrointestinal complications (perforations, ulcers and gastric bleeding). Currently two oral COXIBs are available, celecoxib and lumiracoxib, and one parenteral COXIB, parecoxib. Celecoxib has been on the market for longer and has the largest body of evidence. The older NSAIDs, such as meloxicam, with preferential COX-2 inhibition do not have good long-term evidence of reducing the incidence of serious gastrointestinal complications. However, these agents do have evidence of tolerability, ie, reducing the less-serious gastrointestinal effects, mainly dyspepsia. The South African Rheumatoid Arthritis Association's guidelines, amended in November 2005 recommend COXIBs for elderly patients (&gt; 60 years) with previous gastropathy and those on warfarin and/or corticosteroids, providing they do not have contra-indications. However, caution is advised when prescribing COXIBs for patients with risk factors for heart disease. These recommendations are very similar to those made by the National Institute for Clinical Excellence (NICE). In addition, it should be noted that for those patients without any cardiovascular complications but with gastrointestinal risk factors or on aspirin, it may be necessary to add a proton pump inhibitor (PPI). PPIs, however, provide little benefit for bleeding and ulceration of the lower intestine. One consequence of this low-grade bleeding is anaemia and a general feeling of malaise in patients with rheumatic disease. Current evidence suggests that COXIBs such as rofecoxib and celecoxib do not increase small intestinal permeability and that celecoxib does not cause lower intestinal bleeding and may be of benefit to those patients with lower gastrointestinal complications. In patients at risk for cardiovascular complications, both NSAIDs and COXIBs have been shown to increase the risk of myocardial infarctions (MI), hypertension and heart failure. Studies comparing COXIBs and non-specific NSAIDs should, however, be interpreted with caution. One needs to take into account the underlying baseline cardiovascular risk of the populations being compared. COXIBs appear to be prescribed preferentially to patients who were at an increased risk of cardiovascular events compared with patients prescribed non-specific NSAIDs. When the overall risk of cardiovascular complications is relatively low and an anti-inflammatory agent is required, current evidence suggests that celecoxib is an agent of choice because of its lower cardiovascular toxicity potential compared to NSAIDs and other COXIBs. abstract_id: PUBMED:15366677 Treatment of NSAIDs related dyspepsia NSAIDs-induced dyspepsia occurs in 10 to 30% of patients treated with NSAIDs, leading to discontinuation of treatment in 5 to 15%. Gastrointestinal tolerability of COX-2 selective inhibitors is better than for non-selective NSAIDs. Helicobacter pylori infection does not influence gastrointestinal tolerability of NSAIDs. Therefore, patients should not be tested and treated for H. pylori infection unless they have a history of peptic ulcer. Symptoms of NSAIDs-induced dyspepsia are poorly correlated with gastroduodenal mucosal damage. 
Therefore, upper gastrointestinal endoscopy should be performed only if symptom relief is not achieved with the first line empirical treatment and/or if symptoms suggestive of complications, such as bleeding, develop. Proton pump inhibitors appear to be the treatment of choice for NSAIDs-induced dyspepsia. abstract_id: PUBMED:24713339 Cardiovascular and gastrointestinal safety of NSAIDs All NSAIDs may induce cardiotoxicity. In this respect naproxen is relatively the safest choice. Selective cyclo-oxygenase-2-inhibitors (coxibs) are at least as effective in preventing clinically relevant gastrointestinal toxicity as non-selective NSAIDs plus a protonpump inhibitor (PPI). Non-selective NSAIDs plus a PPI are more effective in prevention of dyspepsia than coxibs. After a serious gastrointestinal complication while using NSAIDs, in principal the patient should no longer use NSAIDs. If needed, a coxib plus a PPI is the first choice. abstract_id: PUBMED:11336566 COX-2 inhibitors vs. NSAIDs in gastrointestinal damage and prevention. Non-steroidal anti-inflammatory drugs (NSAIDs) inhibit production of protective gastric mucosal prostaglandins and also have a direct topical irritant effect. In some patients this results in dyspepsia and development of gastroduodenal erosions and ulceration. The risk of ulcer complications, such as bleeding, perforation and death is increased approximately 4-fold in NSAID users. Patients at high risk of ulcer complications include the elderly, those taking anticoagulants, steroids and aspirin, those with a previous history of peptic ulceration and patients with concomitant serious medical problems. The interaction of NSAIDs with Helicobacter pylori (the major cause of peptic ulceration in non-NSAID users) is controversial and some studies suggest that H. pylori infection may even protect against NSAID-induced ulceration. Selective inhibitors of the inducible cyclooxygenase-2 (COX-2) enzyme spare COX-1 in the gastric mucosa and, hence, do not inhibit production of mucosal prostaglandins. COX-2-selective inhibitors are associated with a significant reduction in gastroduodenal damage compared with traditional NSAIDs. Proton pump inhibitors (PPI) are probably the best agents for healing and prevention of NSAID-induced ulcers. Preliminary studies suggest that COX-2 selective inhibitors, like traditional NSAIDs, may prevent lower gastrointestinal cancer. Further studies are needed but they may be useful in individuals at high risk of certain types of lower gastrointestinal malignancy with increased gastrointestinal tolerability and safety. abstract_id: PUBMED:31175745 Medico-social value of osteoarthritis. secondary prevention and treatment of osteoarthritis in comorbidity with chronic gastritis. Objective: Introduction: Medico-social significance of osteoarthritis is due to a number of factors, one of which is associated with the need for long-term anti-inflammatory therapy, which is associated with undesirable side effects. The aim: Identify the features of the course of chronic gastritis in patients taking selective NSAIDs for osteoarthritis. Patients And Methods: Materials and methods: Were examined 122 patients with osteoarthrosis, who had verified chronic gastritis in the anamnesis (50 males and 72 females), aged from 42 to 64. Control group included 40 patients with osteoarthrosis without gastroduodenal zone pathology in the anamnesis. 
For arthralgia relief, patients were prescribed meloxicam (average dose 12.5 ± 1.39 mg/day) or nimesulide (average dose 150 ± 14.91 mg/day). Results: Prescription of selective NSAIDs (meloxicam and nimesulide) raised the risk of NSAID gastropathy/dyspepsia 2.9-fold (P < 0.03) in patients with a history of chronic gastritis compared with patients without associated gastroduodenal zone pathology. Atrophy of the gastric mucous membrane was associated with a higher risk of erosive gastropathy (P > 0.05). Conclusions: To prevent gastropathy while taking NSAIDs, patients with a history of chronic gastritis are recommended to undergo gastroprotective therapy. abstract_id: PUBMED:8292467 Management of NSAIDs-related dyspepsia in the community. Little is known about how non-steroidal anti-inflammatory drugs (NSAIDs)-related dyspepsia is managed in general practice. A survey was conducted to ascertain the preferred methods of investigation and management of NSAIDs-related dyspepsia by GPs in the Leeds district in the summer of 1992. A total of 261/300 (87%) replied to the questionnaire. Most (87%) GPs discontinued NSAIDs, and 12% changed the drug. Only 42 (16%) suggested endoscopy. Answer: Yes, NSAIDs can cause dyspepsia. Non-steroidal anti-inflammatory drugs (NSAIDs) are associated with various side effects, including dyspepsia, gastroduodenal ulcers, gastrointestinal (GI) bleeding, and perforation (PUBMED:33390114; PUBMED:16971869; PUBMED:15366677; PUBMED:11336566). Dyspepsia occurs in 10 to 30% of patients treated with NSAIDs, leading to discontinuation of treatment in 5 to 15% (PUBMED:15366677). NSAIDs-induced dyspepsia is poorly correlated with gastroduodenal mucosal damage, and proton pump inhibitors appear to be the treatment of choice for NSAIDs-induced dyspepsia (PUBMED:15366677). Additionally, a study conducted in a rheumatology clinic in Northern Malaysia found that 24.2% of patients experienced indigestion and/or stomach discomfort attributed to NSAIDs use (PUBMED:23226825).
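One arithmetic aside on the osteoarthritis/chronic gastritis study in the abstract set above (PUBMED:31175745): its headline finding is an approximately 2.9-fold increase in the risk of NSAID gastropathy/dyspepsia in patients with a history of chronic gastritis. A risk ratio of that kind is simply the ratio of two incidence proportions, and the sketch below computes one with a log-scale Wald confidence interval; the 2x2 counts are hypothetical (the abstract does not report them), and whether the original authors used a risk ratio or another effect measure is not stated.

```python
import math

def risk_ratio(events_exposed, n_exposed, events_control, n_control, z=1.96):
    """Risk ratio with a Wald 95% CI on the log scale (illustrative only)."""
    p_exp = events_exposed / n_exposed
    p_ctl = events_control / n_control
    rr = p_exp / p_ctl
    se_log_rr = math.sqrt((1 - p_exp) / events_exposed +
                          (1 - p_ctl) / events_control)
    lower = math.exp(math.log(rr) - z * se_log_rr)
    upper = math.exp(math.log(rr) + z * se_log_rr)
    return rr, (lower, upper)

# Hypothetical counts: 43/122 events with prior gastritis vs. 5/40 controls
print(risk_ratio(events_exposed=43, n_exposed=122, events_control=5, n_control=40))
```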
Instruction: Proximal Tibia Reconstruction After Bone Tumor Resection: Are Survivorship and Outcomes of Endoprosthetic Replacement and Osteoarticular Allograft Similar? Abstracts: abstract_id: PUBMED:27103142 Proximal Tibia Reconstruction After Bone Tumor Resection: Are Survivorship and Outcomes of Endoprosthetic Replacement and Osteoarticular Allograft Similar? Background: The proximal tibia is one of the most challenging anatomic sites for extremity reconstructions after bone tumor resection. Because bone tumors are rare and large case series of reconstructions of the proximal tibia are lacking, we undertook this study to compare two major reconstructive approaches at two large sarcoma centers. Questions/purposes: The purpose of this study was to compare groups of patients treated with endoprosthetic replacement or osteoarticular allograft reconstruction for proximal tibia bone tumors in terms of (1) limb salvage reconstruction failures and risk of amputation of the limb; (2) causes of failure; and (3) functional results. Methods: Between 1990 and 2012, two oncologic centers treated 385 patients with proximal tibial resections and reconstruction. During that time, the general indications for those types of reconstruction were proximal tibia malignant tumors or bone destruction with articular surface damage or collapse. Patients who matched the inclusion criteria (age between 15 and 60 years old, diagnosis of a primary bone tumor of the proximal tibia treated with limb salvage surgery and reconstructed with endoprosthetic replacement or osteoarticular allograft) were included for analysis (n = 149). In those groups (endoprosthetic or allograft), of the patients not known to have reached an endpoint (death, reconstructive failure, or limb loss) before 2 years, 85% (88 of 104) and 100% (45 of 45) were available for followup at a minimum of 2 years. A total of 88 patients were included in the endoprosthetic group and 45 patients in the osteoarticular allograft group. Followup was at a mean of 9.5 (SD 6.72) years (range, 2-24 years) for patients with endoprosthetic reconstructions, and 7.4 (SD 5.94) years for patients treated with allografts (range, 2-21 years). The following variables were compared: limb salvage reconstruction failure rates, risk of limb amputation, type of failures according to the Henderson et al. classification, and functional results assessed by the Musculoskeletal Tumor Society system. Results: With the numbers available, after competitive risk analysis, the probability of failure for endoprosthetic replacement of the proximal tibia was 18% (95% confidence interval [CI], 10.75-27.46) at 5 years and 44% (95% CI, 31.67-55.62) at 10 years and for osteoarticular allograft reconstruction was 27% (95% CI, 14.73-40.16) at 5 years and 32% (95% CI, 18.65-46.18) at 10 years. There were no differences in terms of risk of failures at 5 years (p = 0.26) or 10 years (p = 0.20) between the two groups. Fifty-one of 88 patients (58%) with proximal tibia endoprostheses developed a reconstruction failure with mechanical causes being the most prevalent (32 of 51 patients [63%]). A total of 19 of 45 osteoarticular allograft reconstructions failed (42%) and nine of 19 (47%) of them were caused by early infection. Ten-year risk of amputation after failure for endoprosthetic reconstruction was 10% (95% CI, 5.13-18.12) and 11% (95% CI, 4.01-22.28) for osteoarticular allograft with no difference between the groups (p = 0.91). 
With the numbers available, there were no differences between the groups in terms of the mean Musculoskeletal Tumor Society score (26.58, SD 2.99, range, 19-30 versus 27.52, SD 1.91, range, 22-30; p = 0.13; 95% CI, -2,3 to 0.32). Mean extension lag was more severe in the endoprosthetic group than the osteoarticular allograft group: 13.56° (SD 18.73; range, 0°-80°) versus 2.41° (SD 5.76; range, 0°-30°; p &lt; 0.001; 95% CI, 5.8-16.4). Conclusions: Reconstruction of the proximal tibia with either endoprosthetic replacement or osteoarticular allograft appears to offer similar reconstruction failures rates. The primary cause of failure for allograft was infection and for endoprosthesis was mechanical complications. We believe that the treating surgeon should have both options available for treatment of patients with malignant or aggressive tumors of the proximal tibia. (S)he might consider an allograft in a younger patient to achieve better extensor mechanism function, whereas in an older patient or one with a poorer prognosis where return to function and ambulation quickly is desired, an endoprosthesis may be advantageous. Level Of Evidence: Level III, therapeutic study. abstract_id: PUBMED:37315710 Outcomes of proximal humeral reconstruction with cemented osteoarticular allograft in pediatric patients: a retrospective cohort study. Background: There is no agreement on the best choice of proximal humeral reconstruction following tumor resection in pediatric patients. We reviewed the functional outcomes, oncologic outcomes, and surgical complications in pediatric patients after proximal humeral reconstruction with cemented osteoarticular allograft. Methods: Eighteen patients aged 8-13 years who underwent proximal humeral osteoarticular allograft reconstruction following resection of primary bone sarcoma were included. The mean follow-up period was 88 ± 31.7 months. At the last follow-up assessment, limb function was evaluated based on shoulder range of motion, Musculoskeletal Tumor Society score, and Toronto Extremity Salvage Score. Tumor recurrence and postoperative complications were extracted from the patients' medical records. Results: Mean active forward flexion of the shoulder was 38° ± 18°. Mean active abduction was 48° ± 18°. Mean active external rotation was 23° ± 9°. The mean Musculoskeletal Tumor Society score was 73.4% ± 11.2%. The mean Toronto Extremity Salvage Score was 75.6% ± 12.9%. Local recurrence occurred in 1 patient. Metastasis developed after the operation in 2 additional patients. We recorded 6 postoperative complications in this series, including 1 superficial infection, 1 late-onset deep infection, 1 allograft fracture, 2 cases of nonunion, and 2 cases of shoulder instability. Two complications required allograft removal. Conclusion: In pediatric patients, reconstruction of the proximal humerus with cemented osteoarticular allograft results in acceptable oncologic and functional outcomes while the postoperative complication rate seems to be lower than that of other available techniques. abstract_id: PUBMED:28105636 Cost-utility of osteoarticular allograft versus endoprosthetic reconstruction for primary bone sarcoma of the knee: A markov analysis. Background: The most cost-effective reconstruction after resection of bone sarcoma is unknown. The goal of this study was to compare the cost effectiveness of osteoarticular allograft to endoprosthetic reconstruction of the proximal tibia or distal femur. Methods: A Markov model was used. 
Revision and complication rates were taken from existing studies. Costs were based on Medicare reimbursement rates and implant prices. Health-state utilities were derived from the Health Utilities Index 3 survey with additional assumptions. Incremental cost-effectiveness ratios (ICER) were used with less than $100 000 per quality-adjusted life year (QALY) considered cost-effective. Sensitivity analyses were performed for comparison over a range of costs, utilities, complication rates, and revisions rates. Results: Osteoarticular allografts, and a 30% price-discounted endoprosthesis were cost-effective with ICERs of $92.59 and $6 114.77. One-way sensitivity analysis revealed discounted endoprostheses were favored if allografts cost over $21 900 or endoprostheses cost less than $51 900. Allograft reconstruction was favored over discounted endoprosthetic reconstruction if the allograft complication rate was less than 1.3%. Allografts were more cost-effective than full-price endoprostheses. Conclusions: Osteoarticular allografts and price-discounted endoprosthetic reconstructions are cost-effective. Sensitivity analysis, using plausible complication and revision rates, favored the use of discounted endoprostheses over allografts. Allografts are more cost-effective than full-price endoprostheses. abstract_id: PUBMED:20020336 Proximal tibia osteoarticular allografts in tumor limb salvage surgery. Background: Resection of large tumors of the proximal tibia may be reconstructed with endoprostheses or allografts with fixation. Endoprosthetic replacement is associated with high failure rates and complications. Proximal tibia osteoarticular allografts after tumor resection allows restoration of bone stock and reconstruction of the extensor mechanism, but the long-term failure rates and complications are not known. Questions/purposes: We therefore determined (1) the middle- and long-term survival of proximal tibia osteoarticular allografts, (2) their complications, and (3) functional (Musculoskeletal Tumor Society score) and radiographic (International Society of Limb Salvage) outcomes in patients treated with this reconstruction. Patients And Methods: We retrospectively reviewed 52 patients (58 reconstructions including six repeat reconstructions) who underwent osteoarticular proximal tibia allograft reconstructions after resection of a bone tumor. The minimum followup of the 46 surviving patients was 72 months (mean, 123 months; range, 10-250 months). Survival of the allograft was estimated using the Kaplan-Meier method. We documented outcomes using the Musculoskeletal Tumor Society functional scoring system and the International Society of Limb Salvage radiographic scoring system. Results: Six patients died from tumor-related causes without allograft failure before the 5-year radiographic followup. At last followup, 32 of the 52 remaining allografts were still in place; 20 failed owing to infections, local recurrences, or fractures. Overall allograft survival was 65% at 5 and 10 years, with an average Musculoskeletal Tumor Society functional score of 26 points and an average radiographic result of 87%. Conclusions: Based on these data we believe proximal tibia osteoarticular allograft is a valuable reconstructive procedure for large defects after resection of bone tumors. Level Of Evidence: Level IV, therapeutic study. See the Guidelines for Authors for a complete description of levels of evidence. 
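Referring back to the cost-utility analysis above (PUBMED:28105636): its decision rule rests on the incremental cost-effectiveness ratio, ICER = (cost_A - cost_B) / (QALY_A - QALY_B), judged against a willingness-to-pay threshold of $100 000 per quality-adjusted life year. The sketch below is a minimal illustration of that arithmetic only; the dollar and QALY totals are invented placeholders, not outputs of the paper's Markov model.

```python
WTP_THRESHOLD = 100_000  # dollars per quality-adjusted life year (QALY)

def icer(cost_new, qaly_new, cost_ref, qaly_ref):
    """Incremental cost-effectiveness ratio of a new strategy vs. a reference."""
    delta_cost = cost_new - cost_ref
    delta_qaly = qaly_new - qaly_ref
    if delta_qaly == 0:
        raise ValueError("strategies yield identical QALYs; ICER is undefined")
    return delta_cost / delta_qaly

# Hypothetical lifetime totals for one simulated cohort (not the paper's inputs)
allograft = {"cost": 118_000.0, "qaly": 9.10}
discounted_endo = {"cost": 121_500.0, "qaly": 9.35}

value = icer(discounted_endo["cost"], discounted_endo["qaly"],
             allograft["cost"], allograft["qaly"])
verdict = "cost-effective" if value < WTP_THRESHOLD else "not cost-effective"
print(f"ICER = ${value:,.0f} per QALY ({verdict})")
```

The study's reported ICERs ($92.59 and $6 114.77 per QALY) sit far below the $100 000 threshold, which is why both osteoarticular allografts and price-discounted endoprostheses come out as cost-effective in that model.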
abstract_id: PUBMED:11607901 Reconstruction of the extensor mechanism after proximal tibia endoprosthetic replacement. The proximal tibia is a difficult area in which to perform a wide resection of a bone tumor. This difficulty is due to the intimate relationship of tumor in this location to the nerves and blood vessels of the leg, inadequate soft tissue coverage after endoprosthetic reconstruction, and the need to reconstruct the extensor mechanism. Competence of the extensor mechanism is the major determinant of functional outcome of these patients. Between 1980 and 1997, 55 patients underwent proximal tibia resection with endoprosthetic reconstruction for a variety of malignant and benign-aggressive tumors. Reconstruction of the extensor mechanism included reattachment of the patellar tendon to the prosthesis with a Dacron tape, reinforcement with autologous bone-graft, and attachment of an overlying gastrocnemius flap. All patients were followed for a minimum of 2 years; 6 patients (11%) had a transient peroneal nerve palsy, 4 patients (7.2%) had a fasciocutaneous flap necrosis, and 2 patients (3.6%) had a deep wound infection. Full extension to extension lag of 20 degrees was achieved in 44 patients, and 8 patients required secondary reinforcement of the patellar tendon. Function was estimated to be good to excellent in 48 patients (87%). Reattachment of the patellar tendon to the prosthesis and reinforcement with an autologous bone-graft and a gastrocnemius flap are reliable means to restore extension after proximal tibia endoprosthetic reconstruction. abstract_id: PUBMED:32409940 Successful mid- to long-term outcome after reconstruction of the extensor apparatus using proximal tibia-patellar tendon composite allograft. Purpose: The purpose of the study was to assess the outcomes of extensor mechanism reconstruction with proximal tibia-patellar tendon composite allograft. Methods: 24 consecutive patients treated with allograft-prosthetic composite for proximal tibia tumour resection and a conventional total knee arthroplasty were included. Extensor mechanism reconstruction was performed with a proximal tibia-patellar tendon composite allograft and the suture of the donor tendon to the remnant native patellar tendon. Function was evaluated by the Musculoskeletal Tumor Society score (MSTS) and range of motion. Western Ontario and MacMaster University (WOMAC) and visual analogue scale for pain also were used. Results: After a mean follow-up of 11.7 (range 3-15) years, mean MSTS score was 22.4 (range 20-30), mean flexion was 94.0° (range 84°-110°), and mean extension lag was 7.2° (range 0°-18°). The mean VAS-pain was 4.3 (range 2-6), and WOMAC score was 72.4 (range 58-100). There was no failure of the reconstructed extensor mechanism. Conclusion: Patellar tendon reconstruction with allogeneic tissue from the proximal tibia allograft sutured to the recipient's remnant patellar tendon provides the mechanical support needed for healing of the reconstructed extensor mechanism with a substantial functional benefit to stabilize active knee extension and successful reconstruction survival at long-term. Level Of Evidence: III. abstract_id: PUBMED:27126893 Predictors of soft-tissue complications and deep infection in allograft reconstruction of the proximal tibia. Background And Objectives: Reconstruction of the proximal tibia after wide resection of malignant tumors in the pediatric population is very challenging. 
Advocates of allograft reconstruction cite as advantages bone preservation, biological reconstruction that facilitates reattachment of the extensor mechanism and other soft-tissue structures, delayed use of a metallic prosthesis, and preservation of the distal femoral growth plate. However, complications are significant, infection being very common. Methods: Under an IRB-approved protocol, 32 patients (17 males, 15 females), 13 years old on average (range 2-20), who underwent 33 allograft reconstructions of the proximal tibia, were evaluated for the occurrence of soft-tissue complications and/or deep infection (infection affecting the allograft). Potential predictors of soft-tissue complications and deep infection, categorized as pre- and perioperative variables, were analyzed in relation to the risk for developing a soft-tissue complication or a deep infection. Results: The prevalence of soft-tissue complications was 48% (16/33). However, we were not able to identify any significant predictors. The prevalence of deep (allograft) infection was 15% (5/33). Multivariate logistic regression determined higher BMI at the index surgical procedure and lower pre-operative WBC to be independent predictors of deep infection. For each unit increase in BMI, the odds of deep infection increased by 40% (OR = 1.40; CI = 1.07-3.06; P < 0.05). For each one-unit (1,000) increase in the pre-operative white cell count, the odds of deep infection decreased by 70% (OR = 0.30; 95% CI = 0.01-0.89; P < 0.05). Four of the five deep infections were in patients with soft-tissue complications, mainly wound dehiscence. However, wound dehiscence or soft-tissue complications were not predictive of deep infection. Conclusion: Soft-tissue complications are prevalent in allograft reconstruction of the proximal tibia. Prevention is important as these may progress to deep infection. Careful attention to nutritional (BMI) and immunological status may help in patient selection for allograft reconstruction. If allograft reconstruction is opted for, efforts should focus on optimization of these factors as they proved to be independent predictors of subsequent deep infection. abstract_id: PUBMED:33559165 Osteoarticular allograft reconstruction after distal radius tumor resection: Reoperation and patient-reported outcomes. Background: The aims of this study were to evaluate the rate of wrist joint preservation, allograft retention, and factors associated with reoperation, and to report patient-reported outcomes after osteoarticular allograft reconstruction of the distal radius. Methods: Retrospective chart review identified 33 patients who underwent distal radius resection followed by osteoarticular allograft reconstruction, including 27 giant cell tumors and 6 primary malignancies. Ten patients with a preserved wrist joint completed the QuickDASH, PROMIS-CA physical function, and Toronto extremity salvage score (TESS) at a median of 13 years postoperatively. Results: The allograft retention rate was 89%, and an allograft fracture predisposed to conversion to wrist arthrodesis. The reoperation rate was 55%, and 36% underwent wrist arthrodesis at a median of 4.2 years following index surgery. The use of locking plate fixation was associated with lower reoperation and allograft fracture rates. Patients reported a median QuickDASH of 10.2 (range: 0-52.3), a mean PROMIS physical function of 57.8 (range: 38.9-64.5) and the median TESS was 95.5 (range: 67.0-98.4).
Conclusion: Osteoarticular allograft reconstruction results in acceptable long-term patient reported outcomes, despite a high revision rate. Allograft fixation with locking plates seems to reduce the number of reoperations and allograft fractures, along with reduction in wrist arthrodesis rates. abstract_id: PUBMED:28625098 Endoprosthetic replacement versus cement spacer in reconstruction of proximal humerus after tumor resection: Cost and benefits. Background: Proximal end of the humerus is a common site for both primary and metastatic bone tumors. Limb salvage with endoprosthetic replacement is the most common means of reconstruction, but it has been proved that cement spacer are more beneficial for inferior shoulder function. Thus, limb salvage can be replaced by cheaper spacers especially in poor societies. Patients And Methods: This study included 20 patients, of whom 14 were female, with a mean age of 40.4 years (range 12-60). Among the study population, six were diagnosed with osteosarcomas, two chondrosarcomas, two myeloma, two lymphoma, four metastatic carcinoma in the breast, two giant cell tumor, and two recurrent chondroplastoma. Limb salvage was successfully done in all patients: Tikhoff-Linberg type I in 12 cases and type V in 8. Endoprosthetic replacement was done in eight cases. An on-table fabricated cement spacer was used in 12 cases. Results: Follow-up ranged from 12 to 75 months, with a mean of 25.9 months. Functional outcome was almost comparable in both types of reconstruction, especially patient's satisfaction, with a mean function of 65%. Conclusion: A relatively expensive endoprosthesis could be replaced by a much cheaper cement spacer if their function is comparable. abstract_id: PUBMED:10872373 Endoprosthetic replacement of the proximal tibia. We have performed endoprosthetic replacement after resection of tumours of the proximal tibia on 151 patients over a period of 20 years. During this period limb-salvage surgery was achieved in 88% of patients with tumours of the proximal tibia. Both the implant and the operative technique have been gradually modified in order to reduce complications. An initial rate of infection of 36% has been reduced to 12% by the use of a flap of the medial gastrocnemius, to which the divided patellar tendon is attached. Loosening and breakage of the implant have been further causes of failure. We found that the probability of further surgical procedures being required was 70% at ten years and the risk of amputation, 25%. The development of a new rotating hinge endoprosthesis may lower the incidence of mechanical problems. Limb salvage for tumours of the proximal tibia is fraught with complications, but the good functional outcome in successful cases justifies its continued use. Answer: According to the study comparing endoprosthetic replacement and osteoarticular allograft reconstruction for proximal tibia bone tumors, the survivorship and outcomes of both methods appear to be similar (PUBMED:27103142). The probability of failure for endoprosthetic replacement of the proximal tibia was 18% at 5 years and 44% at 10 years, while for osteoarticular allograft reconstruction, it was 27% at 5 years and 32% at 10 years. There were no significant differences in terms of risk of failures at 5 years (p = 0.26) or 10 years (p = 0.20) between the two groups. The primary cause of failure for allograft was infection, and for endoprosthesis, it was mechanical complications. 
The ten-year risk of amputation after failure was similar for both groups, with no significant difference (p = 0.91). Functional results assessed by the Musculoskeletal Tumor Society score also showed no significant differences between the groups. However, mean extension lag was more severe in the endoprosthetic group than in the osteoarticular allograft group. The study suggests that both endoprosthetic replacement and osteoarticular allograft are viable options for the reconstruction of the proximal tibia following malignant or aggressive tumor resection. The choice between the two may depend on factors such as patient age, desired speed of return to function, and prognosis. For younger patients who may benefit from better extensor mechanism function, an allograft might be considered, whereas an endoprosthesis may be advantageous for older patients or those with a poorer prognosis who desire a quicker return to ambulation (PUBMED:27103142).
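For readers who want to see the mechanics behind the failure probabilities quoted above: the comparison study (PUBMED:27103142) reports reconstruction failure as a cumulative probability at 5 and 10 years from a competing-risk analysis, in which death without failure prevents failure from ever being observed. The sketch below is a minimal, plain-Python cumulative incidence (Aalen-Johansen-style) estimator run on a small synthetic dataset; the follow-up times and event codes are invented to show the mechanics and are not intended to reproduce the 18%/44% and 27%/32% estimates.

```python
def cumulative_incidence(times, events, event_of_interest, horizon):
    """Cumulative incidence of one event type in the presence of competing risks.

    times:  follow-up in years for each reconstruction (synthetic)
    events: 0 = censored, 1 = reconstruction failure, 2 = death without failure
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    event_free = 1.0   # probability of no event of any type just before t
    cif = 0.0          # cumulative incidence of the event of interest
    i = 0
    while i < len(data) and data[i][0] <= horizon:
        t = data[i][0]
        d_interest = d_competing = n_censored = 0
        while i < len(data) and data[i][0] == t:   # everyone whose follow-up ends at t
            code = data[i][1]
            if code == event_of_interest:
                d_interest += 1
            elif code != 0:
                d_competing += 1
            else:
                n_censored += 1
            i += 1
        cif += event_free * d_interest / n_at_risk
        event_free *= 1 - (d_interest + d_competing) / n_at_risk
        n_at_risk -= d_interest + d_competing + n_censored
    return cif

# Synthetic follow-up (years) and event codes for 12 reconstructions
times = [1.5, 2.0, 3.0, 4.0, 4.5, 5.0, 6.0, 7.5, 8.0, 9.0, 10.0, 12.0]
events = [1, 2, 0, 1, 0, 1, 2, 0, 1, 0, 0, 0]
print(cumulative_incidence(times, events, event_of_interest=1, horizon=5.0))
print(cumulative_incidence(times, events, event_of_interest=1, horizon=10.0))
```

Treating deaths as simple censoring and reading failure off 1 minus a Kaplan-Meier curve would overstate the failure probability, which is why the competing-risk formulation the authors used is the appropriate one for these estimates.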