Instruction: Is Lolium pollen from an urban environment more allergenic than rural pollen? Abstracts: abstract_id: PUBMED:12199966 Is Lolium pollen from an urban environment more allergenic than rural pollen? Background: Allergy to grass pollen is a highly prevalent allergic disease. Hay fever is more predominant in urban than in rural areas, despite the increasingly smaller areas of surrounding grassland. The effect of vehicle exhaust pollutants, mainly diesel particles, and other industrial sources of atmospheric pollution leading to plant damage has been implicated in this phenomenon. Objective: This study compared the in vivo and in vitro allergenicity of pooled samples of Lolium perenne grass pollen harvested from 10 different urban areas with that of samples of the same pollen from 10 neighboring rural areas. Methods: Lolium perenne pollen from different parts of a city and from a nearby rural area was harvested in 1999 and 2000 during the peak pollination period. Protein composition was compared by SDS-PAGE and in vivo and in vitro IgE-binding capacity was compared by skin-prick tests, RAST-inhibition and measurement of the major allergen, Lol p 5. Results: In the two years under study, urban samples contained approximately twice the protein content of the rural samples. Biological activity and Lol p 5 content was higher in urban pollen than in rural pollen and showed differences in the two years under study. Conclusions: The protein content and allergenicity of Lolium perenne pollen was higher in urban areas than in rural areas. These differences might explain why allergy to grass pollen is more prevalent in urban areas. This finding should be taken into account in diagnosis, preventive measures and specific immunotherapy. abstract_id: PUBMED:33028739 Exposure to pollen allergens in allergic rhinitis expressed by diurnal variation of airborne tree pollen in urban and rural area. Exposure to airborne pollen allergens results in allergic symptoms in subjects who are sensitized. The paper presents diurnal variation in the counts of airborne allergenic pollen of selected trees (Betula L., Alnus Mill., Corylus L., Fagus L. and Ulmus L.) in two localities differing in the degree of urbanization: the city of Szczecin (urban) and the village of Gudowo, West Pomerania in northwest Poland (rural) in the years 2012-2014. The measurements were made by the volumetric method using a Burkardtype sampler operating in a continuous mode. The greatest similarities in the beginning of the main pollen season between the two sites studied were observed for birch and elm trees, while in the length of the main pollen season, for birch and alder trees. Pollen counts of alder and hazel reached higher levels in the rural area, while the levels of ash tree pollen counts were higher in the urban area. The level of birch tree pollen counts was similar in the two sites studied. For the majority of taxons observed in the urban and rural areas the dynamics of hourly changes in tree pollen counts were similar. The pollination peak was noted in the daytime, usually in the afternoon. For ash and elm trees increased pollen counts were observed at nighttime, while the birch tree pollen counts were at a high level for most of the 24 h cycle. The knowledge of seasonal and diurnal variations in tree pollen counts is crucial for prevention in patients with allergic rhinitis, sensitized to tree pollen allergens. 
abstract_id: PUBMED:17195999 Airborne grass and ragweed pollen in the southern Panonnian Valley--consideration of rural and urban environment. The aims of this study were to describe and compare the characteristics of grass and ragweed airborne pollen in rural and urban areas in the southern Panonnian Valley. Airborne pollen data were collected by using Hirst type volumetric samplers simultaneously in rural and urban localities. If rural and urban environment are considered, both grass and ragweed daily pollen concentrations showed a significant degree of association. Observed parameters (pollen index, maximum daily concentration, number of days during which the pollen is recorded in the air and start day of main pollen season), showed year-to-year variations for both grass and ragweed aeropollen. Average values of these parameters were higher in the rural environment, but the difference was statistically significant only for grass pollen index. Such a low difference indicates the possibility for conducting dose response clinical trials based on data obtained from one sampling station. The least year-to-year variations as well as the least difference between rural and urban environment, have been observed in the case of start date of the MPS. Such a situation suggests the possibility for using data obtained in one type of environment for the development of long-term forecast models for an entire region. abstract_id: PUBMED:21531404 Human exposure to allergenic pollens: a comparison between urban and rural areas. Background: Pollinosis is found more frequently in urban areas than in rural environments. This could be partly related to the different types of pollen exposure in these dissimilar areas. The objective of this study was to compare the distribution of pollen in these environments across an urbanization gradient. Methods: Daily pollen abundances were obtained in France using Hirst-type sensors. Sampling was conducted from January to June in 2003 and 2006 in a rural area, a semi-rural area and in two urban areas, which were characterized by several urbanization criteria. Results: Total allergenic pollen abundance was higher in rural and semi-rural areas than in urban areas irrespective of the sampling year. Multivariate analyses showed that pollen exposures differed according to the type of area and were strongly explained by the urbanization gradient. Grass, ash, birch, alder, hornbeam, hazel and plantain pollen quantities exceeded the allergy threshold more often in rural settings than in urban areas. In urban areas, only plane pollen quantities exceeded the allergy threshold more often than in rural areas. Conclusions: Allergenic pollen exposure is higher in rural areas than in urban areas, and the most abundant pollen in each area did not originated from the same taxa. This result should be taken into account in epidemiological studies comparing allergies in rural and urban areas to adapt the panel of pollen extracts for human environmental exposure. In addition, this study highlights that some ornamental trees produce a large number of allergenic pollens and provide new sources of aeroallergens. abstract_id: PUBMED:33986328 Strong variations in urban allergenicity riskscapes due to poor knowledge of tree pollen allergenic potential. Exposure to allergenic tree pollen is an increasing environmental health issue in urban areas. However, reliable, well-documented, peer-reviewed data on the allergenicity of pollen from common tree species in urban environments are lacking. 
Using the concept of 'riskscape', we present and discuss evidence on how different tree pollen allergenicity datasets shape the risk for pollen-allergy sufferers in five cities with different urban forests and population densities: Barcelona, Montreal, New York City, Paris, and Vancouver. We also evaluate how tree diversity can modify the allergenic risk of urban forests. We show that estimates of pollen exposure risk range from 1 to 74% for trees considered to be highly allergenic in the same city. This variation results from differences in the pollen allergenicity datasets, which become more pronounced when a city's canopy is dominated by only a few species and genera. In an increasingly urbanized world, diverse urban forests offer a potentially safer strategy aimed at diluting sources of allergenic pollen until better allergenicity data are developed. Our findings highlight an urgent need for a science-based approach to guide public health and urban forest planning. abstract_id: PUBMED:28316024 Spatial distribution of allergenic pollen through a large metropolitan area. For nearly a decade, the majority of the world's population has been living in cities, including a considerable percentage of people suffering from pollen allergy. The increasing concentration of people in cities results in larger populations being exposed to allergenic pollen at the same time. There is almost no information about the spatial distribution of pollen within cities, as well as a lack of information about the possible impact on human health. To address this increasing need for pollen exposure studies on an intra-urban scale, a novel screening network of 14 weekly changed pollen traps was established within a large metropolitan area, Berlin, Germany. Gravimetric pollen traps were placed at a uniform street-level height from March until October 2014. Three pollen types of major allergenic importance for Central Europe, birch (Betula), grasses (Poaceae), and mugwort (Artemisia), were monitored. Remarkable spatial and temporal variations of pollen sedimentation within the city, and the influence of local urban sources, are shown. The observed differences between the trap with the overall highest and the trap with the overall lowest amount of pollen sedimentation were 245% for birch pollen, 306% for grass pollen, and 1962% for mugwort pollen. Differences of this magnitude can probably lead to different health impacts on allergy sufferers within one city. Therefore, pollen should preferably be monitored in two or more appropriate locations within large cities and as a part of natural air quality regulations. abstract_id: PUBMED:10469034 Composition and stability of allergenic extracts made from gamma-irradiated rye grass (Lolium perenne) pollen. Background: Phenol is commonly added to allergenic extracts as a bacteriostatic agent, but it is poisonous and also detrimental to proteins, which accelerates extract degradation. Sterilization by gamma-irradiation of the source material could be an alternative to the use of phenol. Objective: To analyse the potential effects of gamma-irradiation of pollen on the composition, potency, and stability of the resulting extract, and compare them with those of phenol. Methods: Ryegrass (Lolium perenne) pollen was sterilized by gamma-irradiation at a dose of 25 kGy. Extracts prepared from the irradiated pollen were then compared by electrophoresis techniques and RAST inhibition to extracts, without or with 0.5% phenol, from nonirradiated pollen.
In addition, proteolytic activity was compared in extracts from irradiated and nonirradiated pollen. To evaluate the stability of extracts on storage, they were analysed after forced degradation for up to 7 days at 37 degrees C. Results: When fresh extracts were analysed, there were no noticeable differences between the three types, as judged by immunoblotting and RAST inhibition experiments. However, on storage, extracts from irradiated pollen appeared to be superior to extracts from nonirradiated pollen, as some proteins were more stable in the former. This could be related to the lower proteolytic activity we have also observed in extracts from irradiated pollen. In contrast, extracts containing phenol degraded much faster, as proven by all our methods of investigation. Conclusion: Gamma-irradiation of pollen did not influence the IgE-binding capacity of the resulting extracts, but did yield extracts with somewhat improved stability, probably by reducing the proteolytic activity. It may be concluded that gamma-irradiation of the source material represents a good alternative to the use of phenol for the preparation of allergenic extracts. abstract_id: PUBMED:26452296 Fine-Scale Exposure to Allergenic Pollen in the Urban Environment: Evaluation of Land Use Regression Approach. Background: Despite the recent developments in physically and chemically based analysis of atmospheric particles, no models exist for resolving the spatial variability of pollen concentration at urban scale. Objectives: We developed a land use regression (LUR) approach for predicting spatial fine-scale allergenic pollen concentrations in the Helsinki metropolitan area, Finland, and evaluated the performance of the models against available empirical data. Methods: We used grass pollen data monitored at 16 sites in an urban area during the peak pollen season and geospatial environmental data. The main statistical method was generalized linear model (GLM). Results: GLM-based LURs explained 79% of the spatial variation in the grass pollen data based on all samples, and 47% of the variation when samples from two sites with very high concentrations were excluded. In model evaluation, prediction errors ranged from 6% to 26% of the observed range of grass pollen concentrations. Our findings support the use of geospatial data-based statistical models to predict the spatial variation of allergenic grass pollen concentrations at intra-urban scales. A remote sensing-based vegetation index was the strongest predictor of pollen concentrations for exposure assessments at local scales. Conclusions: The LUR approach provides new opportunities to estimate the relations between environmental determinants and allergenic pollen concentration in human-modified environments at fine spatial scales. This approach could potentially be applied to estimate retrospectively pollen concentrations to be used for long-term exposure assessments. Citation: Hjort J, Hugg TT, Antikainen H, Rusanen J, Sofiev M, Kukkonen J, Jaakkola MS, Jaakkola JJ. 2016. Fine-scale exposure to allergenic pollen in the urban environment: evaluation of land use regression approach. Environ Health Perspect 124:619-626; http://dx.doi.org/10.1289/ehp.1509761. abstract_id: PUBMED:23843211 Determination of allergenic load and pollen count of Cupressus arizonica pollen by flow cytometry using Cup a1 polyclonal antibody. 
Background: There is an increase in the incidence of pollen-related allergy, so information on pollen schedules would be a great asset for physicians seeking to improve the clinical care of patients. Cypress pollen sensitization shows a high prevalence among the causes of allergic rhinitis, and it is therefore of interest as a study model in which pollen count and allergenic load can be distinguished. In this work, we used a flow cytometry-based technique with a specific rabbit polyclonal antibody against Cup a1 to obtain both the Cupressus arizonica pollen count and the allergenic load, and compared the results with optical microscopy measurements. Methods: Airborne samples were collected with Burkard Spore-Trap and Burkard Cyclone samplers. Cupressus arizonica pollen was studied using the specific rabbit polyclonal antibody Cup a1, labelled with AlexaFluor® 488 or 750, and analysed by flow cytometry on both EPICS XL and Cyan ADP cytometers (Beckman Coulter®). The optical microscopy study was performed with a Leica optical microscope. Bland & Altman analysis was used to determine agreement between the two techniques. Results: Three different populations were identified based on rabbit polyclonal antibody Cup a1 staining. The main region (44.5%) showed 97.3% antibody recognition, a second region (25%) showed 28% recognition, and a third region (30.5%) showed 68% recognition. Immunofluorescence and confocal microscopy showed that the main region corresponds to whole pollen grains, the second region to pollen without exine, and the third region to smaller particles with allergenic properties. The pollen schedule showed a high correlation between optical microscopy and flow cytometry for the pollen count (p-value: 0.0008E-2) and for the smaller particles (p-value: 0.0002), and the Bland & Altman analysis showed good agreement between the two techniques (p-value: 0.0003). Conclusion: Determination of pollen count and allergenic load by flow cytometry represents an important tool in the determination of airborne respiratory allergens. We showed that not only whole pollen but also smaller particles could induce allergic sensitization. This is the first study in which flow cytometry is used for calculating pollen counts and allergenic load. abstract_id: PUBMED:23141838 First annual register of allergenic pollen in Talca, Chile. Background: There are no data on atmospheric pollen in Talca. In the present work, our aim is to describe the amounts of pollen grains in the atmosphere of the city of Talca that are likely to cause pollinosis in its inhabitants. Methods: A volumetric Hirst sampler (Burkard seven-day recording device) was used to study pollen levels. It was placed in the centre of Talca from May 2007 to April 2008. Results: The highest airborne presence of pollen, as measured in weekly averages, was Platanus acerifolia with a maximum weekly daily average of 203 grains/m³ registered during September and October. The second highest was Acer pseudoplatanus with a maximum weekly daily average of 116 grains/m³. Populus spp. had a maximum weekly daily average of 103 grains/m³. Olea europaea reached 19 grains/m³ in November. Grasses presented high levels of pollen counts with a maximum weekly daily average of 27 grains/m³ from the end of August until the end of January. Pollens of Plantago spp., Rumex acetosella and Chenopodium spp. had a similar distribution and were present from October to April with maximum weekly daily averages of 7 grains/m³, 7 grains/m³ and 3 grains/m³, respectively.
Significant concentrations of Ambrosia artemisiifolia were detected from February until April. Conclusion: The population of Talca was exposed to high concentrations of allergenic pollen, such as P. acerifolia, A. pseudoplatanus, and grasses in the months of August through November. The detection of O. europaea and A. artemisiifolia is important as these are emergent pollens in the city of Talca. Aerobiological monitoring will provide the community with reliable information about the level of allergenic pollens, improving treatment and quality of life of patients with respiratory allergy. Answer: Yes, Lolium pollen from an urban environment is more allergenic than rural pollen. A study compared the in vivo and in vitro allergenicity of pooled samples of Lolium perenne grass pollen harvested from urban areas with that of samples from neighboring rural areas. The results showed that urban samples contained approximately twice the protein content of the rural samples. Additionally, the biological activity and Lol p 5 content, which is a major allergen, was higher in urban pollen than in rural pollen. These differences might explain why allergy to grass pollen is more prevalent in urban areas (PUBMED:12199966).
Instruction: Do pain problems in young school children persist into early adulthood? Abstracts: abstract_id: PUBMED:15109969 Do pain problems in young school children persist into early adulthood? A 13-year follow-up. Design: In a longitudinal study, 335 children ages 8, 11 and 14, first studied in 1989 were followed-up on two occasions in 1991 and 2002. The subjects filled in questionnaires on pain, the first two times in school, the last as a postal survey. Purposes: To determine if headache and back pain during the school years were transitory or if they grew into pain problems in adulthood; to determine predictors of pain. Results: In the 2002 study, 59% of the women and 39% of the men reported pain at 21, 24 and 27 years. A total of 68 (52 women, 16 men) or 20% of the subjects reported pain symptoms in all three studies. The cumulative incidence rate for the presence of pain in the cohort studied was 31% for 1989-2002 and 43% for 1991-2002. Four of the 10 individuals with pain also reported signs of stress. Three predictors were found: reported back pain in 8-14-year-olds (p < 0.0001); reported headaches once a week or more in the same age group (p < 0.0001); and a positive response in the ages 10-16 to the question: "Do you often feel nervous?" (OR=2.1, 95% CI 1.3-3.4). When adjusted for age, sex and all psychosocial risk determinants studied in multiple logistic regression, a positive answer to this question was a significant predictor of pain in young adulthood. A positive response by the 10-16-year-olds to "Do you find it difficult to describe your feelings?" was a predictor of pathological anxiety in early adulthood, but stress perceived in childhood/adolescence did not predict future pain or stress. Conclusions: Since pain reports in childhood and early adolescence seem to be associated with the report of pain in early adulthood, more attention should be given to the way ill-health is managed in adolescence in this vulnerable group. abstract_id: PUBMED:20041057 Surgical management and outcome of tethered cord syndrome in school-aged children, adolescents, and young adults. Objective: The adolescent presentation of tethered cord syndrome (TCS) is well-recognized, but continues to pose significant diagnostic and management controversies. The authors conducted a retrospective study of clinical outcomes after surgical intervention in 24 school-aged children, adolescents, and young adults with TCS. Methods: All 83 patients with a lipomyelomeningocele (LMMC) underwent untethering surgery for caudal cord tethering between 1987 and 2007. The clinical charts and follow-up data were reviewed. Of these patients, 24 school-aged children, adolescents, and young adults with TCS were studied with respect to the clinical, radiologic, pathologic features, and surgical outcomes. Results: Untethering procedures were performed in 24 patients (age range, 7-25 years) for TCS of various origins (lipoma, lipomyelomeningocele, and tight filum terminale). Specific circumstances involving additional tugging of the already tight conus, and direct trauma to the back precipitated the onset of symptom in 50% of the patients. Diffuse and non-dermatomal leg pain, often referred to the anorectal region, was the most common presenting symptom. Progressive sensorimotor deficits in the lower extremities, as well as bladder and bowel dysfunction, were also common findings, but progressive foot and spinal deformities were noted less frequently. 
The most common tethered lesions were intradural lipomas, thickened filum and fibrous band adhesions into the placode sac. The surgical outcome was gratifying in relation to pain and motor weakness, but disappointing with respect to resolution of bowel and bladder dysfunction. Of the 24 patients with TCS, pre-operative deficits improved after surgery in 14 (58.3%), remained stable in 8 (33.4%), and worsened in 2 (8.3%). Conclusion: The pathologic lesions of tethered cord syndrome in school-aged children, adolescents, and young adults, are mostly intradural lipomas and tight filum. It is suggested that the degree of cord traction results in neurologic dysfunction in late life due to abnormal tension, aggravated by trauma or repeated tugging of the conus during exercise. Early diagnosis and adequate surgical release might be the keys to the successful outcome in school-aged children, adolescents, and young adults with TCS. abstract_id: PUBMED:29546193 Parent-reported Mental Health Problems and Mental Health Services Use in South Australian School-aged Children. Background: Monitoring and reporting childhood mental health problems and mental health services utilization over time provide important information to identify mental health related issues and to guide early intervention. This paper aims to describe the recent prevalence of parent-reported mental health problems among South Australian (SA) children; to identify mental health problems associated characteristics; and to describe mental health services utilization and its related characteristics among this population. Methods: Parent-reported mental health problems were assessed against the first item of the Strength and Difficulties Questionnaire. School-aged children were randomly sampled monthly and data were collected using a surveillance system between 2005 and 2015. Associations between mental health problems and various factors were analysed using univariable analysis and multivariable logistic regression modelling. Results: Prevalence of parent-reported mental health problems among children was 9.1% and 9.3% for children aged 5 to 11 years and children aged 12 to 15 years, respectively. No change in prevalence was observed during the past decade. Mental health problems were associated with male sex, long-term illness or pain, negative school experiences, not living with biological parents, and living in a rental dwelling. Less than half (48.7%) of the children with mental health problems received professional help. An increasing trend was found in mental health services utilisation among children aged 5 to 15 years. Utilization of mental health services was associated with male sex, older age, long-term illness or pain, and feeling unhappy at school. Conclusion: This study reports the prevalence of parent-reported mental and mental health services utilisation among SA school-aged children. Identified characteristics associated with mental health problems and mental health services utilisation provide useful information for the planning of catered population initiatives. abstract_id: PUBMED:35784655 Social, health-related, and environmental factors influencing sleep problems of children, adolescents and young adults. Sleep is a relevant factor for functioning and well-being of young people. The paper provides a differentiated description of sleep difficulties in this population group including social, health-related, and environmental factors. 
The analyses included n=6,728 11- to 17-year-olds of the KiGGS baseline study (2003-2006) and 6,072 young adults (age 18-31), who provided information relating sleep in the survey KiGGS Wave 2 (2014-2017). Information from 3,567 people was evaluated at two survey points. 22.0% of the 11- to 17-year-olds reported sleep difficulties. A significant impact for the sex (female), living with a single parent, and with siblings is reflected in the logistic regression. The risk for sleep difficulties increases significantly in the case of mental problems and pain. Among the 18- to 31-year-olds, 19.6% complained of difficulties falling asleep and sleeping through the night. In addition to sex, noise exposure, a low level of education, the professional situation, and living with children were reflected as important influencing factors in the logistic regressions. Over one third of those, who suffered from sleep problems as children and adolescents, also indicated sleep difficulties almost ten years later. The high prevalence of sleep problems and the associated health risks illustrate the high public health relevance of the topic. In addition to sex, health-related and environmental variables also turned out to be significant and need to be considered in the development of interventions. abstract_id: PUBMED:31826713 Perceived problems with academic achievement in school-aged children with recurrent pain - a longitudinal study. Aims: The overall aim was to explore the relationship between recurrent pain and perceived problems with academic achievement among boys and girls in middle and late elementary school. Methods: This 3-year follow-up study was based on data from the Study of Health in School-aged Children from Umeå (Sweden) and included children attending grade 6 in years 2003 and 2006, and a follow-up 3 years later in grade 9 (n = 1524, participation rate 90%). Results: Recurrent pain (head, stomach or back) at least doubled the odds of concurrent- and subsequent perceived problems with academic achievement. This applied for pain on a monthly and weekly basis, from single and multiple sites, and from each of the three studied pain sites. The odds increased with increasing pain frequency and number of pain sites. Problems with sleep, concentration or school absenteeism did not explain the association. Conclusions: Recurrent pain seems to be a potential predictor of perceived problems with academic achievement for school-aged children. This emphasises the importance of early identification and prevention of recurrent pain problems. abstract_id: PUBMED:38364615 Sleep problems in a population-based cohort of primary school age children with Cerebral Palsy. Aims: To examine sleep problems in a population-based sample of school-aged children (8-12yo) with Cerebral Palsy (CP) METHOD: Eighty-six children (mean 9 years, 5 months, SD = 1 year, 6 months; male = 60) with CP (Gross Motor Function Classification System; GMFCS I=46; II=21; III=9; IV=6; V=6) participated. Classifications/assessments included: Sleep Disturbance Scale for Children (SDSC), Gross Motor Function Measure (GMFM-66), Manual Ability Classification System (MACS), Communication Function Classification System (CFCS), Strengths and Difficulties Questionnaire (SDQ) and the Cerebral Palsy- Quality of Life (CP-QOL) Pain Impact subscale. Analysis included linear and logistic regression. Results: 38 (44 %) children were within the clinical range for sleep problems. 
Sleep problems were significantly associated with epilepsy, (95 % CI) = 14.48 (7.95 to 21.01), gross motor function, -0.13 (-0.26 to -0.01), manual ability, 7.26 (0.82 to 13.69), communication, 10.01 (2.21 to 17.80), child behaviour, 1.134 (0.74 to 1.53), and pain related QOL, 0.33 (0.12 to 0.53). For the multivariable model, sleep problems remained significantly associated with epilepsy, b (95 % CI) = 11.72 (4.88 to 18.57), child behaviour, 1.03 (0.65 to 1.41) and pain-related QOL, 0.21 (0.29 to 0.38). Conclusions: Sleep problems are common and associated with epilepsy, child behaviour and pain related QOL. abstract_id: PUBMED:36119161 Identification of musculoskeletal problems using pGALS examination in school going boys in Chennai, India. Background And Aim: pGALS (pediatric Gait Arms Legs Spine) is a useful screening tool in identifying musculoskeletal problems in children. Although musculoskeletal problems are common in children, only a small fraction of them have an underlying serious pathology. There is limited data on this subject from north India but none from south India. Our aim was to identify the prevalence of musculoskeletal problems in school going boys in the local community. Methods: We conducted a prospective cross-sectional study among school boys 7-17 years of age from September 2017 to July 2019, in Chennai. The parents or the boys were asked pGALS screening questions and this was followed by the extended pGALS examination including Beighton's scoring for hypermobility. If abnormalities in the screening tool were found, the subjects were then referred to a pediatric rheumatologist for further evaluation. Results: A total of 1543 boys were included in this study. Of these 37% of the study group were found to be hypermobile. According to our study, the prevalence of growing pains was 4.6%. Of 2 boys with suspected pathology 1 boy had enthesitis-related arthritis and another had slipped upper femoral epiphysis (SUFE). The estimated prevalence of juvenile idiopathic arthritis (JIA) was 65/100000. According to our study 12.3% of the children were obese. Conclusion: pGALS is a useful tool to screen for musculoskeletal problems in school going boys. The estimated prevalence of JIA was 65/100000. Of them, 37% of the children had hypermobility. Obese children were more likely to complain of musculoskeletal pain (P value 0.00). abstract_id: PUBMED:24567017 Long-term outcomes of adolescents with juvenile-onset fibromyalgia in early adulthood. Objective: This prospective longitudinal study examined the long-term physical and psychosocial outcomes of adolescents with juvenile-onset fibromyalgia (JFM), compared with healthy control subjects, into early adulthood. Methods: Adolescent patients with JFM initially seen at a pediatric rheumatology clinic (n = 94) and age- and gender-matched healthy control subjects (n = 33) completed online measures of demographic characteristics, pain, physical functioning, mood symptoms, and health care utilization at ∼6 years' follow-up (mean age: 21 years). A standard in-person tender-point examination was conducted. Results: Patients with JFM had significantly higher pain (P < .001), poorer physical function (P < .001), greater anxiety (P < .001) and depressive symptoms (P < .001), and more medical visits (P < .001)than control subjects. The majority (>80%) of JFM patients continued to experience fibromyalgia symptoms into early adulthood, and 51.1% of the JFM sample met American College of Rheumatology criteria for adult fibromyalgia at follow-up. 
Patients with JFM were more likely than control subjects to be married and less likely to obtain a college education. Conclusions: Adolescent patients with JFM have a high likelihood of continued fibromyalgia symptoms into young adulthood. Those who met criteria for fibromyalgia in adulthood exhibited the highest levels of physical and emotional impairment. Emerging differences in educational attainment and marital status were also found in the JFM group. JFM is likely to be a long-term condition for many patients, and this study for the first time describes the wide-ranging impact of JFM on a variety of physical and psychosocial outcomes that seem to diverge from their same-age peers. abstract_id: PUBMED:26208402 Victimization in childhood: General and specific associations with physical health problems in young adulthood. Objective: The goal of the current study was to examine the direct relationship between diverse types of childhood victimization and physical health problems in early adulthood, controlling for other common factors that contribute to physical health problems, including psychopathology and health risk behaviors. The associations between types of victimization (e.g., physical assault) and specific health problems (e.g., pain) were also examined. Methods: 2500 Swedish young adults reported on their exposure to victimization in childhood and their current mental and physical health as adults. Results: Using multiple regression, results indicated that the amount of childhood victimization was a significant predictor of health problems in adulthood, controlling for the significant negative effects of health risk behaviors and mental health problems on physical health. Logistic regressions indicated that physical assaults and sexual abuse were associated with all types of health problems assessed. Sleep problems were associated with almost all types of victimization history. Conclusions: The long-term effects of childhood victimization on physical health in adulthood are serious and warrant significant attention. Primary care providers should include assessments of past victimization as one way of screening for health risk. Health providers should also consider multiple points of intervention that may help to reduce physical illness. For example, providing a mental health intervention or social service support related to victimization experiences may not only address these difficulties, but also more broadly impact physical health as well. abstract_id: PUBMED:32065326 Trajectories of depressive symptoms through adolescence and young adulthood: social and health outcomes. Depressive symptoms display heterogeneous trajectories across adolescence, which can lead to different consequences. This study aimed to identify trajectories of depressive symptoms from adolescence to young adulthood, assessing the association with social and health outcomes at young adulthood. Adolescents born in 1990, enrolled in schools of Porto, Portugal, in 2003-2004 (EPITeen study) completed the Beck Depression Inventory II (BDI-II) at 13, 17 and 21 years. Mixed models and model-based clustering were used to describe the trajectories in the BDI-II score (n = 2010). Outcomes were assessed at age 21 years with self-administered questionnaires and face-to-face interviews (n = 1594). Odds ratios or regression coefficients, with 95% confidence intervals, were estimated using logistic and linear regression. 
Three trajectory classes of depressive symptoms were identified, similar in shape in both sexes: High (8.4%), Moderate (31.3%) and Low (60.2%). Participants in High or Moderate classes were more likely to describe lower scores of community involvement, more medical appointments during the last year, higher levels of pain and had higher probability of self-rating health as "good" or "fair or poor". Females in the High and Moderate classes were more likely to be current smokers, to describe ever using drugs and to report more sexual partners, emergency room visits and the use of antidepressants. The risk of depressive symptoms in adulthood is likely to be early determined in adolescence. The trajectory classes with higher levels of symptoms were associated with worse social and health outcomes. Answer: Yes, pain problems in young school children can persist into early adulthood. A longitudinal study that followed 335 children from the ages of 8, 11, and 14 for 13 years found that a significant number of subjects reported pain symptoms in all three studies conducted in 1989, 1991, and 2002. Specifically, 59% of women and 39% of men reported pain at ages 21, 24, and 27 years. The cumulative incidence rate for the presence of pain in the cohort studied was 31% for 1989-2002 and 43% for 1991-2002. The study concluded that since pain reports in childhood and early adolescence are associated with the report of pain in early adulthood, more attention should be given to managing ill-health in adolescence in this vulnerable group (PUBMED:15109969).
Instruction: Patterns of testicular cytology in men with primary infertility: any change since the Gulf War? Abstracts: abstract_id: PUBMED:15581166 Patterns of testicular cytology in men with primary infertility: any change since the Gulf War? Objective: To evaluate the patterns of testicular cytology in men with primary infertility, to compare the morphologic patterns between the periods 1990-1995, immediately after Gulf War II, and 1997-2001 and to determine whether there is a correlation between hormonal profile, testicular volume and morphologic pattern. Study Design: Retrospective study of men with primary infertility. History, complete physical examination, hormonal assay and testicular ultrasound were evaluated. A total of 545 patients had samples for testicular cytology obtained from both testes. The patient's consent was obtained in all cases. Smears were interpreted under light microscopy after treatment with Diff-Quik. A total of 104 healthy, fertile subjects were used for comparison of the hormonal profile and testicular volume. Results: The mean (+/- SD) age was 28.66 +/- 4.36 years and duration of marriage 4.4 +/- 4.36 years. There were 11.2% patients with normal cytology, 55.8% with hypospermatogenesis, 28.4% with testicular atrophy, 2.9% with Sertoli cells only and 1.7% with maturation arrest. A significant increase in hypospermatogenesis and decrease in the Sertoli cell-only pattern were noted in 1997-2001 when compared with 1990-1995. The mean left testicular volume was 10.53 +/- 5.51 mL3 in the infertile group vs. 15.2 +/- 4.97 in the fertile group (p < 0.003); right testicular volume was 10.84 +/- 4.77 vs. 15.15 +/- 5.31 (p < 0.003). The hormonal profile revealed higher luteinizing hormone and follicle-stimulating hormone levels in the infertile group vs. control group (8.53 +/- 9.03 and 16.44 +/- 19.243 vs. 6.98 +/- 4.53 and 7.37 +/- 6.63, respectively [p < 0.001]). Free testosterone was higher in the fertile group (39.69 +/- 12.76 vs. 20.28 +/- 8.5 [p < 0.000]). Conclusion: The majority of infertile males in our cohort had hypospermatogenesis; testicular atrophy was the next most common disorder. There was no major change in overall absolute numbers since the Gulf War. Testicular cytology by fine needle aspiration is a safe and well-tolerated complementary investigation for unexplained male infertility. abstract_id: PUBMED:16687262 Reproductive health of Gulf War veterans. In this review we summarize the scientific literature on reproductive health following deployment to the first Gulf war by armed service personnel. All the studies examined had methodological limitations, making interpretation difficult. Nonetheless we conclude that for male veterans there is no strong or consistent evidence to date for an effect of service in the first Gulf war on the risk of major, clearly defined, birth defects or stillbirth in offspring conceived after deployment. Effects on specific rare defects cannot be excluded at this stage since none of the studies had the statistical power to examine them. For miscarriage and infertility, there is some evidence of small increased risks associated with service, but the role of bias is likely to be strong. For female veterans, there is insufficient information to make robust conclusions, although the weight of evidence to date does not indicate any major problem associated specifically with deployment to the Gulf. 
None of the studies have been able to examine risk according particular exposures, and so possible associations with specific exposures for smaller groups of exposed veterans cannot be excluded. We suggest that the way forward to address the question of veterans' reproductive health with confidence in the future is prospective surveillance following deployment. Anything less will result in further problems of interpretation and continued anxiety for parents, as well as prospective parents, in the armed forces. abstract_id: PUBMED:16434429 Health consequences of the first Persian Gulf War on French troops. Since 1993, many studies on the health of Persian Gulf War Veterans (PGWV) have been undertaken. These studies have concluded that there has been an increased mortality due to external causes, no excess of recognized diseases, and no effect on PGWV children. When compared with the non-deployed, PGWV have reported a higher frequency of infertility as well as different symptoms, but a specific Gulf War syndrome was not identified. In October 2000, the French government asked an independent working group to analyse the scientific literature on PGWV health. The group concluded that an exhaustive study of French PGWV was to be undertaken. The objectives of this study were to describe the exposures of PGWV in the operations theatre, to report on the symptoms and diseases that occurred in PGWV and their children during and after the military campaign, and to explore the possibility of a Gulf War syndrome. This exhaustive cross-sectional study, which included all civilians and troops who served in the Gulf from August 1990 to July 1991, began in January 2002. Data were collected by postal self-administered questionnaires. A standardized clinical evaluation was performed by 27 clinics of occupational diseases and nine military hospitals. Symptoms and diseases which appeared after the campaign are described. To date, among 20,261 PGWV, 5,666 participated in the study (28%). The most frequent symptoms described since the return from the Gulf were headaches (83%), neurological or psychological symptoms, and back pain. Apart from well-known symptoms associations (respiratory, neurocognitive, psychological and musculo-skeletal syndromes), no other cluster was highlighted by our analysis. abstract_id: PUBMED:12523940 The study of reproductive outcome and the health of offspring of UK veterans of the Gulf war: methods and description of the study population. Background: The aim of this study is to determine whether Gulf war veterans and their partners are at increased risk of adverse reproductive events and whether their children have increased risk of serious health problems. Methods and response to the study are reported here. Methods: This was a retrospective cohort study of reproduction among UK Gulf war veterans, with a comparison cohort of Armed Service personnel who were not deployed to the Gulf. Reproductive history and details of children's health was collected by means of a validated postal questionnaire. A separate study of non-responders was conducted. Results: Questionnaires were returned by a total of 25,084 Gulf war veterans (24,379 men) and 19,003 (18,439 men) subjects in the comparison group. After adjusting for undelivered mail the response rate was 53% for Gulf war veterans and 42% for non-Gulf veterans among men, 72% and 60% among women. Data from the non-responder study suggests that failure to respond to the main survey was largely unrelated to reproduction. 
11,155 (46%) male Gulf war veterans and 7,769 (42%) male non-Gulf war veterans had conceived, or attempted to conceive, since the Gulf war. They reported 16442 and 11517 pregnancies respectively in that period. For women, 313 (44%) Gulf veterans and 235 (42%) non-Gulf veterans reported 484 and 377 pregnancies respectively conceived since the Gulf war. Conclusions: This survey enabled collection of information on a range of reproductive outcomes from veterans of the Gulf war and a suitably matched comparison cohort. Although the response rate for men was disappointing, selection bias related to reproduction does not appear to be strong in these data. abstract_id: PUBMED:18442369 Reproductive health and pregnancy outcomes among French gulf war veterans. Background: Since 1993, many studies on the health of Persian Gulf War veterans (PGWVs) have been undertaken. Some authors have concluded that an association exists between Gulf War service and reported infertility or miscarriage, but that effects on PGWV's children were limited. The present study's objective was to describe the reproductive outcome and health of offspring of French Gulf War veterans. Methods: The French Study on the Persian Gulf War (PGW) and its Health Consequences is an exhaustive cross-sectional study on all French PGWVs conducted from 2002 to 2004. Data were collected by postal self-administered questionnaire. A case-control study nested in this cohort was conducted to evaluate the link between PGW-related exposures and fathering a child with a birth defect. Results: In the present study, 9% of the 5,666 Gulf veterans who participated reported fertility disorders, and 12% of male veterans reported at least one miscarriage among their partners after the PGW. Overall, 4.2% of fathers reported at least one child with a birth defect conceived after the mission. No PGW-related exposure was associated with any birth defect in children fathered after the PGW mission. Concerning the reported health of children born after the PGW, 1.0% of children presented a pre-term delivery and 2.7% a birth defect. The main birth defects reported were musculoskeletal malformations (0.5%) and urinary system malformations (0.3%). Birth defect incidence in PGWV children conceived after the mission was similar to birth defect incidence described by the Paris Registry of Congenital Malformations, except for Down syndrome (PGWV children incidence was lower than Registry incidence). Conclusion: This study did not highlight a high frequency of fertility disorders or miscarriage among French PGW veterans. We found no evidence for a link between paternal exposure during the Gulf War and increased risk of birth defects among French PGWV children. abstract_id: PUBMED:16832610 Self-reported reproductive outcomes among male and female 1991 Gulf War era US military veterans. Background: Following the 1991 Gulf War, some veterans expressed concerns regarding their reproductive health. Our objective was to assess whether an association exists between deployment to the 1991 Gulf War and self-reported adverse pregnancy outcomes. Methods: Using a modified Dillman technique with telephone follow-up, we conducted a survey via a postal questionnaire from February 1996-August 1997 to compare selected reproductive outcomes among 10,000 US veterans deployed to the 1991 Gulf War with those of 10,000 nondeployed Gulf War era veterans. Results: A total of 8742 individuals responded to the survey, a response rate of 51 percent. 
Using multivariable analyses, results showed no differences in number of reported pregnancies between Gulf War veterans and nondeployed veterans. Among 2233 female and 2159 male participants, there were no differences in birth weight of infants born to Gulf War veterans compared with nondeployed Gulf War era veterans. In multivariable models, male and female Gulf War veterans did not significantly differ in risk for ectopic pregnancies, stillbirths, or miscarriages when compared with nondeployed veterans of the same era. Conclusions: These results do not suggest an association between service in the 1991 Gulf War and adverse reproductive outcomes for both male and female veterans during the 4 years after the war. abstract_id: PUBMED:15253919 Infertility among male UK veterans of the 1990-1 Gulf war: reproductive cohort study. Objectives: To examine the hypothesis that, theoretically at least, exposure to toxicants of the type present in the Gulf war could affect spermatogenesis, which might be observed as increased levels of infertility. Design: Retrospective reproductive cohort analysis. Setting: Male UK Gulf war veterans and matched comparison group of non-deployed servicemen, surveyed by postal questionnaire. Participants: 42,818 completed questionnaires were returned, representing response rates of 53% for Gulf veterans and 42% for non-Gulf veterans; 10,465 Gulf veterans and 7376 non-Gulf veterans reported fathering or trying to father pregnancies after the Gulf war. Main Outcome Measures: Failure to achieve conceptions (type I infertility) or live births (type II infertility) after the Gulf war, having tried for at least a year and consulted a doctor; time to conception among pregnancies fathered by men not reporting fertility problems. Results: Risk of reported infertility was higher among Gulf war veterans than among non-Gulf veterans (odds ratio for type I infertility 1.41, 95% confidence interval 1.05 to 1.89; type II 1.50, 1.18 to 1.89). This small effect was constant over time since the war and was observed whether or not the men had fathered pregnancies before the war. Results were similar when analyses were restricted to clinically confirmed diagnoses. Pregnancies fathered by Gulf veterans not reporting fertility problems also took longer to conceive (odds ratio for > 1 year 1.18, 1.04 to 1.34). Conclusions: We found some evidence of an association between Gulf war service and reported infertility. Pregnancies fathered by Gulf veterans with no fertility problems also reportedly took longer to conceive. abstract_id: PUBMED:32002508 Promoting men's awareness, self-examination, and help-seeking for testicular disorders: a systematic review of interventions. Background: Testicular cancer (TC) is among the most commonly diagnosed cancers in men aged 15-40 years. The incidence of TC is on the rise. Benign testicular disorders, such as testicular torsion and epididymitis, can lead to testicular ischemia, sepsis, and infertility if left untreated. This updated systematic review aims to evaluate the effectiveness of studies promoting men's knowledge and awareness of testicular disorders and/or self-examination, behaviours and/or intentions to examine their testes, and help-seeking behaviours and/or intentions for testicular disorder symptoms. 
Methods: Academic Search Complete, Medline, CINAHL, PsycINFO, ERIC, the Cochrane Library, the World Health Organisation International Clinical Trials Registry Platform, and Clinicaltrials.gov were searched for studies published between April 2018 and August 2023. Methodological quality was assessed and results were synthesised meta-narratively. Results: Five studies were included. The majority of the reviewed interventions were successful in increasing men's awareness of TC and self-examination, including a PowerPoint presentation, an online educational brochure, video-assisted teaching, a motivational video, and a virtual reality game. Only one study addressed help-seeking for testicular symptoms and promoted men's awareness of benign as well as malignant testicular diseases. Conclusions: This review highlights the importance of evaluating innovative educational interventions aimed at younger men, whilst raising their awareness of testicular disorders and increasing their help-seeking intentions for testicular disorder symptoms. Given the lack of consensus around scheduled testicular self-examination among younger men, clinicians are encouraged to instruct men to familiarise themselves with the look and feel of their own testes and to seek timely medical attention for abnormalities. Registration: The protocol of the previous version of this review was registered with the International Prospective Register of Systematic Reviews (PROSPERO) under the registration number CRD42018093671. abstract_id: PUBMED:14665866 Bilateral testicular microlithiasis predicts the presence of the precursor of testicular germ cell tumors in subfertile men. Purpose: A high prevalence of testicular microlithiasis has been described in adolescent and adult clinical cases of invasive testicular germ cell tumor (TGCT), that is seminomas and nonseminomas. However, to our knowledge it remains to be established whether testicular microlithiasis also indicates the presence of the pre-invasive lesion of this cancer, known as carcinoma in situ (CIS). We determined the predictive value of unilateral and bilateral testicular microlithiasis for CIS in subfertile men, a known risk population for TGCTs (approximately 1%). Materials And Methods: In a retrospective cross-sectional study the association between testicular microlithiasis and CIS was studied in a group of 263 men referred for subfertility. Testicular microlithiasis and CIS were diagnosed in all men by scrotal ultrasound and in testicular histology specimens as part of the routine evaluation of all patients. Results: Of the 263 subfertile men 53 (20%) had testicular microlithiasis. No CIS or TGCT was identified in the 23 men with unilateral testicular microlithiasis. In contrast, 6 of the 30 men (20%) with bilateral testicular microlithiasis were diagnosed with CIS. Therefore, the prevalence of CIS in subfertile men with bilateral testicular microlithiasis is significantly higher than in patients without testicular microlithiasis (1 of 210, 0.5%) and with unilateral testicular microlithiasis (0 of 23, 0%) (p <0.0001). Conclusions: Bilateral testicular microlithiasis is indicative for CIS in subfertile men. Since these men are at particular risk for invasive TGCT, an assessment of testicular microlithiasis is a valuable tool for the early diagnosis of this disease. abstract_id: PUBMED:33856180 Men's Health: Scrotal and Testicular Conditions. Scrotal and testicular conditions include benign masses, infections, testicular torsion, and testicular cancer. 
Common palpable benign scrotal masses include spermatocele, varicocele, and hydrocele. Most patients with these masses require no treatment. Some varicoceles are associated with impaired fertility, probably due to an increase in scrotal temperature that leads to testicular hyperthermia, oxidative stress, and reduced spermatogenesis. Patients with documented infertility or scrotal pain should be referred to a urology subspecialist for consideration of surgical management. Epididymitis and epididymo-orchitis are caused by infection with Neisseria gonorrhoeae, Chlamydia trachomatis, or enteric bacteria. Antibiotics and supportive measures (eg, scrotal elevation, bed rest) are recommended for management of acute epididymitis. Testicular torsion is a urologic emergency that requires rapid surgical exploration and orchidopexy to reduce the risk of testicular loss due to ischemia. Salvage rates exceed 90% when surgical exploration is performed within 6 hours of symptom onset. Testicular cancer commonly manifests as a painless, incidentally discovered mass in a single testis. Ultrasonography is recommended to confirm the diagnosis. The recommended primary intervention for a suspected malignant testicular tumor is radical inguinal orchiectomy. Answer: The study "Patterns of testicular cytology in men with primary infertility: any change since the Gulf War?" (PUBMED:15581166) aimed to evaluate the patterns of testicular cytology in men with primary infertility and to compare the morphologic patterns between the periods 1990-1995, immediately after Gulf War II, and 1997-2001. The study also sought to determine whether there was a correlation between hormonal profile, testicular volume, and morphologic pattern. The results showed that the majority of infertile males in the cohort had hypospermatogenesis, with testicular atrophy being the next most common disorder. There was a significant increase in hypospermatogenesis and a decrease in the Sertoli cell-only pattern in 1997-2001 compared to 1990-1995. However, the study concluded that there was no major change in overall absolute numbers since the Gulf War. Testicular cytology by fine needle aspiration was found to be a safe and well-tolerated complementary investigation for unexplained male infertility.
Instruction: Role of central obesity in risk stratification after an acute coronary event: does central obesity add prognostic value to the Global Registry of Acute Coronary Events (GRACE) risk score in patients with acute coronary syndrome? Abstracts: abstract_id: PUBMED:24367827 Role of central obesity in risk stratification after an acute coronary event: does central obesity add prognostic value to the Global Registry of Acute Coronary Events (GRACE) risk score in patients with acute coronary syndrome? Background: Accurate risk stratification is an important step in the initial management of acute coronary syndrome (ACS), and current guidelines recommend the use of risk scores, such as the Global Registry of Acute Coronary Events risk score (GRACE RS). Recent studies have suggested that abdominal obesity is associated with cardiovascular events in patients with ACS. However, little is known about the additional value of abdominal obesity beyond risk scores. The aim of our study was thus to assess whether waist circumference, a surrogate of abdominal adiposity, adds prognostic information to the GRACE RS. Methods: This was a retrospective cohort study of ACS patients admitted consecutively to a cardiac care unit between June 2009 and July 2010. The composite of all-cause mortality or myocardial reinfarction within six months of index hospitalization was used as the endpoint for the analysis. Results: A total of 285 patients were studied, 96.1% admitted for myocardial infarction (with or without ST elevation) and 3.9% for unstable angina. At the end of the follow-up period, 10 patients had died and the composite endpoint had been reached in 27 patients (9.5%). More than 70% of the study population were obese or overweight, and abdominal obesity was present in 44.6%. The GRACE RS showed poor predictive accuracy (area under the curve 0.60), and most of the GRACE variables did not reach statistical significance in multivariate analysis. The addition of waist circumference to the GRACE RS did not improve its discriminatory performance. Conclusion: Abdominal obesity does not add prognostic information to the GRACE RS to predict six-month mortality or myocardial reinfarction. abstract_id: PUBMED:34537077 Comparison of various insulin resistance surrogates on prognostic prediction and stratification following percutaneous coronary intervention in patients with and without type 2 diabetes mellitus. Background: Insulin resistance (IR), evaluation of which is difficult and complex, is closely associated with cardiovascular disease. Recently, various IR surrogates have been proposed and proved to be highly correlated with IR assessed by the gold standard. It remains unclear whether different IR surrogates perform equivalently on prognostic prediction and stratification following percutaneous coronary intervention (PCI) in non-ST-segment elevation acute coronary syndrome (NSTE-ACS) patients with and without type 2 diabetes mellitus (T2DM). Methods: The present study recruited patients who were diagnosed with NSTE-ACS and successfully underwent PCI. IR surrogates evaluated in the current study included triglyceride-glucose (TyG) index, visceral adiposity index, Chinese visceral adiposity index, lipid accumulation product, and triglyceride-to-high density lipoprotein cholesterol ratio, calculations of which conformed to previous studies.
The observational endpoint was defined as the major adverse cardiovascular and cerebrovascular events (MACCE), including cardiac death, non-fatal myocardial infarction, and non-fatal ischemic stroke. Results: 2107 patients (60.02 ± 9.03 years, 28.0% female) were ultimately enrolled in the present study. A total of 187 (8.9%) MACCEs were documented during the 24-month follow-up. Whether regarding the lower median as reference [hazard ratio (HR) 3.805, 95% confidence interval (CI) 2.581-5.608, P < 0.001] or evaluating a 1 normalized unit increase (HR 1.847, 95% CI 1.564-2.181, P < 0.001), the TyG index remained the strongest risk predictor for MACCE, independent of confounding factors. The TyG index showed the most powerful diagnostic value for MACCE with the highest area under the receiver operating characteristic curve of 0.715. The addition of the TyG index, compared with other IR surrogates, exhibited the greatest enhancement in risk stratification for MACCE on the basis of a baseline model (Harrell's C-index: 0.708 for baseline model vs. 0.758 for baseline model + TyG index, P < 0.001; continuous net reclassification improvement: 0.255, P < 0.001; integrated discrimination improvement: 0.033, P < 0.001). The results were consistent in subgroup analysis where similar analyses were performed in patients with and without T2DM, respectively. Conclusion: The TyG index, which is most strongly associated with the risk of MACCE, can serve as the most valuable IR surrogate for risk prediction and stratification in NSTE-ACS patients receiving PCI, with and without T2DM. abstract_id: PUBMED:32943761 Association of obesity indices with in-hospital and 1-year mortality following acute coronary syndrome. Background/objectives: According to the "obesity paradox", adults with obesity have a survival advantage following acute coronary syndrome, compared with those without obesity. Previous studies focused on peripheral obesity, and whether this advantage is conferred by central obesity is unknown. The objective of this study was to describe the association of peripheral and central obesity indices with risk of in-hospital and 1-year mortality following acute coronary syndrome (ACS). Subjects/methods: Gulf COAST is a prospective ACS registry that enrolled 4044 patients age ≥18 years from January 2012 through January 2013, across 29 hospitals in four Middle Eastern countries. Associations of indices of peripheral obesity (body-mass index [BMI]) and central obesity (waist circumference [WC] and waist-to-height ratio [WHtR]) with mortality following ACS were analyzed in logistic regression models (odds ratio, 95% CI) with and without adjustment for Global Registry of Acute Coronary Events risk score. Results: Of 3882 patients analyzed (mean age: 60 years; 33.3% women [n = 1294]), the prevalence of obesity was 34.5% (BMI ≥ 30.0 kg/m²), 72.2% (WC ≥ 94.0 cm [men] or ≥ 80.0 cm [women]) and 90.0% (WHtR ≥ 0.5). In adjusted models, deciles of obesity indices showed higher risk of mortality at extreme versus intermediate deciles (U-shaped). When defined by conventional cut-offs, peripheral obesity (BMI ≥ 30.0 versus 18.5-29.9 kg/m²) showed inverse association with risk of in-hospital mortality (0.64; 95% CI, 0.42-0.99; P = 0.04; central obesity showed trend toward reduced mortality). In contrast, for risk of 1-year mortality, all indices showed inverse association.
Obesity, defined by presence of all three indices, versus nonobesity showed inverse association with risk of 1-year mortality (0.52; 95% CI, 0.35-0.75; P = 0.001). Results were similar among men and women. Conclusion: The degree of obesity paradox following ACS depends on the obesity index and follow-up time. Obesity indices may aid in risk stratification of mortality following ACS. abstract_id: PUBMED:23982542 Risk factors of acute coronary syndrome among Bangladeshi people. Two hundred consecutive patients of acute coronary syndrome aged 30-80 years (inclusive) admitted to the coronary care unit of the National Institute of Cardiovascular Diseases, Dhaka, were prospectively recruited as cases. Two hundred controls were prospectively selected from individuals attending the Shaheed Suhrawardy Medical College Hospital, either admitted for elective surgery or admitted to the medicine ward for conditions that were unlikely to confound a comparative analysis. Cases had significantly lower yearly incomes than controls, and a significantly higher number of cases were employed in sedentary work. Smoking was an important risk factor: about 70% of cases and 45% of controls had previously smoked cigarettes. Consumption of smokeless tobacco was another risk factor. The high risk of IHD in developing countries has been attributed to low consumption of fruit and vegetables; in our study, regular consumption of fruit (taking fruit at least 4 days a week) was reported by 20.5% of cases and 33% of controls (OR 0.524, 95% CI 0.333-0.823; p=0.005). Around 23.5% of the cases were diabetic compared with 4.5% of controls. We found known dyslipidemia in 5% of cases and 1.5% of controls. The body-mass index of cases was 24.68±3.06 kg/m² versus 20.54±4.37 kg/m² in controls. The waist-to-hip ratio (WHR) was also significantly greater in cases (0.98±0.05) than in controls (0.93±0.102) (p<0.001). About 31% of cases and 2% of controls had a past history of myocardial infarction. The most predictive independent variables were previous smoking (p<0.001), WHR (p<0.001), history of hypertension (p<0.001), and income (p<0.001). Smoking and WHR were associated with the highest risks. The variables revealed to be significantly associated with acute coronary syndrome by bivariate analyses were all entered into the model directly. Eleven variables were entered into the model; of these, age, occupation, yearly family income, fruit consumption, known hypertension, known DM, known dyslipidemia, previous MI, previous smoking, BMI and waist-to-hip ratio were found to be independent predictors of acute coronary syndrome. The study found that smoking tobacco, diabetes mellitus, hypertension, visceral obesity and low fruit intake are the important factors of acute coronary syndrome in Bangladesh. abstract_id: PUBMED:33623654 Prevalence of atherosclerosis risk factors in Egyptian patients with acute coronary syndrome: final data of the nationwide cross-sectional 'CardioRisk' project. Background: Little is known about the prevalence of atherosclerosis risk factors in Egyptian patients with acute coronary syndromes (ACS). Objective: Describe the prevalence of these risk factors with focus on gender-specific data and patients with premature presentation. Methods: From November 2015 to August 2018, data were collected from 3224 patients with ACS in 30 coronary care centers covering 11 governorates across Egypt, with a focus on premature ACS. Results: The vast majority were males (74%) and the most prevalent age group was 56-65 years, representing 37% of the whole study population.
Among female patients, 92% were post-menopausal. The prevalence of premature ACS was 51%. Forty-five percent of total males and 69.6% of total females with ACS had premature presentation (P<0.001). Abdominal obesity was the most prevalent risk factor (66%). Nearly half of all study patients were current smokers (48%). We showed a high prevalence of documented dyslipidemia (48%) as well. An early invasive management strategy was employed in 65% of patients, with no significant gender disparity noted. Vascular access for coronary angiography was most commonly femoral (80% of the time). Emergent percutaneous coronary intervention (PCI) was attempted in 53% of patients. Thrombolytic therapy (using Streptokinase) was used in 24% of included participants. Conclusion: Among Egyptian patients with ACS, premature presentation is common, with a greater male preponderance. Abdominal obesity is the most prevalent risk factor, followed by hypertension. Most traditional risk factors (apart from smoking) were more prevalent in women than men. abstract_id: PUBMED:34869637 Visceral Adiposity Index Plays an Important Role in Prognostic Prediction in Patients With Non-ST-Segment Elevation Acute Coronary Syndrome and Type 2 Diabetes Mellitus Undergoing Percutaneous Coronary Intervention. Background: Visceral adiposity index (VAI), a surrogate marker of adiposity and insulin resistance, has been demonstrated to be significantly related to cardiovascular disease. It remains unclear whether VAI predicts adverse prognosis after percutaneous coronary intervention (PCI) for patients with non-ST-segment elevation acute coronary syndrome (NSTE-ACS) and type 2 diabetes mellitus (T2DM). Methods: A total of 798 participants who met the enrollment criteria were finally included in this study. VAI was determined by waist circumference, body mass index, fasting triglyceride, and high-density lipoprotein cholesterol as previously reported. Adverse prognosis included all-cause death, non-fatal myocardial infarction, non-fatal ischemic stroke, and ischemia-driven revascularization, the composite of which was defined as the primary endpoint. Results: Higher VAI remained a significant and independent risk predictor for the primary endpoint, regardless of adjustment in the various multivariate models [hazard ratio (95% CI) for fully adjusted model: 2.72 (2.02-3.68), p < 0.001]. The predictive value of VAI was further confirmed in a sensitivity analysis in which VAI was treated as a continuous variable. There was a dose-response relationship of VAI with the risk of the primary endpoint (p for overall association < 0.001). Moreover, the ability of VAI to predict the primary endpoint was consistent across subgroups stratified by potential confounding factors (all p for interaction > 0.05). VAI exhibited a significant incremental effect on risk stratification for the primary endpoint beyond existing risk scores, expressed as increased Harrell's C-index, significant continuous net reclassification improvement, and significant integrated discrimination improvement. Conclusion: VAI is a significant indicator for predicting worse prognosis and plays an important role in risk stratification among patients with NSTE-ACS and T2DM undergoing elective PCI. The present findings require confirmation in further large-scale, prospective studies. abstract_id: PUBMED:20466373 Insulin resistance and acute coronary syndrome.
Background: Insulin resistance (IR), which can be quantified by the HOMA index (fasting glucose × fasting insulin / 22.5), is considered the "primum movens" for the development of Metabolic Syndrome. Many authors have suggested that insulin resistance could raise both incidence and mortality of coronary heart disease (CHD). IR is also associated with important predictors of cardiovascular disease, such as increased concentrations of LDL or triglycerides, decreased concentration of HDL, high systolic blood pressure, and visceral obesity. There is accumulating evidence that chronic sub-clinical inflammation, as measured by inflammatory markers such as C-reactive protein (CRP) and fibrinogen, is related to insulin resistance. Aim Of The Study: To clarify whether insulin resistance predicts cardiovascular disease independently of other risk factors, such as hypertension, visceral obesity or dyslipidemia, by focusing on the relation between Acute Coronary Syndrome (ACS) and a high HOMA index. Methods: We evaluated glucose and insulin levels at baseline and in the post-prandial phase in order to estimate the HOMA index in both conditions; we related the data obtained to the incidence of cardiovascular events, also investigating traditional cardiovascular risk factors. The cohort included 118 patients with a clinical diagnosis of ACS and excluded those with type 1 diabetes, acute inflammatory diseases, hepatic or renal failure, disreactive disorders, autoimmunity and cancer. Subjects: Subjects were followed up for a period of 1 year and subdivided into three groups: (1) subjects with elevated HOMA (HOMA ≥ 6); (2) subjects with intermediate HOMA (HOMA < 6 and ≥ 2); (3) subjects with low HOMA (HOMA ≤ 2). We considered as endpoints new cardiovascular events, cerebrovascular events (both TIA and stroke), revascularization procedures (angioplasty or surgery), cardiovascular death, and sudden death. Results: Patients with elevated HOMA had a higher incidence of previous cardio- and cerebrovascular events (p=0.03), myocardial infarction without ST elevation (p=0.005), unstable angina (p=0.01), asymptomatic carotid plaques (p=0.05), and depressed systolic function (p=0.05); we also found a significant statistical correlation between the HOMA index and high levels of CRP, fibrinogen, serum creatinine and TnI. Cardiovascular and cerebrovascular events were registered during follow-up in 61% of patients with elevated HOMA, compared with 25% in the control group; the HOMA index can therefore be considered a negative prognostic variable, also by virtue of its statistical correlation with inflammatory markers, whose predictive power is already known. Conclusions: Beyond traditional cardiovascular risk factors, insulin resistance quantified by the HOMA index appears to have an important prognostic role, both in primary and secondary prevention, in patients with Acute Coronary Syndrome. abstract_id: PUBMED:17652886 Coronary plaque vulnerability in metabolic syndrome: assessment of carotid artery morphology in acute coronary syndrome. Background: The aim of the present study was to investigate the clinical implications of metabolic syndrome and carotid artery morphology for coronary plaque vulnerability in Japanese men with acute coronary syndrome (ACS). Methods And Results: Consecutive ACS Japanese men (n=225) underwent emergent coronary angiography and B-mode ultrasonography within 1 week of the acute coronary event.
With an 11.3-MHz linear array transducer, the morphologies of the common carotid artery were examined. Common carotid arteries with an intima-media thickness > 1.1 mm and an interadventitial diameter > 8.0 mm were considered to be undergoing carotid artery remodeling. Patients were divided into 2 groups based on the number of complex plaques identified by coronary angiography. Abdominal obesity, low level of high-density lipoprotein and carotid artery remodeling were more often observed in patients with multiple, complex coronary plaques than in patients with a single, complex plaque (p<0.03, p<0.03 and p=0.0001, respectively). Metabolic syndrome and carotid artery remodeling were independent predictors of multiple, complex coronary plaques (odds ratio 1.86, p<0.05; odds ratio 5.96, p<0.0001). Conclusion: Metabolic syndrome and carotid artery remodeling might be useful indicators to assess the efficacy of aggressive treatments for secondary prevention of cardiovascular events in ACS Japanese men. abstract_id: PUBMED:22156634 Baseline characteristics, management practices, and long-term outcomes of Middle Eastern patients in the Second Gulf Registry of Acute Coronary Events (Gulf RACE-2). Background And Objectives: Limited data are available on patients with acute coronary syndromes (ACS) and their long-term outcomes in the Arabian Gulf countries. We evaluated the clinical features, management, in-hospital, and long-term outcomes in such a population. Design And Setting: A 9-month prospective, multicenter study conducted in 65 hospitals from 6 countries that also included 30-day and 1-year mortality follow-up. Patients And Methods: ACS patients included those with ST-elevation myocardial infarction (STEMI) and non-ST-elevation acute coronary syndrome (NSTEACS), including non-STEMI and unstable angina. The registry collected the data prospectively. Results: Between October 2008 and June 2009, 7930 patients were enrolled. The mean age (standard deviation) was 56 (17) years; 78.8% were men; 71.2% Gulf citizens; 50.1% with central obesity; and 45.6% with STEMI. A history of diabetes mellitus was present in 39.5%, hypertension in 47.2%, and hyperlipidemia in 32.7%, and 35.7% were current smokers. The median time from symptom onset to hospital arrival for STEMI patients was 178 minutes (interquartile range, 210 minutes); 22.3% had primary percutaneous coronary intervention (PCI) and 65.7% thrombolytic therapy, with 34% receiving therapy within 30 minutes of arrival. Evidence-based medication rates upon hospital discharge were 68% to 95%. In-hospital PCI was done in 21% and coronary artery bypass graft surgery in 2.9%. The in-hospital mortality was 4.6%, at 30 days the mortality was 7.2%, and at 1 year after hospital discharge the mortality was 9.4%; 1-year mortality was higher in STEMI (11.5%) than in NSTEACS patients (7.7%; P<.001). Conclusions: Compared to developed countries, ACS patients in Arabian Gulf countries present at a relatively young age and have higher rates of metabolic syndrome features. STEMI patients present late, and their acute management is poor. In-hospital evidence-based medication rates are high, but coronary revascularization procedures are low. Long-term mortality rates increased severalfold compared with in-hospital mortality.
Background: Egypt is the most populous country in the Middle East and North Africa and has more than 15% of the cardiovascular deaths in the region, but little is known about the prevalence of traditional risk factors and treatment strategies in acute coronary syndrome (ACS) patients across Egypt. Methods: From November 2015 to August 2017, data were collected from 1 681 patients with ACS in 30 coronary care centres, covering 11 governorates across Egypt, spanning the Mediterranean coast, Nile Delta and Upper Egypt, with a focus on risk factors and management strategies. Results: Women constituted 25% of the patients. Premature ACS was common, with 43% of men aged less than 55 years, and 67% of women under 65 years. Most men had ST-elevation myocardial infarction (STEMI) (49%), while a larger percentage of women had unstable angina and non-ST-elevation myocardial infarction (NSTEMI) (32% each; p < 0.001). Central obesity was present in 80% of men and 89% of women, with 32% of men and women having atherogenic dyslipidaemia. Current smoking was reported by 62% of men and by 72% of men under 55 years. A larger proportion of women had type 2 diabetes (53 vs 34% of men), hypertension (69 vs 49%), dyslipidaemia, and obesity (71 vs 41%) (p < 0.001 for all). There were no gender differences in most diagnostic and therapeutic procedures, but among STEMI patients, 51% of men underwent primary percutaneous coronary intervention compared to 46% of women (p = 0.064). Conclusions: Central obesity and smoking are extremely prevalent in Egypt, contributing to an increased burden of premature ACS, which warrants tailored prevention strategies. The recognised tendency worldwide to treat men more aggressively was less pronounced than expected. Answer: The role of central obesity in risk stratification after an acute coronary event has been investigated in several studies, but the findings suggest that central obesity may not add significant prognostic value to the Global Registry of Acute Coronary Events (GRACE) risk score in patients with acute coronary syndrome (ACS). One study specifically aimed to assess whether waist circumference, a surrogate of abdominal adiposity, adds prognostic information to the GRACE risk score. This retrospective cohort study included 285 ACS patients and used the composite of all-cause mortality or myocardial reinfarction within six months as the endpoint. The study found that more than 70% of the population were obese or overweight, with 44.6% presenting abdominal obesity. However, the GRACE risk score showed poor predictive accuracy, and the addition of waist circumference did not improve its discriminatory performance. Therefore, the study concluded that abdominal obesity does not add prognostic information to the GRACE risk score to predict six-month mortality or myocardial reinfarction (PUBMED:24367827). In contrast, other studies have identified various obesity indices and insulin resistance surrogates that may have prognostic implications in ACS patients. For instance, the Visceral Adiposity Index (VAI) was found to be a significant indicator for predicting worse prognosis and plays an important role in risk stratification among patients with non-ST-segment elevation acute coronary syndrome (NSTE-ACS) and type 2 diabetes mellitus (T2DM) undergoing elective percutaneous coronary intervention (PCI) (PUBMED:34869637). 
Additionally, the Triglyceride-glucose (TyG) index was identified as a valuable insulin resistance surrogate for risk prediction and stratification in NSTE-ACS patients receiving PCI, with and without T2DM (PUBMED:34537077). Another study, the Gulf COAST registry, found that different obesity indices may aid in risk stratification of mortality following ACS. It reported a U-shaped association between deciles of obesity indices and mortality risk, with an inverse association between peripheral obesity and in-hospital mortality, and a similar inverse association for 1-year mortality (PUBMED:32943761). In summary, while central obesity measured by waist circumference may not add significant prognostic value to the GRACE risk score in ACS patients (PUBMED:24367827), other studies suggest that different obesity indices and insulin resistance surrogates could be relevant for risk stratification in this patient population (PUBMED:34537077; PUBMED:34869637; PUBMED:32943761).
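For concreteness, below is a minimal sketch of how two of the insulin-resistance surrogates discussed above can be computed: the HOMA index exactly as quoted in PUBMED:20466373 (fasting glucose × fasting insulin / 22.5), and the TyG index as it is commonly defined in the literature (ln of fasting triglycerides × fasting glucose / 2, both in mg/dL). The unit conventions and the example values are assumptions for illustration, not data or code from the studies.

# Illustrative helpers only; formulas as described above, unit conventions assumed.
import math

def homa_ir(fasting_glucose_mmol_l: float, fasting_insulin_uU_ml: float) -> float:
    # HOMA index = fasting glucose (mmol/L, assumed) x fasting insulin (uU/mL, assumed) / 22.5
    return fasting_glucose_mmol_l * fasting_insulin_uU_ml / 22.5

def tyg_index(fasting_tg_mg_dl: float, fasting_glucose_mg_dl: float) -> float:
    # TyG index, commonly defined as ln[TG (mg/dL) x glucose (mg/dL) / 2]
    return math.log(fasting_tg_mg_dl * fasting_glucose_mg_dl / 2)

if __name__ == "__main__":
    print(round(homa_ir(5.5, 10.0), 2))       # ~2.4, near the 'intermediate HOMA' band in PUBMED:20466373
    print(round(tyg_index(150.0, 100.0), 2))  # ~8.9 (hypothetical patient values)

Such surrogates are then typically entered into Cox or logistic models, either dichotomized at a median or per normalized-unit increase, as described in the abstracts above.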
Instruction: The neighborhood energy balance equation: does neighborhood food retail environment + physical activity environment = obesity? Abstracts: abstract_id: PUBMED:24386458 The neighborhood energy balance equation: does neighborhood food retail environment + physical activity environment = obesity? The CARDIA study. Background: Recent obesity prevention initiatives focus on healthy neighborhood design, but most research examines neighborhood food retail and physical activity (PA) environments in isolation. We estimated joint, interactive, and cumulative impacts of neighborhood food retail and PA environment characteristics on body mass index (BMI) throughout early adulthood. Methods And Findings: We used cohort data from the Coronary Artery Risk Development in Young Adults (CARDIA) Study [n=4,092; Year 7 (24-42 years, 1992-1993) followed over 5 exams through Year 25 (2010-2011); 12,921 person-exam observations], with linked time-varying geographic information system-derived neighborhood environment measures. Using regression with fixed effects for individuals, we modeled time-lagged BMI as a function of food and PA resource density (counts per population) and neighborhood development intensity (a composite density score). We controlled for neighborhood poverty, individual-level sociodemographics, and BMI in the prior exam; and included significant interactions between neighborhood measures and by sex. Using model coefficients, we simulated BMI reductions in response to single and combined neighborhood improvements. Simulated increase in supermarket density (from 25th to 75th percentile) predicted inter-exam reduction in BMI of 0.09 kg/m² [estimate (95% CI): -0.09 (-0.16, -0.02)]. Increasing commercial PA facility density predicted BMI reductions up to 0.22 kg/m² in men, with variation across other neighborhood features [estimate (95% CI) range: -0.14 (-0.29, 0.01) to -0.22 (-0.37, -0.08)]. Simultaneous increases in supermarket and commercial PA facility density predicted inter-exam BMI reductions up to 0.31 kg/m² in men [estimate (95% CI) range: -0.23 (-0.39, -0.06) to -0.31 (-0.47, -0.15)] but not women. Reduced fast food restaurant and convenience store density and increased public PA facility density and neighborhood development intensity did not predict reductions in BMI. Conclusions: Findings suggest that improvements in neighborhood food retail or PA environments may accumulate to reduce BMI, but some neighborhood changes may be less beneficial to women. abstract_id: PUBMED:28920072 Associations between neighborhood socioeconomic environment and physical activity in Cuban immigrants. Physical inactivity is a major public health concern because it is a determinant of obesity and obesity-related chronic diseases. Few longitudinal studies have examined the association between neighborhood socioeconomic (SES) environment and change in physical activity behaviors. Additionally, few studies have examined this association in immigrant groups or Hispanic subgroups such as Cubans. This research aimed to determine if neighborhood SES is associated with longitudinal change in moderate-to-vigorous physical activity (MVPA) among Cuban immigrants who participate in the Cuban Health Study in Miami, Florida. Data on 280 participants [mean age: 37.4 (± 4.6), 48.9% women, mean body mass index: 25.0 (± 2.5)] collected at baseline, 12 months and 24 months were analyzed. Minutes of MVPA were objectively measured during each data collection period using accelerometers.
A neighborhood SES score was calculated for each participant's residential census tract from American Community Survey data on median household income, median housing value, educational attainment and occupation. The neighborhood SES score was grouped into tertiles, reflecting low, moderate and high neighborhood SES environment. Multilevel linear models were used to examine the relationship between neighborhood SES and change in MVPA over 24 months. At baseline, 94 (33.6%), 108 (38.6%) and 78 (27.9%) participants resided in low, moderate, and high SES neighborhoods, respectively. After adjusting for age, sex, and body mass index, no difference in average change in MVPA over time was observed between participants residing in low and moderate SES neighborhoods (p=0.48) or low and high SES neighborhoods (p=0.62). In Cuban immigrants, longitudinal change in MVPA may not vary by neighborhood socioeconomic environment. abstract_id: PUBMED:37138266 Neighborhood deprivation, built environment, and overweight in adolescents in the city of Oslo. Background: Even though the social and built environment characteristics of neighborhoods have been studied as potential determinants of social inequalities in obesity among adults, fewer studies have focused on children. Our first aim was to investigate whether there were differences in the food and physical activity environments between different neighborhood deprivation levels in the city of Oslo. We also explored whether there was an association between the prevalence of overweight (including obesity) among adolescents and (i) neighborhood deprivation levels and (ii) food and physical activity environments of the neighborhoods they live in. Methods: We conducted a food and physical activity environment mapping (using ArcGIS Pro) in all neighborhoods of Oslo, which were defined by administrative boundaries (sub-districts). The neighborhood deprivation score was calculated based on the percentage of households living in poverty, unemployment in the neighborhood, and residents with low education. A cross-sectional study including 802 seventh graders from 28 primary schools in Oslo residing in 75 out of 97 sub-districts in Oslo was also performed. MANCOVA and partial correlations were run to compare the built environment distribution between different neighborhood deprivation levels, and multilevel logistic regression analyses were used to explore the effect of neighborhood deprivation and the food and physical activity environments on childhood overweight. Results: We found that deprived neighborhoods had greater availability of fast food restaurants and fewer indoor recreational facilities compared to low-deprived neighborhoods. Additionally, we observed that the residential neighborhoods of the adolescents with overweight had greater availability of grocery and convenience stores when compared to the residential neighborhoods of the adolescents without overweight. Adolescents living in neighborhoods with high deprivation had two-fold higher odds (95% CI = 1.1-3.8) of having overweight compared with adolescents living in neighborhoods with low deprivation, regardless of participants' ethnicity and parental education. However, the built environment did not determine the relationship between neighborhood deprivation and overweight in adolescents. Conclusion: The neighborhoods in Oslo with higher deprivation levels had more obesogenic characteristics than the low-deprived neighborhoods.
Adolescents living in high-deprived neighborhoods were more likely to have overweight than their counterparts from low-deprived neighborhoods. Thus, preventive measures targeting adolescents from high-deprived neighborhoods should be put in place in order to reduce incidence of overweight. abstract_id: PUBMED:34763543 Association of Neighborhood Food Environment and Physical Activity Environment With Obesity: A Large-Scale Cross-Sectional Study of Fifth- to Ninth-Grade Children in Japan. Objective: This study examined the relationship between neighborhood food and physical activity environment, and obesity among elementary and junior high school students in Japan. Methods: The participants were fifth- to ninth-grade children (n=7277), who were attending municipal schools in Japan. Percent overweight (POW) was calculated using their age, gender, height, and weight, which were collected through a questionnaire. A POW of < 20% was considered non-obese, while ≥ 20% was considered obese. Furthermore, using a geographic information system, we investigated the density of convenience stores, fast-food stores, casual restaurants, supermarkets and department stores, parks, sports facilities, stations, and intersections in the school district. Additionally, from the census, we obtained information regarding the population density of the municipality where the participants' schools were located. Multiple logistic regression analysis was used to examine the relationship between obesity and food environment (the food environment model), between obesity and physical activity environment (the physical activity environment model), and among obesity, food, and physical activity environment (the food and physical activity environment model). Results: In the food environment model and the food and physical activity environment model, the density of convenience stores showed a significant positive association. In the physical activity environment model, the density of stations showed a significant negative association. Conclusion: This study's findings can contribute to the development of appropriate community interventions for improving children's health in Japan and similar areas. abstract_id: PUBMED:19337520 Neighborhood food environment and walkability predict obesity in New York City. Background: Differences in the neighborhood food environment may contribute to disparities in obesity. Objectives: The purpose of this study was to examine the association of neighborhood food environments with body mass index (BMI) and obesity after control for neighborhood walkability. Methods: This study employed a cross-sectional, multilevel analysis of BMI and obesity among 13,102 adult residents of New York City. We constructed measures of the food environment and walkability for the neighborhood, defined as a half-mile buffer around the study subject's home address. Results: Density of BMI-healthy food outlets (supermarkets, fruit and vegetable markets, and natural food stores) was inversely associated with BMI. Mean adjusted BMI was similar in the first two quintiles of healthy food density (0 and 1.13 stores/km2, respectively), but declined across the three higher quintiles and was 0.80 units lower [95% confidence interval (CI), 0.27-1.32] in the fifth quintile (10.98 stores/km2) than in the first. The prevalence ratio for obesity comparing the fifth quintile of healthy food density with the lowest two quintiles combined was 0.87 (95% CI, 0.78-0.97). 
These associations remained after control for two neighborhood walkability measures, population density and land-use mix. The prevalence ratio for obesity for the fourth versus first quartile of population density was 0.84 (95% CI, 0.73-0.96) and for land-use mix was 0.91 (95% CI, 0.86-0.97). Increasing density of food outlets categorized as BMI-unhealthy was not significantly associated with BMI or obesity. Conclusions: Access to BMI-healthy food stores is associated with lower BMI and lower prevalence of obesity. abstract_id: PUBMED:35022093 Beyond food swamps and food deserts: exploring urban Australian food retail environment typologies. Objective: 'Food deserts' and 'food swamps' are food retail environment typologies associated with unhealthy diet and obesity. The current study aimed to identify more complex food retail environment typologies and examine temporal trends. Design: Measures of food retail environment accessibility and relative healthy food availability were defined for small areas (SA2s) of Melbourne, Australia, from a census of food outlets operating in 2008, 2012, 2014 and 2016. SA2s were classified into typologies using a two-stage approach: (1) SA2s were sorted into twenty clusters according to accessibility and availability and (2) clusters were grouped using evidence-based thresholds. Setting: The current study was set in Melbourne, the capital city of the state of Victoria, Australia. Subjects: Food retail environments in 301 small areas (Statistical Area 2) located in Melbourne in 2008, 2012, 2014 and 2016. Results: Six typologies were identified based on access (low, moderate and high) and healthy food availability, including one in which no food outlets were present. Over the study period, SA2s experienced an overall increase in accessibility and healthiness. Distribution of typologies varied by geographic location and area-level socio-economic position. Conclusion: Multiple typologies with contrasting access and healthiness measures exist within Melbourne, and these continue to change over time. The majority of SA2s were dominated by unhealthy relative to healthy outlets, with SA2s experiencing growth and disadvantage having the lowest access and a greater proportion of unhealthy outlets. abstract_id: PUBMED:37623185 Neighborhood Social Environment and Body Mass Index: The Mediating Role of Mental Wellbeing. The association between neighborhood built environment and body mass index (BMI) is well-characterized, whereas fewer studies have explored the mechanisms underlying the relationship between neighborhood social environment and obesogenic behaviors. Using data from a random sample of 16,820 residents ≥18 years from all 169 Connecticut towns and seven ZIP Codes in New York, this study examines the influence of neighborhood social environment on residents' mental wellbeing, physical activity, and BMI. Structural equation modeling was conducted to estimate direct and indirect effects of neighborhood social environment on BMI, using mental wellbeing and physical activity as intermediate variables. There were significant total [β(SE) = 0.741 (0.170), p < 0.0001], direct [β(SE) = 0.456 (0.1890), p = 0.016], and indirect [β(SE) = 0.285 (0.061), p < 0.0001] effects of neighborhood social environment on BMI. Low physical activity was a partial mediator of the effect of non-favorable neighborhood social environment on BMI [β(SE) = -0.071 (0.011), p < 0.0001].
The association between neighborhood social environment and BMI was also mediated by mental wellbeing [β(SE) = 0.214 (0.060), p < 0.0001], and by mental wellbeing through physical activity [β(SE) = 0.071 (0.011), p < 0.0001]. Study findings provide further support for building strong social environments to improve population health and suggest that strategies prioritizing mental wellbeing may benefit behavioral interventions aimed at reducing obesity risk and should be a focus of prevention efforts in and of themselves. abstract_id: PUBMED:26780582 Why the Neighborhood Social Environment Is Critical in Obesity Prevention. The continuing obesity epidemic in the USA calls for the examination of antecedents to the well-known risk factors of physical activity and diet. The neighborhood built environment has been extensively studied in relation to obesity, noting an increased risk of development and prevalence of obesity in relation to numerous built environment characteristics (lack of green spaces, higher number of fast food restaurants, low walkability indices). The neighborhood social environment, however, has been less extensively studied but is perhaps an equally important component of the neighborhood environment. The neighborhood social environment, particularly constructs of social capital, collective efficacy, and crime, is associated with obesity among both adults and children. Several studies have identified physical activity as a potential pathway of the association between the neighborhood social environment and obesity. Further work on social networks and norms and residential segregation, as well as the examination of dietary behaviors and mental health as potential mediating pathways, is necessary. Given the existing evidence, intervening on the neighborhood social environment may prove to be an effective target for the prevention of obesity. Intervention studies that promote healthy behaviors and prevent obesity while addressing aspects of the neighborhood social environment are necessary to better identify targets for obesity prevention. abstract_id: PUBMED:31482004 Neighborhood Social Environment and Cardiovascular Disease Risk. Purpose Of Review: Limited physical activity (PA) and obesity are two primary risk factors for cardiovascular disease (CVD). Within a socio-ecological framework, neighborhood social environment may play a key role in influencing PA and obesity. However, the mechanisms underlying this relationship remain ambiguous. Our goals in this review are: (1) to summarize findings from recent studies on neighborhood social environment in relation to PA and obesity as CVD risk factors, and (2) to briefly describe several innovative approaches to assessing neighborhood social environment. Recent Findings: Almost all recent studies assessed neighborhood social environment around residential areas. There were consistent associations between neighborhood social environment and PA and obesity, with some exceptions (indicating null associations or paradoxical associations). However, a focus on residential social environment may limit results because these studies did not account for any exposures occurring away from individuals' homes. Additionally, the majority of studies utilized a cross-sectional design, which limits our ability to make inferences regarding the causality of the association between social environment and PA or obesity as CV risk factors.
Summary: The majority of the studies on neighborhood social environment characterized factors around residential areas and assessed participant activity via self-reported surveys. Future research should leverage tools to account for the spatial mismatch between environmental exposures and outcomes by using global positioning systems, ecological momentary assessments, virtual neighborhood audits, and simulation modeling. These approaches can overcome major limitations by tracking individuals' daily activity and real-time perceptions of neighborhood social environments linked to CVD events. abstract_id: PUBMED:16246381 Environmental influences on food choice, physical activity and energy balance. In this paper, the environment is defined as the macro- and community-level factors, including physical, legal and policy factors, that influence household and individual decisions. Thus, environment is conceived as the external context in which household and individual decisions are made. This paper reviews the literature on the ways the environment affects diet, physical activity, and obesity. Other key environmental factors discussed include economic, legal, and policy factors. Behind the major changes in diet and physical activity in the US and globally lie large shifts in food production, processing, and distribution systems as well as food shopping and eating options, resulting in the increase in availability of energy-dense foods. Similarly, the ways we move at home, work, leisure, and travel have shifted markedly, resulting in substantial reductions in energy expenditure. Many small area studies have linked environmental shifts with diet and activity changes. This paper begins with a review of environmental influences on diet and physical activity, and includes the discussion of two case studies on environmental influences on physical activity in a nationally representative sample of US adolescents. The case studies illustrate the important role of physical activity resources and the inequitable distribution of such activity-related facilities and resources, with high minority, low educated populations at strong disadvantage. Further, the research shows a significant association of such facilities with individual-level health behavior. The inequity in environmental supports for physical activity may underlie health disparities in the US population. Answer: The neighborhood energy balance equation posits that the neighborhood food retail environment combined with the physical activity environment can influence obesity rates. Research has shown that these neighborhood characteristics can indeed impact body mass index (BMI) and obesity, but the relationship is complex and varies by demographic factors and specific environmental features. For instance, the CARDIA study found that improvements in neighborhood food retail or physical activity environments could accumulate to reduce BMI, with increased supermarket and commercial physical activity facility density predicting BMI reductions, particularly in men. However, changes such as reduced fast food restaurant and convenience store density or increased public physical activity facility density did not predict reductions in BMI (PUBMED:24386458). In contrast, a study on Cuban immigrants in Miami, Florida, found no significant difference in the change in moderate-to-vigorous physical activity (MVPA) over time between participants residing in neighborhoods with varying socioeconomic statuses (PUBMED:28920072). 
Research in Oslo indicated that adolescents living in neighborhoods with high deprivation had a two-fold higher odds of being overweight compared to those in low-deprivation neighborhoods, suggesting that neighborhood deprivation levels can influence obesity rates among adolescents (PUBMED:37138266). A large-scale cross-sectional study in Japan found that the density of convenience stores was positively associated with obesity among children, while the density of stations showed a negative association in the physical activity environment model (PUBMED:34763543). In New York City, the density of BMI-healthy food outlets was inversely associated with BMI and obesity, even after controlling for neighborhood walkability (PUBMED:19337520). A study in Melbourne, Australia, identified multiple food retail environment typologies with varying access and healthiness measures, noting that areas experiencing growth and disadvantage had the lowest access to healthy food outlets (PUBMED:35022093). The neighborhood social environment has also been implicated in obesity, with studies suggesting that strong social environments and mental wellbeing may influence physical activity and BMI (PUBMED:37623185, PUBMED:26780582). Overall, while there is evidence to support the idea that both the neighborhood food retail environment and physical activity environment can influence obesity, the relationship is not straightforward and is influenced by a variety of factors, including socioeconomic status, gender, and specific environmental characteristics.
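To make the coefficient-based simulation described in the CARDIA abstract (PUBMED:24386458) concrete, here is a minimal sketch of the kind of calculation involved: shift a neighborhood exposure (supermarket density) from its 25th to its 75th percentile and scale the shift by a fitted regression coefficient. The density values and the coefficient below are purely hypothetical illustrations, not figures from the study.

# Hypothetical illustration of a coefficient-based BMI simulation; values are not from CARDIA.
import numpy as np

supermarket_density = np.array([0.1, 0.3, 0.5, 0.8, 1.2, 1.5, 2.0, 3.5])  # outlets per population unit (hypothetical)
beta_per_unit_density = -0.05  # fitted kg/m^2 change per unit density (hypothetical)

p25, p75 = np.percentile(supermarket_density, [25, 75])
simulated_bmi_change = beta_per_unit_density * (p75 - p25)
print(f"Shift {p25:.2f} -> {p75:.2f}: predicted inter-exam BMI change {simulated_bmi_change:.3f} kg/m^2")

The same pattern extends to combined improvements (e.g., supermarket plus commercial physical activity facility density) by summing the coefficient-weighted shifts, which is how the abstract's simultaneous-improvement estimates are framed.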
Instruction: Effect of non-leisure physical activity on mortality in U.S. adults: does propensity score matching make a difference? Abstracts: abstract_id: PUBMED:22695390 Effect of non-leisure physical activity on mortality in U.S. adults: does propensity score matching make a difference? Purpose: To reanalyze results reported in 2008, using propensity score matching, to test the treatment effect of non-leisure physical activity on survival. Methods: McCullagh's ordinal logit model was used to estimate propensity scores, separately for adults aged 35-59 and 60-74 years at baseline in the National Health and Nutrition Examination Survey I Epidemiologic Followup Study, for three levels of non-leisure activity. Each untreated adult (low non-leisure activity) was matched to two treated adults from the same age group, one with moderate and one with high non-leisure activity. Multivariable Cox proportional hazard regressions were fitted to the matched samples and compared with regressions fitted to unmatched samples. Results: Except for moderate leisure-time activity and age, observed covariates were well balanced after matching. For adults aged 35-59 years, matched regressions showed smaller mortality reductions: 23% for moderate and 32% for high non-leisure activity, compared with low non-leisure activity. For adults aged 60-74 years, matched regressions showed a smaller mortality reduction for moderate non-leisure activity (29%) but a larger reduction for high non-leisure activity (47%). Conclusions: In line with published reviews of conventional multivariable analysis versus propensity score matching, Cox regressions on matched and unmatched samples produced similar estimates of treatment effects for non-leisure activity, but the estimates from matched regressions were usually smaller. abstract_id: PUBMED:19041587 Effects of leisure and non-leisure physical activity on mortality in U.S. adults over two decades. Purpose: To estimate the effects of the components of total physical activity, leisure-time and non-leisure activity, on all-cause mortality over two decades in a large, nationally representative sample of U.S. adults. Methods: We used the first National Health and Nutrition Examination Survey (NHANES I, 1971-1975) and its Epidemiologic Followup Study (NHEFS), which tracked deaths of NHANES I participants through 1992. Using multivariable Cox regression, and multiple imputation for missing values of control variables, we related baseline leisure-time and non-leisure physical activity to all-cause mortality during follow-up, controlling for other risk factors. Adults 35 through 59 years of age (N = 5884) and 60 through 74 years of age (N = 4590) were analyzed separately. Results: For persons aged 35-59, moderate non-leisure activity at baseline significantly reduced mortality risk over the next two decades by about 26%, high non-leisure activity by about 37%, compared with low non-leisure activity. For persons 60-74, risk reductions were 34% and 38%, respectively. Leisure-time activity was associated with lower mortality, but was not consistently significant when both types of activity were entered in the regressions. Conclusions: Over two decades, non-leisure physical activity was associated with a substantial reduction in all-cause mortality. These results contribute to a growing number of studies that support the importance of measuring all physical activity. abstract_id: PUBMED:28368322 Are Structural Changes in Polish Rural Areas Fostering Leisure-Time Physical Activity? 
Background: In this study, we analyze the determinants of leisure-time physical activity (LTPA) of farmers and non-farmers living in rural areas. Methods: We use statistical analysis to describe urban and rural populations, as well as econometric techniques (Heckman regressions and propensity score matching) to assess the role of rural lifestyle in physical activity. Results: World Health Organization (WHO) pro-health PA (physical activity) recommendations are not met by 66% of farmers and 49% of other dwellers in rural areas. Approximately two thirds of them are completely inactive. Farmers enjoy vigorous PA (VPA), cycling and recreational walking less than their non-farming counterparts and are 46% less likely to be active than them; however, the difference disappears when they take up an activity. The amount of PA is negatively correlated with age, but tends to increase for older people compared to those in middle age. Women are 6%-7% less active than men, yet the odds of being active at all are higher for women than for men. Household size is negatively correlated with LTPA. Conclusion: Considering the structural changes, rural area dwellers, especially farmers, require public intervention aimed at increasing their awareness of the advantages of LTPA. abstract_id: PUBMED:32424647 Intensity- and domain-specific physical activity levels between cancer survivors and non-cancer diagnosis individuals: a propensity score matching analysis. Purpose: To examine intensity- and domain-specific physical activity (PA) levels in Korean cancer survivors compared with non-cancer individuals. Methods: We used data from the 2014-2016 Korean National Health and Nutrition Examination Survey (KNHNES) to compare PA levels between 639 cancer survivors and 15,352 non-cancer individuals. Using 1:5 propensity score matching by sex, age, body mass index (BMI), and education level, 3195 non-cancer individuals were selected. The Global Physical Activity Questionnaire (GPAQ) was used to assess PA levels. Results: Cancer survivors were more likely to participate in leisure-time PA (64.5 ± 157.9 vs. 51.8 ± 145.3 min/week, p < 0.002) compared with non-cancer individuals, while no difference was observed between groups in work and transportation. Cancer survivors undergoing treatment participated in more leisure-time PA (80.9 ± 169.2 vs. 56.6 ± 151.8 min/week, p < 0.02) compared with cancer survivors who completed cancer treatment. However, we found no significant difference in vigorous-intensity PA (18.8 ± 113.9 vs. 20.5 ± 156.1 min/week, p = 0.56) and moderate-intensity PA (186.2 ± 313.1 vs. 203.1 ± 355.3 min/week, p = 0.17) levels between cancer survivors and non-cancer individuals. Conclusions: This study showed that cancer survivors participated more in leisure-time PA compared with age-, gender-, BMI-, and education-matched non-cancer individuals. abstract_id: PUBMED:37585501 Leisure time and occupational physical activity, overall and cardiovascular mortality: a 24-year follow-up in the OPERA study. Background: In earlier studies, the health benefits of physical activity have only been related to leisure time physical activity (LTPA). High occupational physical activity (OPA) might even be harmful. The current physical activity recommendations do not separate OPA from LTPA. We investigated the effect of LTPA and OPA on cardiovascular morbidity and mortality during long-term follow-up. We also examined how heavy work affects the benefits of leisure time exercise.
Material And Methods: The study was part of the OPERA study and the baseline examinations were conducted between the years 1991 and 1993. Follow-up of events continued until the end of the year 2020. Study subjects (n = 1044) were divided into four groups according to their LTPA ("no exercise", "irregular", "regular" and "heavy regular") and into three groups according to their OPA ("no activity", "mild" and "heavy"). The amount of exercise was self-reported and the exercise status was defined at the beginning of the study. Study subjects were followed up for their overall mortality (26 years), fatal and non-fatal CVD events (24 and 20 years) and heart failure (20 years). The survival analysis was performed using Kaplan-Meier curves and Cox proportional hazards models. Results: "Heavy" OPA group subjects belonging to the "irregular" (less than 1-2 times 30 min exercise per week) LTPA group experienced the lowest overall mortality compared to other LTPA groups. Also, overall mortality was increased in the "mild" (p = 0.002) and CVD mortality in the "heavy" (p = 0.005) OPA group compared to "no activity". The incidence of heart failure was increased in the "no exercise" LTPA group compared to the "heavy regular" (p = 0.015) group. Conclusions: Study subjects who were in physically demanding occupations (heavy OPA) seemed to benefit from less LTPA than WHO currently recommends. Thus, we suggest targeting different LTPA recommendations to different OPA groups. abstract_id: PUBMED:34702239 Leisure-time physical activities and the risk of cardiovascular mortality in the Malmö Diet and Cancer Study. Background: The association between leisure-time physical activity and cardiovascular mortality has been previously studied, but few studies have focused on specific activities and intensities. Methods: The association between different leisure-time physical activities and cardiovascular mortality was investigated among 25,876 individuals without diabetes or cardiovascular disease from the population-based Malmö Diet and Cancer Study cohort. The individuals estimated the average duration spent on 17 physical activities at baseline in 1991-1996 and after 5 years. Cardiovascular mortality was obtained from a register during a mean of 20 years of follow-up. Results: A total leisure-time physical activity of 15-25 metabolic equivalent task (MET) hours/week was associated with a decreased risk of cardiovascular mortality (HR 15-25 vs. < 7.5 MET-h/week = 0.80, 95% CI 0.69-0.93), with no further risk reduction at higher levels. Several high-intensity activities (i.e., lawn tennis and running) and moderate-intensity activities (i.e., golf, cycling and gardening) were associated with a reduced risk. Individuals who engaged in high-intensity physical activity for an average of 2.29 MET-h/week (30 min/week) had an 18% (95% CI 0.72-0.93) reduced risk of cardiovascular mortality compared with non-participants, and no further risk reductions were observed at higher levels. Decreased risk was observed among individuals who had started (HR 0.56, 95% CI 0.32-0.97) or continued (HR 0.49, 95% CI 0.36-0.66) high-intensity activities at the five-year follow-up. Conclusions: Moderate- and high-intensity leisure-time physical activities reduced the risk of cardiovascular mortality. With regard to total leisure-time physical activity, the largest risk reduction was observed for 15-25 MET-h/week (equivalent to walking for approximately 5 h/week). abstract_id: PUBMED:23867814 Leisure time physical activity and mortality.
Background: Some studies indicate that a large part of the beneficial effect of physical activity on mortality is confined to a threshold effect of participation. Methods: Self-reported physical activity was investigated in relation to all-cause mortality in the Danish Diet, Cancer and Health cohort, including 29,129 women and 26,576 men aged 50-64 years at baseline 1993-1997. Using Cox proportional hazards models, we investigated the associations between mortality rate and leisure time physical activity by exploring 1) participation (yes/no) in each type of activity; 2) a simple dose-response relationship with hours spent on each activity, supplemented with indicators of participation in each activity; and 3) inflexion or nonmonotonic dose-response relationships using linear splines. Results: A total of 2696 women and 4044 men died through March 2010. We found lower mortality with participation in sports (for women, mortality rate ratio = 0.75, 95% confidence interval = 0.69-0.81; for men, 0.78, 0.73-0.84), cycling (for women, 0.77, 0.71-0.84; for men, 0.90, 0.84-0.96), or gardening (for women, 0.84, 0.78-0.91; for men, 0.73, 0.68-0.79) and in men participating in do-it-yourself activity (0.77, 0.71-0.84). A weak adverse dose response was seen for walking and gardening, but the association was small (1-2% increase in mortality per additional hour). We found no signs of inflexion or nonmonotonic effects of additional hours spent on each activity. Conclusion: Mortality was lower with participation in specific leisure time physical activities, but not with more time spent on those activities. This could suggest that avoiding a sedentary lifestyle is more important than a high volume of activity. Nonparticipation in these types of physical activity may be considered a risk factor. abstract_id: PUBMED:28728482 Impact of persistence and non-persistence in leisure time physical activity on coronary heart disease and all-cause mortality: The Copenhagen City Heart Study. Aims: The aim of this study was to investigate the impact of persistence and non-persistence in leisure time physical activity on coronary heart disease and all-cause mortality. Methods and results: In the Copenhagen City Heart Study, we prospectively followed 12,314 healthy subjects for 33 years of maximum follow-up with at least two repeated measures of physical activity. The association between persistence and non-persistence in leisure time physical activity, coronary heart disease and all-cause mortality was assessed by multivariable Cox regression analyses. Coronary heart disease mortality for persistent leisure time physical activity compared to persistent sedentary activity was as follows: light, hazard ratio (HR) 0.76, 95% confidence interval (CI) 0.63-0.92; moderate, HR 0.52, 95% CI 0.41-0.67; and high physical activity, HR 0.51, 95% CI 0.30-0.88. The differences in longevity were 2.8 years for light, 4.5 years for moderate and 5.5 years for high physical activity. A substantial increase in physical activity was associated with lower coronary heart disease mortality (HR 0.75; 95% CI 0.52-1.08) corresponding to 2.4 years longer life, whereas a substantial decrease in physical activity was associated with higher coronary heart disease mortality (HR 1.61; 95% CI 1.11-2.33) corresponding to 4.2 years shorter life than the unchanged group. A similar pattern was observed for all-cause mortality.
Conclusion We found inverse dose-response relationships between persistent leisure time physical activity and both coronary heart disease and all-cause mortality. A substantial increase in physical activity was associated with a significant gain in longevity, whereas a decrease in physical activity was associated with even greater loss of longevity. abstract_id: PUBMED:24096895 Leisure-time aerobic physical activity, muscle-strengthening activity and mortality risks among US adults: the NHANES linked mortality study. Background: Regular physical activity elicits multiple health benefits in the prevention and management of chronic diseases. We examined the mortality risks associated with levels of leisure-time aerobic physical activity and muscle-strengthening activity based on the 2008 Physical Activity Guidelines for Americans among US adults. Methods: We analysed data from the 1999 to 2004 National Health and Nutrition Examination Survey with linked mortality data obtained through 2006. Cox proportional HRs with 95% CIs were estimated to assess risks for all-causes and cardiovascular disease (CVD) mortality associated with aerobic physical activity and muscle-strengthening activity. Results: Of 10 535 participants, 665 died (233 deaths from CVD) during an average of 4.8-year follow-up. Compared with participants who were physically inactive, the adjusted HR for all-cause mortality was 0.64 (95% CI 0.52 to 0.79) among those who were physically active (engaging in ≥150 min/week of the equivalent moderate-intensity physical activity) and 0.72 (95% CI 0.54 to 0.97) among those who were insufficiently active (engaging in >0 to <150 min/week of the equivalent moderate-intensity physical activity). The adjusted HR for CVD mortality was 0.57 (95% CI 0.34 to 0.97) among participants who were insufficiently active and 0.69 (95% CI 0.43 to 1.12) among those who were physically active. Among adults who were insufficiently active, the adjusted HR for all-cause mortality was 44% lower by engaging in muscle-strengthening activity ≥2 times/week. Conclusions: Engaging in aerobic physical activity ranging from insufficient activity to meeting the 2008 Guidelines reduces the risk of premature mortality among US adults. Engaging in muscle-strengthening activity ≥2 times/week may provide additional benefits among insufficiently active adults. abstract_id: PUBMED:30157685 Associations of leisure-time physical activity with cardiovascular mortality: A systematic review and meta-analysis of 44 prospective cohort studies. Background: Many cohort studies within the past few decades have shown the protective effect of leisure-time physical activity on cardiovascular mortality. To summarise the evidence from prospective cohort studies on the relationship between the amount of leisure-time physical activity and the risk of cardiovascular mortality, a dose-response meta-analysis was conducted in this study. Methods And Results: Electronic databases, including PubMed and Embase databases, Scopus and Cochrane Library, were systemically retrieved by two investigators from inception to 14 June 2018 for related studies. The maximum adjusted hazard ratios (HRs) and 95% confidence intervals (CIs) were extracted, and a dose-response analysis was conducted using the restricted cubic splines. Finally, a total of 44 studies comprising 1,584,181 participants was enrolled into this meta-analysis. 
The HRs of cardiovascular mortality for moderate and high leisure-time physical activity were 0.77 (95% CI 0.74-0.81) and 0.73 (95% CI 0.69-0.77), respectively. Among these 44 studies, 19 were eligible for the dose-response meta-analysis, which suggested a linear negative correlation of leisure-time physical activity with cardiovascular mortality, regardless of age, gender and the presence of underlying cardiovascular disease or not. Conclusions: Leisure-time physical activity shows a linear negative correlation with the risk of cardiovascular mortality regardless of age, gender and the presence of cardiovascular disease or not. However, the cardiovascular benefits of leisure-time physical activity is decreased for those aged over 65 years or those with a history of cardiovascular disease. Moreover, leisure-time physical activity displays more cardiovascular benefits to people followed up for over 10 years than to those followed up for less than 10 years. Besides, high-intensity leisure-time physical activity has more obvious cardiovascular benefits than those of moderate-intensity leisure-time physical activity. Answer: The effect of non-leisure physical activity on mortality in U.S. adults was examined using propensity score matching to test the treatment effect of non-leisure physical activity on survival. The study reanalyzed results reported in 2008 using this method. Propensity score matching was used to create a more balanced comparison between adults with different levels of non-leisure activity, by matching each adult with low non-leisure activity to two adults with moderate and high non-leisure activity, respectively. The results from the matched regressions showed smaller mortality reductions for moderate non-leisure activity (23% for adults aged 35-59 years and 29% for adults aged 60-74 years) and varied for high non-leisure activity (32% for adults aged 35-59 years and 47% for adults aged 60-74 years) when compared with low non-leisure activity. These findings suggest that propensity score matching can make a difference in the estimation of treatment effects for non-leisure activity, usually resulting in smaller estimates from matched regressions compared to unmatched samples (PUBMED:22695390).
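The matching approach referenced in the abstracts and answer above (for example, the 1:5 matching in PUBMED:32424647 and the 1:2 matching described in the answer) can be illustrated with a short sketch. This is not code from any of the cited studies: the covariates, data, and matching ratio below are hypothetical, the example matches with replacement, and a real analysis would also check covariate balance after matching.

```python
# Minimal sketch of k:1 nearest-neighbor propensity score matching.
# All column names and data are invented for illustration.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def match_controls(df, covariates, treat_col="treated", k=1):
    # 1. Estimate propensity scores P(treated | covariates) with logistic regression.
    model = LogisticRegression(max_iter=1000).fit(df[covariates], df[treat_col])
    df = df.assign(ps=model.predict_proba(df[covariates])[:, 1])

    treated = df[df[treat_col] == 1]
    controls = df[df[treat_col] == 0]

    # 2. For each treated subject, pick the k controls with the closest propensity
    #    score (with replacement, so the same control can be reused).
    nn = NearestNeighbors(n_neighbors=k).fit(controls[["ps"]])
    _, idx = nn.kneighbors(treated[["ps"]])
    matched_controls = controls.iloc[idx.ravel()]
    return pd.concat([treated, matched_controls])

# Synthetic example: 1:2 matching on age, sex, and BMI.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "age": rng.integers(20, 80, 500),
    "sex": rng.integers(0, 2, 500),
    "bmi": rng.normal(25, 4, 500),
    "treated": rng.integers(0, 2, 500),
})
matched = match_controls(df, ["age", "sex", "bmi"], k=2)
print(matched["treated"].value_counts())
```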
Instruction: Does treatment of split-thickness skin grafts with negative-pressure wound therapy improve tissue markers of wound healing in a porcine experimental model? Abstracts: abstract_id: PUBMED:22958590 Negative-pressure wound dressings to secure split-thickness skin grafts in the perineum. Several studies have shown that negative-pressure wound dressings can secure split-thickness skin grafts and improve graft survival. However, in anatomically difficult body regions such as the perineum, it is questionable whether these dressings have similar beneficial effects. In this study, we evaluated the effects of negative-pressure wound dressings on split-thickness skin grafts in the perineum by comparing wound healing rate and complication rate with those of tie-over dressings. A retrospective chart review was performed for the patients who underwent a split-thickness skin graft to reconstruct perineal skin defects between January 2007 and December 2011. After grafting, the surgeon selected patients to receive either a negative-pressure dressing or a tie-over dressing. In both groups, the initial dressing was left unchanged for 5 days, then changed to conventional wet gauze dressing. Graft success was assessed 2 weeks after surgery by a single clinician. A total of 26 patients were included in this study. The mean age was 56·6 years and the mean wound size was 273·1 cm(2). Among them, 14 received negative-pressure dressings and 12 received tie-over dressings. The negative-pressure dressing group had a higher graft take rate (P = 0·036) and a shorter time to complete healing (P = 0·01) than the tie-over dressing group. The patients with negative-pressure dressings had a higher rate of graft success and a shorter time to complete healing, and these differences were statistically significant. Negative-pressure wound dressing can be a good option for effective management of skin grafts in the perineum. abstract_id: PUBMED:31413962 One-step approach for full-thickness skin defect reconstruction in rats using minced split-thickness skin grafts with Pelnac overlay. Background: Split-thickness skin grafting is the current gold standard for the treatment of traumatic skin loss. However, for patients with extensive burns, split-thickness skin grafting is limited by donor skin availability. Grafting split-thickness skin minced into micrografts increases the expansion ratio but may reduce wound repair quality. Dermal substitutes such as Pelnac can enhance the healing of full-thickness skin wounds, but their application currently requires two surgeries. The present study investigated whether it is possible to repair full-thickness skin defects and improve wound healing quality in a single surgery using Pelnac as an overlay of minced split-thickness skin grafts in a rat model. Methods: A full-thickness skin defect model was established using 10-week-old male Sprague-Dawley rats. The animals were randomly divided into control and experimental groups in which Vaseline gauze and Pelnac, respectively, were overlaid on minced split-thickness skin grafts to repair the defects. Wound healing rate and quality were compared between the two groups. For better illustration of the quality of wound healing, some results were compared with those obtained for normal skin of rats. Results: We found that using Pelnac as an overlay for minced split-thickness skin grafts accelerated wound closure and stimulated cell proliferation and tissue angiogenesis.
In addition, this approach enhanced collagen synthesis and increased the formation of basement membrane and dermis as well as the expression of growth factors related to wound healing while reducing scar formation. Conclusions: Using minced split-thickness skin grafts overlaid with Pelnac enables the reconstruction of full-thickness skin defects in a single step and can increase the healing rate while improving the quality of wound healing. abstract_id: PUBMED:22846954 Does treatment of split-thickness skin grafts with negative-pressure wound therapy improve tissue markers of wound healing in a porcine experimental model? Background: Negative-pressure wound therapy (NPWT) has been used to treat wounds for more than 15 years and, more recently, has been used to secure split-thickness skin grafts. There are some data to support this use of NPWT, but the actual mechanism by which NPWT speeds healing or improves skin graft take is not entirely known. The purpose of this project was to assess whether NPWT improved angiogenesis, wound healing, or graft survival when compared with traditional bolster dressings securing split-thickness skin grafts in a porcine model. Methods: We performed two split-thickness skin grafts on each of eight 30 kg Yorkshire pigs. We took graft biopsies on postoperative days 2, 4, 6, 8, and 10 and submitted the samples for immunohistochemical staining, as well as standard hematoxylin and eosin staining. We measured the degree of vascular ingrowth via immunohistochemical staining for von Willebrand factor to better identify blood vessel epithelium. We determined the mean cross-sectional area of blood vessels present for each representative specimen, and then compared the bolster and NPWT samples. We also assessed each graft for incorporation and survival at postoperative day 10. Results: Our analysis of the data revealed that there was no statistically significant difference in the degree of vascular ingrowth as measured by mean cross-sectional capillary area (p = 0.23). We did not note any difference in graft survival or apparent incorporation on a macroscopic level, although standard hematoxylin and eosin staining indicated that microscopically, there seemed to be better subjective graft incorporation in the NPWT samples and a nonsignificant trend toward improved graft survival in the NPWT group. Conclusion: We were unable to demonstrate a significant difference in vessel ingrowth when comparing NPWT and traditional bolster methods for split-thickness skin graft fixation. More studies are needed to elucidate the manner by which NPWT exerts its effects and the true clinical magnitude of these effects. abstract_id: PUBMED:35735663 Application of Negative Pressure Therapy on Skin Grafts after Soft-Tissue Reconstruction: A Prospective Observational Study. A split-thickness skin graft (STSG) is one of the main tools used in the reconstruction of skin defects. Negative-pressure wound therapy (NPWT) has been widely used as adjunct therapy for wound healing for decades. Few studies have examined the outcomes of NPWT use as a postoperative dressing for STSGs. This study aimed to compare the outcomes of the application of NPWT versus conventional dressing on STSGs after soft-tissue reconstruction. A prospective observational study was performed at the King Abdullah Medical City. A total of 18 patients with STSGs for acute or chronic skin defects were recruited.
Patients from the two groups—10 patients in the NPWT group and 8 in the No-NPWT group—were postoperatively evaluated for three weeks. Assessment included the STSG take rate, wound healing, pain, infection, hematoma formation, and the need to re-graft the same recipient area. Our data demonstrated a higher mean skin graft take rate in the second and third weeks of the No-NPWT group compared to the NPWT group, but it was not statistically significant (p > 0.05). No significant differences between the two groups in terms of wound healing, pain, infection, hematoma formation, and the need to re-graft (p > 0.05) were found. Our study showed that the conventional dressing of STSGs is not inferior to NPWT. In addition, conventional dressing was shown to be easier to use and less expensive to apply, as well as having a higher skin graft take rate and lower infection rate. abstract_id: PUBMED:29292216 Negative-pressure therapy versus conventional therapy on split-thickness skin graft: A systematic review and meta-analysis. Objective: To compare the clinical outcomes of negative-pressure wound therapy (NPWT) versus conventional therapy on split-thickness skin after grafting surgery. Design: Meta-analysis. Background: Split-thickness skin grafts are widely used in reconstruction of large skin defects. Conventional therapy causes pain during dressing changing. NPWT is an alternative method to cover the wound bed. Methods: The Pubmed, Embase, and Cochrane databases were searched for randomized controlled trials (RCTs) or cohort studies for articles published between 1993 and April 2017 comparing NPWT to conventional wound therapy for split-thickness skin grafts. The rate of graft take was the primary outcome of this meta-analysis. Wound infection and reoperation rate of the wound were secondary outcomes. Data analysis was conducted using the Review Manager 5.3 software. Results: Five cohort studies and seven RCTs including 653 patients were eligible for inclusion. Patients treated with NPWT had a significantly higher rate of graft take compared to those treated with conventional therapy [MD = 7.02, (95% CI 3.74, 10.31)] (P = .00). NPWT was associated with a reduction in reoperation [RR = 0.28, (95% CI 0.14, 0.55)] (P = .00). The reduction in wound infection was not significant [RR = 0.63, (95% CI 0.31, 1.27)] (P = .20). Conclusion: Compared with conventional therapy, NPWT significantly increases the rate of graft take and reduces the rate of reoperation when applied to cover the wound bed with split-thickness skin graft. No significant impact on wound infection was found in this study. abstract_id: PUBMED:32320360 In Situ Forming Nutritional and Temperature Sensitive Scaffold Improves the Esthetic Outcomes of Meshed Split-Thickness Skin Grafts in a Porcine Model. Objective: Full-thickness burn wounds require immediate coverage, and the primary clinical approaches comprise of skin allografts and autografts. The use of allografts is often temporary due to the antigenicity of allografts. In contrast, the availability of skin autografts may be limited in large burn injuries. In such cases, skin autografts can be expanded through the use of a skin mesher, creating meshed split-thickness skin grafts (MSTSGs). MSTSGs have revolutionized the treatment of large full-thickness burn injuries since the 1960s. However, contractures and poor esthetic outcomes remain a problem. 
We previously formulated and prepared an in situ forming skin substitute, called MeshFill (MF), which can conform to complex shapes and contours of wounds. The objective of this study was to assess the esthetic and wound healing outcomes in full-thickness wounds treated with a combination of MF and MSTSG in a porcine model. Approach: Either MSTSGs or MSTSG+MF was applied to full-thickness excisional wounds in Yorkshire pigs. Wound healing outcomes were assessed using histology, immunohistochemistry, and wound surface area analysis from day 10 to 60. Clinical evaluation of wounds was utilized to assess esthetic outcomes. Results: The results demonstrated that the combination of MSTSGs and MF improved wound healing and esthetic outcomes. Innovation: Effects of MSTSGs and reconstitutable liquid MF in a full-thickness porcine model were investigated for the first time. Conclusion: MF provides promise as a combination therapeutic regimen to improve wound healing and esthetic outcomes. abstract_id: PUBMED:28325054 The use of negative pressure wound therapy in the fixation of split-thickness skin grafts Introduction: Negative pressure wound therapy is one of the latest methods of dealing with complicated healing wounds. It promotes granulation, mechanically attracts the edges of the wound, removes secretions, reduces the number of bacteria in the wound and reduces swelling. In addition to its use to start and enhance the healing process, this method is also important in the fixation of split-thickness skin grafts in non-ideal conditions. The goal of this article is to establish basic indications for negative pressure fixation of meshed split-thickness skin grafts in non-ideal conditions in the wound and to assess the impact of contamination of wounds on engraftment using vacuum therapy. Additional goals are to verify the use of this method of fixation in defects of various etiologies (trauma, ischemia), to optimize and determine the advantages and disadvantages of fixation of grafts using this method in clinical practice, and to evaluate the effectiveness of fixation of meshed split-thickness skin grafts. Methods: A set of 89 operated patients of both sexes, various ages, and various defect etiologies, treated in non-ideal conditions, was evaluated; the percentage of engraftment was assessed statistically according to the etiology of the defect, microbial contamination and location of the defect. Defects were measured in vivo using a centimeter measure at the point of maximum length and width. Results: Our set included 89 persons (65 males and 24 females) in the following age groups: up to 30 years, 11 persons; 30-50 years, 19 persons; 50-70 years, 38 persons; and above 70 years, 21 persons, with negative microbial contamination of the defect in 20 cases, contamination with one germ in 33 cases, contamination with two germs in 22 cases and contamination with three germs in 14 cases. We obtained 100% engraftment of StSG in 68 cases, 90-99% engraftment in 7 cases, 80-89% engraftment in 5 cases, 70-79% engraftment in 7 cases, and the 60-69% and 50-59% sets of engraftment were combined because of the low number of patients in this set. 51 of the patients had a traumatic origin of their defect, 22 had an ischemic origin of their defect and 16 had a different origin of their defect. We found a significant relationship between contamination and the percentage of engraftment, as well as dependence between patient age and the percentage of engraftment.
Conclusion: Negative pressure fixation of meshed split-thickness skin grafts seems to be a convenient method of fixation in patients with defects of various origins in non-ideal conditions; this method increases the percentage of engraftment and apparently reduces the time required for fixation of the graft and the length of hospitalisation. We obtained 100% engraftment of StSG using negative pressure fixation. We concluded that traumatic origin had no effect on the percentage of engraftment, while ischemic origin had a significant effect on engraftment. Also, negative contamination of the defect had a positive effect on StSG engraftment, and contamination with three microbial germs had a significant negative effect on the percentage of StSG engraftment using negative pressure fixation. Key words: negative pressure therapy (NPWT), plastic surgery, skin grafts, complicated wounds. abstract_id: PUBMED:36042464 Hypergranulation over a meshed split-thickness skin graft, a complication of negative-pressure wound therapy: a case report. Background: We present a case of a rare complication of negative-pressure wound therapy (NPWT) in which a meshed split-thickness skin graft (STSG) fixed by NPWT was suspected to have failed because of hypergranulation. However, the meshed STSG was integrated within 5 days of NPWT cessation. Case Presentation: A 22-year-old Asian man sustained 25% total-body-surface-area flame burns. After multiple operations, an ulcer was present on the proximal left thigh. On day 37 after admission, the ulcer was debrided, and an 11/1000-inch (0.28 mm) skin graft was taken from the ipsilateral thigh and meshed, using a 1:1.5 ratio. NPWT was applied to the donor and recipient sites with a continuous negative pressure of 125 mmHg. On day 43, NPWT was discontinued. The skin grafts were not identified on the surface of the granulation tissue. With topical ointment therapy, rapid epithelialization of the ulcer was observed as the granulation tissue regressed. On day 48, the recipient site had completely epithelialized. Conclusions: Hypergranulation tissue rarely covered the meshed STSGs when the grafts were fixed by NPWT. In such cases, immediate debridement should be avoided, and conservative treatment should be initiated. abstract_id: PUBMED:32842398 Clinical application of negative-pressure wound therapy in split-thickness skin grafting at hard-to-fix sites Objective: To compare the clinical effects of continuous negative-pressure wound therapy (NPWT) and conventional pressure dressing at hard-to-fix sites after split-thickness skin grafting. Methods: From September 2017 to August 2019, 129 patients who met the inclusion criteria and had split-thickness skin grafting at hard-to-fix sites were admitted to the First Affiliated Hospital of Air Force Medical University and included in this retrospective cohort study. The patients were divided into an NPWT group (67 patients, 41 males and 26 females, aged (32±6) years) and a conventional pressure dressing group (62 patients, 37 males and 25 females, aged (30±5) years) according to whether NPWT was applied to the hard-to-fix sites after split-thickness skin grafting.
After debridement and spilt-thickness skin grafting at hard-to-fix sites in patients of 2 groups, the wounds of patients in conventional pressure dressing group were applied with conventional pressure bandaging after being filled with dry gauze; for the wounds of patients in NPWT group, the semi-permeable membrane was pasted and sealed for continuous negative pressure suction after filled with dry gauze and placed the drainage foam or drainage tube, with the negative pressure ranging from -16.6 to -9.9 kPa. The bandage was opened during the first dressing change on the 5th day after surgery in NPWT group and on the 7th day after surgery in conventional pressure dressing group. The skin graft surviving area and proportion, the area and proportion of hematoma, the incidence of common complications of skin graft were observed and calculated. The times of postoperative dressing change and the length of hospital stay were counted. Data were statistically analyzed with two independent sample t test, Cochran & Cox approximate t test, chi-square test, and Fisher's exact probability test. Results: (1) At the first dressing change, the skin graft surviving area of patients in NPWT group was (420±94) cm(2), which was significantly larger than (322±97) cm(2) in conventional pressure dressing group (t'=12.33, P<0.01); the skin graft surviving area proportion of patients in NPWT group was (97.0±2.3)%, which was significantly higher than (74.4±4.8)% in conventional pressure dressing group (t'=50.11, P<0.01). (2) At the first dressing change, the skin hematoma area of patients in conventional pressure dressing group was (31.7±10.1) cm(2), which was significantly larger than (3.2±0.7) cm(2) in NPWT group (t'=23.04, P<0.01); the skin hematoma area proportion of patients in conventional pressure dressing group was (7.3±2.3)%, which was significantly higher than (0.7±0.3)% in NPWT group (t'=76.21, P<0.01). (3) At the first dressing change, there was 1 case of skin movement and no case of skin graft edge tear in NPWT group with an incidence of 1.5% (1/67). In the conventional pressure dressing group, there were 4 cases of skin movement and 2 cases of skin graft edge tear with an incidence of 9.7% (6/62), P<0.05. The incidence of complication of skin graft of patients in NPWT group was significantly lower than that in conventional pressure dressing group (P<0.05). (4) The times of postoperative dressing change of patients in NPWT group was significantly less than that in conventional pressure dressing group (t=7.93, P<0.01). The postoperative length of hospital stay in NPWT group was significantly less than that in conventional pressure dressing group (t=11.71, P<0.01). Conclusions: Continuous NPWT can effectively promote wound healing, improve the survival rate of skin graft, reduce the incidence of complications after skin grafting, and shorten the length of hospital stay in split-thickness skin grafting at hard-to-fix sites. abstract_id: PUBMED:23550662 Effects of negative pressure wound therapy on healing of free full-thickness skin grafts in dogs. Objective: To compare healing of free, full-thickness, meshed skin grafts under negative pressure wound therapy (NPWT) with bolster dressings in dogs. Study Design: Randomized, controlled experimental study, paired design. Animals: Dogs (n = 5) METHODS: Full-thickness skin wounds (4 cm × 1.5 cm) were created bilaterally on the antebrachia of 5 dogs (n = 10). Excised skin was grafted to the contralateral limb. 
Grafts were randomized to NPWT or bolster dressings (control; CON). NPWT was applied continuously for 7 days. Grafts were evaluated on Days 2, 4, 7, 10, 14, and 17, biopsied on days 0, 4, 7, and 14, and had microbial culture on Day 7. Outcome variables were: time to first appearance of granulation tissue, percent graft necrosis, and percent open mesh. Significance was set at P < .05. Histologic findings, culture results, and graft appearance were reported. Results: Granulation tissue appeared earlier in the NPWT grafts compared with CON grafts. Percent graft necrosis and remaining open mesh area were both greater in CON grafts compared with NPWT grafts at most time points. Histologic results showed no significant difference in all variables measured, and all cultures were negative. Conclusions: Variables of graft acceptance were superior when NPWT was used in the first week post-grafting. Fibroplasia was enhanced, open meshes closed more rapidly and less graft necrosis occurred with NPWT application. More preclinical studies are required to evaluate histologic differences. Answer: The treatment of split-thickness skin grafts with negative-pressure wound therapy (NPWT) did not show a statistically significant difference in the degree of vascular ingrowth as measured by mean cross-sectional capillary area when compared with traditional bolster methods for split-thickness skin graft fixation in a porcine experimental model. The study assessed angiogenesis, wound healing, or graft survival and found no significant difference in vessel ingrowth or macroscopic graft survival or incorporation, although there seemed to be better subjective graft incorporation in the NPWT samples and a nonsignificant trend toward improved graft survival in the NPWT group (PUBMED:22846954). More studies are needed to elucidate the manner by which NPWT exerts its effects and the true clinical magnitude of these effects.
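As background to the pooled effect sizes quoted in the meta-analysis above (PUBMED:29292216), summary estimates of this kind are typically obtained by inverse-variance weighting of study-level effects. The sketch below shows the generic fixed-effect calculation for a pooled risk ratio; the 2x2 counts are invented for demonstration and are not the data from the cited trials, which were analysed in Review Manager 5.3 and may have used a different (e.g., random-effects) model.

```python
# Illustrative fixed-effect (inverse-variance) pooling of log risk ratios.
# Study counts are hypothetical: (events_treated, n_treated, events_control, n_control).
import math

studies = [(3, 40, 9, 38), (2, 55, 7, 50), (4, 60, 10, 62)]

sum_w = 0.0
sum_w_log_rr = 0.0
for a, n1, c, n2 in studies:
    log_rr = math.log((a / n1) / (c / n2))
    var = 1 / a - 1 / n1 + 1 / c - 1 / n2   # approximate variance of log RR
    w = 1 / var
    sum_w += w
    sum_w_log_rr += w * log_rr

pooled = sum_w_log_rr / sum_w
se = math.sqrt(1 / sum_w)
low, high = math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se)
print(f"Pooled RR = {math.exp(pooled):.2f} (95% CI {low:.2f}-{high:.2f})")
```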
Instruction: Cervical lymph node metastasis from an unknown primary: is a tonsillectomy necessary? Abstracts: abstract_id: PUBMED:9308930 Cervical lymph node metastasis from an unknown primary: is a tonsillectomy necessary? Purpose: The detection of an infraclinical primary by tonsillectomy in case of cervical lymph node of an epidermoid carcinoma with unknown primary after a radical neck dissection, allows avoiding irradiation of the normal larynx. The aim of this study is to quantify the rate of tonsil primary to justify this procedure. Methods And Materials: From 1969 to 1992, 87 patients had a tonsillectomy as part of the workup for cervical nodal metastasis of an epidermoid carcinoma with unknown primary. The mean age was 57 years (range: 39-75 years) and the sex ratio was 8.6. Sixty-seven patients had a single cervical adenopathy (17 N1, 30 N2a, 5 N3, 15 Nx), and 20 patients multiple cervical adenopathies (17 N2b, 3 N2c). The treatments included always an irradiation to the node areas (50 Gy), and to the pharyngolarynx in case of normal tonsil (50 Gy), or to the tonsil if it was the primary (50 Gy with a brachytherapy boost of 20-25 Gy). In this last case, the larynx could be protected. Results: Tonsillectomy never induced specific complication. Out of 87 patients, 26% had a tonsil primary. There was not specific histological differentiation in this group. In the 67 patients with a single cervical adenopathy, 31% had a tonsil primary (6 N1, 7 N2, 1 N3, 7 Nx). It was a subdigastric adenopathy in 38%, a submandibular in 28% and a midjugulocarotidian in 23%. Among the 17 patients N2b, none had a tonsil primary. In the three patients N2c, two presented a tonsil carcinoma (two subdigastric nodes). Conclusion: Tonsillectomy allows avoiding irradiation of normal larynx in 26% of patients who have a cervical lymph node with unknown primary. It should be performed in case of a single node of the subdigastric, midjugulocarotidian or submandibular area or bilateral subdigastric adenopathies. abstract_id: PUBMED:36597158 Clinical characteristics and outcomes of cervical lymph node metastasis from unknown primary sites: a single institution's 14-year experience. Background: Cervical lymph node metastasis from unknown primary sites is a challenging clinical issue with a changing therapy model and unpredictable outcomes, which leads to the difficulty in selecting optimal treatments. Thus, it is valuable to analyze the clinical characteristics and outcomes of the patients who receive different management styles. Methods: All patients with cervical lymph node metastasis from unknown primary sites were reviewed and no primary lesions were found. In addition, this work was funded by the Clinical Trial Fund Project of Tianjin Medical University Cancer Institute and Hospital (No. C1716). Specifically, we used univariate, multiple regression analysis to evaluate the factors associated with prognosis. Results: 365 patients met the inclusion criteria, and the 2- and 5-year survival rates were 77.0% and 33.4%, respectively, with a median survival of 45 months. Gender, age, pathological type, nodal status, and necessary cervical lymph node dissection affected locoregional control. Distant metastasis was common in individuals with a pathological type of adenocarcinoma, poor differentiation, and advanced nodal status. Furthermore, patients who received induction chemotherapy had a better prognosis than those treated with postoperative chemotherapy. 
Multiple regression analysis showed that pathological grade, treatment models, and distant metastasis were associated with overall survival (OS) and progression-free survival (PFS). In addition, local recurrence exerted a significant influence on OS. Induction chemotherapy and postsurgical radiotherapy seemed to improve the prognosis of patients at the advanced stage compared with simple surgery and postsurgical chemotherapy. Conclusions: Pathological grade, treatment models, and distant metastasis were independent risk factors for prognosis. Induction chemotherapy or postoperative radiotherapy benefited patients at the advanced stage, and patients with adenocarcinoma, poor differentiation, and advanced nodal status should undergo induction chemotherapy in light of the increased risk of distant metastasis. abstract_id: PUBMED:35967951 Radiomics analysis for differentiating of cervical lymphadenopathy between cancer of unknown primary and malignant lymphoma on unenhanced computed tomography. To investigate the usefulness of texture analysis to discriminate between cervical lymph node (LN) metastasis from cancer of unknown primary (CUP) and cervical LN involvement of malignant lymphoma (ML) on unenhanced computed tomography (CT). Cervical LN metastases in 17 patients with CUP and cervical LN involvement in 17 patients with ML were assessed by 18F-FDG PET/CT. The texture features were obtained in the total cross-sectional area (CSA) of the targeted LN, following the contour of the largest cervical LN on unenhanced CT. Values for the max standardized uptake value (SUVmax) and the mean SUV value (SUVmean), and 34 texture features were compared using a Mann-Whitney U test. The diagnostic accuracy and area under the curve (AUC) of the combination of the texture features were evaluated by support vector machine (SVM) with nested cross-validation. The SUVmax and SUVmean did not differ significantly between cervical LN metastases from CUP and cervical LN involvement from ML. However, significant differences of 9 texture features of the total CSA were observed (p = 0.001 - 0.05). The best AUC value of 0.851 for the texture feature of the total CSA were obtained from the correlation in the gray-level co-occurrence matrix features. SVM had the best AUC and diagnostic accuracy of 0.930 and 84.8%. Radiomics analysis appears to be useful for differentiating cervical LN metastasis from CUP and cervical LN involvement of ML on unenhanced CT. abstract_id: PUBMED:28413396 Mammary Analogue Secretory Carcinoma Presenting as a Cervical Lymph Node Metastasis of Unknown Primary Site: A Case Report. Background: Mammary analogue secretory carcinoma (MASC) is a pathological entity arising in the salivary glands first described by Skalova et al. [Am J Surg Pathol 2010;34: 599-608]. Here, we report the first case of MASC presenting as a cervical lymph node metastasis of unknown primary site together with a brief review of the literature. Case Report: We present a 74-year-old male with a painless lump in his left neck. Based on the fine-needle aspiration cytological findings, a possible malignant tumor was suspected. No evidence of a primary lesion was observed using imaging modalities including positron emission tomography/computed tomography. The patient underwent an ipsilateral modified radical neck dissection. Immunohistochemical staining showed that the neoplastic cells were positive for S100 protein and GATA3. 
A rearrangement of the ETV6 gene was noted during fluorescence in situ hybridization, and the final histopathological diagnosis was MASC. Conclusion: We encountered a MASC presenting as a cervical lymph node metastasis of unknown primary site. No adjuvant therapy was administered, and no local recurrence or metastatic disease has been detected during a follow-up period of 9 months. This is the first case report of MASC presenting as a cervical lymph node metastasis of unknown primary site and suggests new properties of MASC. abstract_id: PUBMED:31914284 The scheme of diagnosis and treatment on metastatic cervical carcinomas from an unknown primary. Summary: Metastatic cervical carcinomas from unknown primary (MCCUP) are a group of metastatic carcinomas of the cervical lymph nodes for which the primary site cannot be identified. The diagnosis of MCCUP includes pathological diagnosis of the metastatic cervical carcinomas, a search for the primary cancer, and staging as (T₀)NM. Because the primary is unknown, there are some uncertainties in the treatment of MCCUP. The methods of diagnosis and treatment of MCCUP should be updated as clinical medicine develops. abstract_id: PUBMED:30197359 Concurrent chemoradiotherapy with or without induction chemotherapy for the management of cervical lymph node metastasis from unknown primary tumor. Background: To determine the impact of concurrent chemoradiotherapy (CCRT) with or without induction chemotherapy over radiation alone for the treatment of patients with cervical lymph node metastasis from an unknown primary tumor. Materials And Methods: A total of 96 patients with squamous cell histology of cervical lymph node metastases from unknown primary tumor were retrospectively reviewed. Twelve patients were treated with radiation alone (RT alone), 45 with CCRT, and 39 with induction chemotherapy followed by CCRT (ICCRT). Results: The overall response rate (ORR) obtained in the RT alone arm was 75%, whereas for CCRT, it was 86.7%. About 15.4% had a complete response (CR) after induction chemotherapy, with a 92.3% ORR after the completion of CCRT. At 2 years, the disease free survival (DFS) was 50% in RT alone, 73% with CCRT, and 62% with ICCRT, while the corresponding rates at 5 years were 50% for RT alone, 45% with CCRT, and 54% in the ICCRT arm. Conclusion: Though we obtained a high ORR with the concurrent and induction chemotherapy arms compared with RT alone, there is no clear advantage in the DFS. The nonrandomized nature of this trial and the potential for selection bias should definitely be taken into account while drawing comparisons from this trial. abstract_id: PUBMED:9376159 Malignant cervical adenopathies of unknown origin. A retrospective study of malignant cervical lymph node enlargement of unknown origin was made from 1985-1995. Of 26 cases observed, the primary tumor was found in 8 cases after a complete ENT study, in 3 the primary tumor was found in the course of the disease, and in 15 the origin remained unknown. Patients were treated with a combination of surgery, chemotherapy and radiation therapy. Only 2 patients survive, one with 6.5 years of follow-up and the other with 20 months. The literature was reviewed for new diagnostic and therapeutic methods for cervical lymph node enlargement of unknown origin. abstract_id: PUBMED:28486947 Diagnostic and treatment modalities for patients with cervical lymph node metastases of unknown primary site - current status and challenges.
Background And Purpose: This review aims to provide a comprehensive overview of the literature and elucidate open questions for future clinical trials concerning diagnostics and treatment modalities for cervical cancer of unknown primary (CUP). Methods: A literature search for head and neck CUP was performed with focus on diagnostics and therapies as well as molecular markers. Results: High level evidence on CUP is limited. However, it seems that a consensus exists regarding the optimal diagnostic procedures. The correct implementation of biomarkers for patient stratification and treatment remains unclear. An even greater dispute surrounds the ideal treatment, with publications ranging from sole surgery to surgery with postoperative bilateral radiotherapy with inclusion of the mucosa and concomitant chemotherapy. Conclusions: Cervical CUP represents a very heterogeneous malignant disease. On this account, many aspects concerning treatment optimization remain unclear, despite a considerable number of publications in the past. Future research in the form of prospective randomized trials is needed in order to better define patient stratification criteria and enable tailored treatment. abstract_id: PUBMED:10756760 Cervical lymph node metastases of unknown origin. In this paper we present a three-year prospective study (1991-1993) of cervical lymph node metastases of unknown origin. A total of 370 patients with lymphatic carcinomatous metastases in the neck were admitted to the "Sf. Spiridon" Hospital ENT Clinic. No primary tumor was located in 33 patients (8.9%). The diagnostic strategy for cervical metastases of an unknown primary was reviewed. The probability of locating the primary tumor depends on the diagnostic phase. abstract_id: PUBMED:8010575 Lateral cervical metastases from unknown primary. The authors report their experience with cervical metastases of unknown primary neoplasms. Epidemiology, histological type and topography are presented. The primaries that are responsible, with a high incidence, for such cervical localizations are described. The diagnostic workup, non-invasive/low-cost clinical and instrumental procedures, and advanced endoscopic and radiologic procedures are discussed. 64 patients were treated. Only in one case could we diagnose the primary site of the neoplasm with non-invasive procedures. The other cases (63 patients) underwent excision-biopsy of the cervical nodes. In 52 cases we were able to define the primary site of the neoplasm. The 11 patients in whom the primary remained unknown underwent radiotherapy of the cervical nodes and the possible primary "foci"; on average, these patients did not survive beyond six months because of diffuse dissemination of the neoplastic disease. Answer: The necessity of tonsillectomy in cases of cervical lymph node metastasis from an unknown primary is debated. According to the study by PUBMED:9308930, tonsillectomy as part of the workup for cervical nodal metastasis of an epidermoid carcinoma with unknown primary was performed on 87 patients. The results showed that 26% of these patients had a tonsil primary, and the procedure allowed for the avoidance of irradiation of the normal larynx in these cases.
The study concluded that tonsillectomy should be performed in cases of a single node of the subdigastric, midjugulocarotidian, or submandibular area or bilateral subdigastric adenopathies, as it can reveal the primary site in a significant proportion of patients and potentially alter the treatment plan to avoid unnecessary irradiation of the larynx. However, the necessity and outcomes of tonsillectomy are not discussed in the other abstracts provided. These abstracts focus on different aspects of managing cervical lymph node metastasis from unknown primary sites, such as the impact of different treatment models on prognosis (PUBMED:36597158), the use of radiomics analysis for differentiation between cancer of unknown primary and malignant lymphoma (PUBMED:35967951), the presentation of a specific case of Mammary Analogue Secretory Carcinoma (PUBMED:28413396), general diagnostic and treatment schemes for metastatic cervical carcinomas from unknown primary (PUBMED:31914284), the impact of concurrent chemoradiotherapy with or without induction chemotherapy (PUBMED:30197359), and reviews of diagnostic and treatment modalities for patients with cervical lymph node metastases of unknown primary site (PUBMED:28486947, PUBMED:10756760, PUBMED:8010575). In summary, the study by PUBMED:9308930 supports the use of tonsillectomy in specific cases of cervical lymph node metastasis from an unknown primary to potentially identify the primary site and tailor the treatment approach. However, the necessity of tonsillectomy may depend on individual patient factors and the diagnostic and treatment protocols of the treating institution. Further research and consensus in the medical community could help clarify the role of tonsillectomy in this context.
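For readers interested in the classification step of the radiomics abstract above (PUBMED:35967951), the following is a minimal sketch of a support vector machine evaluated with nested cross-validation, the general procedure that study describes. The feature matrix, labels, and hyperparameter grid are synthetic placeholders, not the texture features or settings used in that work.

```python
# Minimal sketch of SVM classification with nested cross-validation.
# Data are synthetic stand-ins (e.g., 34 lymph nodes x 9 texture features).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
X = rng.normal(size=(34, 9))        # hypothetical texture feature matrix
y = np.repeat([0, 1], 17)           # 0 = CUP metastasis, 1 = lymphoma (labels invented)

pipe = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
param_grid = {"svc__C": [0.1, 1, 10], "svc__gamma": ["scale", 0.01, 0.1]}

inner_cv = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)
outer_cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

# The inner loop tunes C and gamma; the outer loop estimates generalization AUC.
clf = GridSearchCV(pipe, param_grid, cv=inner_cv, scoring="roc_auc")
auc_scores = cross_val_score(clf, X, y, cv=outer_cv, scoring="roc_auc")
print(f"Nested-CV AUC: {auc_scores.mean():.3f} +/- {auc_scores.std():.3f}")
```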
Instruction: Can Classic Bladder Exstrophy be Safely and Successfully Reconstructed at a Low Volume Center? Abstracts: abstract_id: PUBMED:26144337 Can Classic Bladder Exstrophy be Safely and Successfully Reconstructed at a Low Volume Center? Purpose: Exstrophy reconstruction is challenging and requires expertise and experience. However, many patients are treated at low volume centers. We evaluated whether classic bladder exstrophy could be safely and successfully reconstructed at a low volume center. Materials And Methods: A total of 31 patients with classic bladder exstrophy were primarily treated at our low volume center during a 17-year period. A total of 22 patients underwent primary closure within 5 days of birth and 9 underwent delayed closure with osteotomy. Of the patients 29 underwent planned modern staged repair and 2 underwent attempted complete primary repair. Results: The bladder was successfully closed in all 31 children. All 22 newborns underwent primary bladder closure without osteotomy, including 4 with extremely small bladder plates. Bladder neck obstruction developed in 3 patients (10%), of whom 2 were treated successfully with transurethral dilation and 1 underwent open repair. Epispadias repair was successful in 12 of 15 patients undergoing the Cantwell-Ransley technique and in 2 of 4 undergoing complete primary repair. A total of 16 patients underwent bladder neck reconstruction, of whom 9 are awaiting appropriate age or capacity, 4 were lost to followup, 1 is continent after bladder closure alone and 2 underwent continent diversion (1 after failed bladder neck reconstruction). Of the 15 patients with at least 1 year of followup after bladder neck reconstruction 9 are continent day and night, 2 are continent only during the daytime and 4 are incontinent, for a 73% post-bladder neck reconstruction continence rate (11 of 15 patients). Conclusions: Successful exstrophy reconstruction is achievable at a low volume center, with results comparable to those of high volume centers. abstract_id: PUBMED:29627154 How to close classic bladder exstrophy: Are subspecialty training and technique important? Introduction: Successful primary closure is one of the main factors for achieving continence in a classic bladder exstrophy (CBE) patient. Even with contemporary management, patients still have failed primary closures. We sought to understand the role of training, surgical technique, and their impacts on outcomes of CBE closure. Materials And Methods: A retrospective cohort study from the largest single-institution database of primary and re-closure CBE patients in the world was performed. Failed closure was defined as developing bladder outlet obstruction, wound dehiscence, bladder prolapse, or any need for a re-closure operation. Patient demographics and surgical factors were abstracted and analyzed. Multivariable analysis was performed to test for associations with successful exstrophy closure. Results: Data from 722 patients were analyzed. On bivariate analysis, successful closure was associated with gestational age at presentation, time of closure, location of closure, credential of surgeon performing the closure, closure type, concomitant osteotomy, and type of immobilization. 
Multivariable analysis, adjusting for patient comorbidity and location of closure, demonstrated increased odds of failure for closure by pediatric surgeon compared with pediatric urologist (OR 4.32, 95% CI 1.98-9.43; p = 0.0002), closure by unknown credentialed surgeon (OR 1.86, 95% CI 1.15-2.99; p = 0.011), Complete Primary Repair of Exstrophy (CPRE) closure compared with Modern Staged Repair of Exstrophy (OR 2.05, 95% CI 1.29-2.99; p = 0.0024), and unknown closure type (OR 4.81, 95% CI 2.94-7.86; p < 0.0001) (Table). Discussion: Many factors associated with failure on bivariate analysis can be explained by these patients presenting to a center of excellence or the selection bias of this cohort stemming from a single center database that have been previously published. However, the finding on adjusted multivariable logistic regression analysis that closure by a pediatric surgeon is associated with higher odds of failure is novel. The additional finding that CPRE closure is associated with failure is most likely secondary to these patients being referred to our institution after having been closed with CPRE which falsely increases its impact on closure failure. Nevertheless, as a center with a large exstrophy volume, this study draws from a cohort that is larger than any other. Conclusion: Classic bladder exstrophy closure should be performed at a center with pediatric urologists to ensure the best chance of a successful primary closure. abstract_id: PUBMED:37451916 Long term outcomes in classic bladder exstrophy - The adult picture. With continued improvements in medical care and surgical reconstruction, more patients with classic bladder exstrophy (CBE) are living into adulthood, than ever before. With improved survival, a greater emphasis on adult issues and improving quality of life (QOL) for these individuals is of increasing importance. This review aims to summarize data on long-term considerations for the adult with exstrophy and to highlight areas of future research and collaboration. Key conclusions are that continence or dryness are achievable alongside the ability to enjoy sexual relationships and a good quality of life. abstract_id: PUBMED:29914824 Predictors and outcomes of perioperative blood transfusions in classic bladder exstrophy repair: A single institution study. Background: Primary bladder closure of classic bladder exstrophy (CBE) is a major operation that occasionally requires intraoperative or postoperative (within 72 h) blood transfusions. Objective: This study reported perioperative transfusion rates, risk factors for transfusion, and outcomes from a high-volume exstrophy center in primary bladder closure of CBE patients. Study Design: A prospectively maintained, institutional exstrophy-epispadias complex database of 1305 patients was reviewed for primary CBE closures performed at the authors' institution (Johns Hopkins Hospital) between 1993 and 2017. Patient and surgical factors were analyzed to determine transfusion rates, risk factors for transfusions, and outcomes. Patients were subdivided into two groups based upon the time of closure: neonatal and delayed closure. Results: A total of 116 patients had a primary bladder closure during 1993-2017. Seventy-three patients were closed in the neonatal period, and 43 were delayed closures. In total, 64 (55%) patients received perioperative transfusions. No transfusion reactions were observed. Twenty-five transfusions were in the neonatal closure group, yielding a transfusion rate of 34%. 
In comparison, 39 patients were transfused in the delayed closure group, giving a transfusion rate of 91%. Pelvic osteotomy, delayed bladder closure, higher estimated blood loss (EBL), larger pubic diastasis, and longer operative time were all associated with blood transfusion. In multivariable logistic regression, pelvic osteotomy (OR 5.4; 95% CI 1.3-22.8; P < 0.001), higher EBL-to-weight ratio (OR 1.3; 95% CI 1.1-1.6; P = 0.029), and more recent years of primary closure (OR 1.1; 95% CI 1.0-1.2; P = 0.018) remained independent predictors of receiving a transfusion (Summary Table). No adverse transfusion reactions or complications were observed. Discussion: This was the first study from a single high-volume exstrophy center to explore factors that contribute to perioperative blood transfusions. Pelvic osteotomy as a risk factor was unsurprising, as the osteotomy may bleed both during and immediately after closure. However, it is important to use osteotomy for successful closure, despite the increased transfusion risk. The risks accompanying contemporary transfusions are minimal and osteotomies are imperative for successful bladder closure. Conclusions: More than half of CBE patients undergoing primary closure at a single institution received perioperative blood transfusions. While there was an association between transfusions and osteotomy, delayed primary closure, larger diastasis, increased operative time, and increased length of stay, only the use of pelvic osteotomy, higher EBL-to-weight ratio, and recent year of closure independently increased the odds of receiving a transfusion on multivariate analysis. abstract_id: PUBMED:29386761 IschioPubic Osteotomy, A Simple and Effective Technique for Pelvic Ring Closure in Repair of Classic Exstrophy Bladder. Aim: The aim of this study is to report the technique and outcome of ischiopubic (IP) osteotomy for pelvic ring closure in classic exstrophy bladder. Methods: A total of 85 male classic exstrophy patients were selected based on a trapezoid-shaped space between the IP rami on three-dimensional computed tomography of the pelvis. Using a midline scroto-perineal approach, after bladder plate mobilization and radical corporal detachment, the pelvic surface of the superior pubic ramus (SPR) was exposed. Above the obturator canal, an H-shaped incision was made on the periosteum of the SPR. The horizontal line of the H was placed above the obturator canal. Rectangular periosteal flaps above and below the horizontal line were raised. In the subperiosteal plane, curved hemostats encircled the SPR, which was then divided using a bone drill. On the medial aspect of the ischial tuberosity, a notch was created as a hinge using a bone drill. The forks of bone-holding forceps hooked the pubic bone and were tightened to achieve midline approximation, lengthening of the SPR, and inward rotation of the IP ramus. The linea alba and pubic bones were approximated with interrupted polyglactin sutures. Results: Midline approximation of the pubic bone and linea alba was possible in all patients. There was no injury to the obturator nerve, vessels, or other structures. Conclusions: IP osteotomy is a safe and effective technique for pelvic ring closure in patients with this specific pelvic configuration. abstract_id: PUBMED:28794909 Role of the LF-SINE-Derived Distal ISL1 Enhancer in Patients with Classic Bladder Exstrophy. A genome-wide association study and meta-analysis identified ISL1 as the first genome-wide significant susceptibility gene for classic bladder exstrophy (CBE).
A short interspersed repetitive element (SINE), first detected in lobe-finned fishes (LF-SINE), was shown to drive Isl1 expression in embryonic mouse genital eminence. Hence, we assumed this enhancer a conclusive target for mutations associated with CBE formation and analyzed a cohort of 200 CBE patients. Although we identified two enhancer variants in five CBE patients, their clinical significance seems unlikely, implying that sequence variants in the ISL1 LF-SINE enhancer are not frequently associated with CBE. abstract_id: PUBMED:38245430 Perioperative management of primary classic bladder exstrophy: A single institutional pathway to success. Purpose: Appropriate perioperative management is crucial in patients undergoing classic bladder exstrophy closure (CBE). Therefore, the authors sought to review their intra and postoperative management of patients with CBE undergoing primary closure and examine the impact of this pathway on patient outcomes. Method: A prospectively maintained institutional approved exstrophy-epispadias complex database was reviewed for patients with CBE who had undergone primary closure between 2016 and 2022 and whose closure was performed within one year of age. Electronic medical records for eligible patients were retrospectively reviewed to examine patient demographics, use of pelvic osteotomy, immobilization status, pediatric intensive care unit (PICU) admission and management, perioperative analgesia and sedation, nutritional support, drainage tubes, blood transfusions, antibiotic coverage, hospital length of stay, postoperative complications, and closure failure. Results: A total of 25 patients were identified, 22 with CBE and 3 with variant CBE. Closure was performed at a median age of 84 days with patients ranging in age from 9 to 351 days. All patients underwent osteotomy and immobilization with modified Buck's traction and external fixation for a median duration of 41 days. A suprapubic tube was placed in all patients for a median duration of 46.5 days. All patients underwent PICU admission following closure for a median duration of 8 days. Ventilator support was required in 68 % of patients for a median of 3 days. Epidural analgesia was used in all patients and catheters were maintained for a median duration of 19 days. All patients received a blood transfusion over the course of their admission. Patient-controlled analgesia was used in most patients as an adjunct for a median duration of 38.5 days. Other commonly used analgesic adjuncts included acetaminophen, diazepam, clonidine, and dexmedetomidine. TPN was used in 80 % of patients for a median of 7 days with a return of oral feeding thereafter. Overall, the closure success rate in this cohort of patients was 100 %. Discussion: The outcome of primary bladder closure can have inauspicious consequences that can affect a child's continence for years. The incidence of failed bladder closure can be minimized with the implementation of a detailed plan for immobilization, analgesia, and nutrition guided by an experienced multi-disciplinary team. Conclusion: We have identified several guiding principles for perioperative success in exstrophy patients at our center including Buck's traction with external fixation, provision of adequate postoperative analgesia and sedation, aggressive nutritional support, renal and bladder drainage, and robust antibacterial support. Our high success rate in managing this complex pathology demonstrates its validity and use as a pathway to success. 
abstract_id: PUBMED:37244839 Bladder capacity and growth in classic bladder exstrophy: A novel predictive tool. Introduction: Bladder capacity (BC) is an important metric in the management of patients with classic bladder exstrophy (CBE). BC is frequently used to determine eligibility for surgical continence procedures, such as bladder neck reconstruction (BNR), and is associated with the likelihood of achieving urinary continence. Objective: To use readily available parameters to develop a nomogram that could be used by patients and pediatric urologists to predict BC in patients with CBE. Study Design: An institutional database of CBE patients was reviewed for those who have undergone annual gravity cystogram 6 months after bladder closure. Candidate clinical predictors were used to model BC. Linear mixed effects models with random intercept and slope were used to construct models predicting log transformed BC and were compared with adjusted R2, Akaike Information Criterion (AIC), and cross-validated mean square error (MSE). Final model evaluated via K-fold cross-validation. Analyses were performed using R version 3.5.3 and the prediction tool was developed with ShinyR. Results: In total, 369 patients (107 female, 262 male) with CBE had at least one BC measurement after bladder closure. Patients had a median of 3 annual measurements (range 1-10). The final nomogram includes outcome of primary closure, sex, log-transformed age at successful closure, time from successful closure, and interaction between outcome of primary closure and log-transformed age at successful closure as the fixed effects with random effect for patient and random slope for time since successful closure (Extended Summary). Discussion: Using readily accessible patient and disease related information, the bladder capacity nomogram in this study provides a more accurate prediction of bladder capacity ahead of continence procedures when compared to the age-based Koff equation estimates. A multi-center study using this web-based CBE bladder growth nomogram (https://exstrophybladdergrowth.shinyapps.io/be_app/) will be needed for widespread application. Conclusion: Bladder capacity in those with CBE, while known to be influenced by a broad swath of intrinsic and extrinsic factors, may be modeled by the sex, outcome of primary bladder closure, age at successful bladder closure and age at evaluation. abstract_id: PUBMED:32981861 Volumetric and acetabular changes in the bony pelvis associated with primary closure of classic bladder exstrophy. Introduction: Repair of classic bladder exstrophy (CBE) is known to alter dimensions of the bony pelvic ring. Pelvic volume and acetabular configuration are additional metrics which merit analysis in the reconstruction process. Advances in magnetic resonance imaging (MRI) allow for precise elucidation of such anatomy in pediatric patients, providing enhanced knowledge of how primary reconstruction may impact factors in pelvic health. Methods: An IRB-approved exstrophy-epispadias database of 1337 patients was reviewed for patients with CBE who had pelvic MRI performed before and after repair. Pelvic MRIs were analyzed by a pediatric radiologist, and three-dimensional volumetric renderings of the true pelvis were calculated. Pre- and post-closure imaging were compared, in addition to imaging from age-matched controls without pelvic pathology. Cartilaginous acetabular index and version angles were also calculated and compared between groups. 
Results: Eighteen patients with post-closure imaging, 14 of whom also had pre-closure imaging, and 23 control patients (ages 0-365 days) were included. The median ages at pre- and post-closure scan were 2 and 178 days, respectively. Osteotomy was performed in 6 (33%) of the closures. The median segmented pelvic volumes were 89 cm3 in the pre-closure group, 105 cm3 in the post-closure group, and 72 cm3 in the control group. At a given age, patients with CBE pre-closure had the largest pelvic volume, and those without bladder exstrophy demonstrated the smallest pelvic volumes (Summary Figure). CBE patients' pelvic volumes were overall lower following repair, relative to age (p = 0.007). Cartilaginous acetabular version angle increased following closure, with acetabular orientation converting from retroversion to anteversion in all cases. However, only acetabular version angles were significantly different between groups. Discussion: This study found that pelvic volume significantly decreased relative to age following primary repair of CBE, but that it did not correct to control levels. Similarly, acetabular retroversion that is naturally seen in exstrophy patients was converted to anteversion post-closure but did not become completely normal. Knowledge of these persistent anatomical anomalies may be useful in treatment of future concerns related to exstrophy, such as continence achievement, pelvic organ prolapse, and potential gait disturbances. Lack of significant differences between study groups regarding acetabular index angles can give assurance to providers that hip dysplasia is neither a natural concern nor iatrogenically inflicted through reconstruction. Conclusions: The primary closure of exstrophy results in pelvic volumes and anteverted acetabula that more closely resemble those without bladder exstrophy, compared to pre-closure findings. abstract_id: PUBMED:31286274 Anatomy of Classic Bladder Exstrophy: MRI Findings and Surgical Correlation. Purpose Of Review: The exstrophy-epispadias complex (EEC) represents a group of congenitally acquired malformations involving the musculoskeletal, gastrointestinal, and genitourinary systems. Classic bladder exstrophy (CBE) is the most common and best studied entity within the EEC. In this review, imaging features of CBE anatomy will be presented with surgical correlation. Recent Findings: Magnetic resonance imaging (MRI) has emerged as a useful modality for pre- and postnatal assessment of the abdominal wall, pelvic floor, and gastrointestinal and genitourinary systems of children with CBE. The authors' experience supports use of preoperative MRI, in conjunction with navigational software, as a method for identifying complex CBE anatomy. Imaging facilitates surgical approach and improves visualization of complex anatomy, potentially helping to avoid complications. Continued investigation of imaging guidance in CBE repair is needed as surgical techniques improve. Answer: According to the study reported in abstract PUBMED:26144337, classic bladder exstrophy can be safely and successfully reconstructed at a low volume center. The study evaluated 31 patients with classic bladder exstrophy who were primarily treated at a low volume center over a 17-year period. The results showed that the bladder was successfully closed in all 31 children, with a 73% post-bladder neck reconstruction continence rate (11 of 15 patients) for those with at least 1 year of followup after bladder neck reconstruction. 
These findings suggest that successful exstrophy reconstruction is achievable at a low volume center, with results comparable to those of high volume centers.
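Several of the exstrophy abstracts above report adjusted odds ratios with 95% confidence intervals from multivariable logistic regression (for example, OR 4.32, 95% CI 1.98-9.43 for closure by a pediatric surgeon, and OR 5.4, 95% CI 1.3-22.8 for pelvic osteotomy as a transfusion predictor). As a purely illustrative aside, the short Python sketch below shows how such adjusted odds ratios and intervals are typically derived from a fitted logistic model; it uses the statsmodels library on entirely synthetic data, and the variable names are assumptions for demonstration, not the authors' actual datasets.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic, hypothetical closure records (illustration only).
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "pediatric_surgeon": rng.integers(0, 2, n),  # 1 = closure by pediatric surgeon
    "cpre_closure": rng.integers(0, 2, n),       # 1 = CPRE closure
})
true_logit = -2.0 + 1.4 * df["pediatric_surgeon"] + 0.7 * df["cpre_closure"]
df["failure"] = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

# Multivariable logistic regression; exponentiated coefficients are adjusted ORs.
fit = smf.logit("failure ~ pediatric_surgeon + cpre_closure", data=df).fit(disp=False)
odds_ratios = np.exp(fit.params)
conf_int = np.exp(fit.conf_int())  # 95% confidence intervals on the odds-ratio scale
print(pd.concat([odds_ratios.rename("OR"), conf_int.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))

The point of the sketch is only to make the reported quantities concrete: each odds ratio is the exponentiated regression coefficient for one predictor, adjusted for the other covariates in the model.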
Instruction: Does systematic lymphadenectomy have a benefit on survival of suboptimally debulked patients with stage III ovarian carcinoma? Abstracts: abstract_id: PUBMED:12407725 Does systematic lymphadenectomy have a benefit on survival of suboptimally debulked patients with stage III ovarian carcinoma? A DEGOG* Study. Background And Objectives: The aim of this study was to investigate whether systematic lymphadenectomy is necessary in suboptimally cytoreduced patients with stage III ovarian carcinoma. Methods: Prognostic significance and the effect on survival of systematic pelvic and para-aortic lymphadenectomy were investigated retrospectively in 61 suboptimally debulked patients with stage III ovarian carcinoma. All patients received platinum-based chemotherapy after surgery; 51 patients had been followed for ≥1 year, or until death. Survival curves were calculated according to the Kaplan-Meier method and were evaluated by log-rank test. Results: Most patients had stage IIIC disease (60.7%), poorly differentiated tumor (45.9%), and serous histological type (59%). Systematic pelvic and para-aortic lymphadenectomy was performed in 29 patients (47.5%). Lymph node metastases were found in 17 (58.6%) patients; the median number of metastatic nodes was 7 (5-10). Lymph node metastasis was significantly higher in patients with residual disease of >2 cm (P < 0.05). Both univariate and multivariate analyses showed that systematic pelvic and para-aortic lymphadenectomy was not a significant prognostic factor (P > 0.05). In lymph node-dissected patients, survival was significantly longer in patients with minimal residual tumor than in those with residual tumor size >2 cm (P = 0.005). Conclusions: Lymphadenectomy seems not to have an evident prognostic value and a benefit on survival in suboptimally debulked patients with stage III ovarian carcinoma. abstract_id: PUBMED:35435097 Survival Advantage of Lymphadenectomy in Patients with Ovarian Cancer. We investigated the survival effect of lymphadenectomy in ovarian cancer. The five-year progression-free and overall survival in early-stage ovarian cancer were not affected. Preliminary, unadjusted analysis in advanced ovarian cancer suggested an improvement in survival. However, after adjusting for other factors, e.g. ECOG performance status and patients' age, this survival advantage vanished. Our analysis suggests that systematic pelvic and para-aortic lymphadenectomy was not associated with an improvement of the progression-free and overall survival of patients with optimally debulked ovarian cancer. abstract_id: PUBMED:21058437 The impact of systematic para-aortic and pelvic lymphadenectomy on survival in patients with optimally debulked ovarian cancer. Aim: The objective of this study was to verify the impact of systematic retroperitoneal lymphadenectomy on survival in patients with ovarian cancer. Material & Methods: During 2001–2005, clinical records of 118 patients with epithelial ovarian cancer were collected in Tokushima prefecture. From a number of hospitals, patients in one group were treated without systematic lymphadenectomy, and in another group, patients were treated with routine systematic lymphadenectomy. Clinical records were reviewed retrospectively and progression-free survival (PFS) and overall survival (OS) were compared. Results: Sixty-two patients were staged as I–II according to the macroscopic findings at surgery. Forty of these patients received systematic lymphadenectomy and 22 patients did not.
The 5-year OS was 100 and 80%, respectively (P = 0.07). The 5-year PFS was 94 and 71%, respectively (P = 0.04). In patients with clear cell adenocarcinoma, 3-year OS and PFS were significantly better in the lymphadenectomy group (P = 0.01, P = 0.046, respectively). The 56 patients staged as III–IV according to the macroscopic findings at surgery were optimally debulked. Twenty-eight of these patients received systematic lymphadenectomy and 28 patients did not. There was no difference in the 5-year OS (65 and 66%, respectively; P = 0.71) or the 5-year PFS (30 and 52%, respectively; P = 0.48). Conclusion: This study has demonstrated that systematic lymphadenectomy had a benefit only in patients with ovarian cancer macroscopically confined to the pelvis. In patients with clear cell adenocarcinoma, systematic lymphadenectomy was beneficial. In contrast, systematic lymphadenectomy had no benefit on OS or PFS in patients with advanced ovarian cancer if optimally debulked. abstract_id: PUBMED:18593951 A gene signature predicting for survival in suboptimally debulked patients with ovarian cancer. Despite the existence of morphologically indistinguishable disease, patients with advanced ovarian tumors display a broad range of survival end points. We hypothesize that gene expression profiling can identify a prognostic signature accounting for these distinct clinical outcomes. To resolve survival-associated loci, gene expression profiling was completed for an extensive set of 185 (90 optimal/95 suboptimal) primary ovarian tumors using the Affymetrix human U133A microarray. Cox regression analysis identified probe sets associated with survival in optimally and suboptimally debulked tumor sets at a P value of <0.01. Leave-one-out cross-validation was applied to each tumor cohort and confirmed by a permutation test. External validation was conducted by applying the gene signature to a publicly available array database of expression profiles of advanced stage suboptimally debulked tumors. The prognostic signature successfully classified the tumors according to survival for suboptimally (P = 0.0179) but not optimally debulked (P = 0.144) patients. The suboptimal gene signature was validated using the independent set of tumors (odds ratio, 8.75; P = 0.0146). To elucidate signaling events amenable to therapeutic intervention in suboptimally debulked patients, pathway analysis was completed for the top 57 survival-associated probe sets. For suboptimally debulked patients, confirmation of the predictive gene signature supports the existence of a clinically relevant predictor, as well as the possibility of novel therapeutic opportunities. Ultimately, the prognostic classifier defined for suboptimally debulked tumors may aid in the classification and enhancement of patient outcome for this high-risk population. abstract_id: PUBMED:32660444 The relationship between retroperitoneal lymphadenectomy and survival in advanced ovarian cancer patients. Background: Systematic retroperitoneal lymphadenectomy has been widely used in the surgical treatment of advanced ovarian cancer patients. Nevertheless, this procedure may not provide a survival benefit. The aim of this study was to assess the effect of systematic retroperitoneal lymphadenectomy in such patients.
Methods: Patients with advanced ovarian cancer (stage III-IV, according to the classification presented by the International Federation of Gynecology and Obstetrics) who were admitted and treated in Zhejiang Cancer Hospital from January 2004 to December 2013 were enrolled and reviewed retrospectively. All patients were optimally or suboptimally debulked (absent or residual tumor < 1 cm) and divided into two groups. Group A (no-lymphadenectomy group, n = 170): patients did not undergo systematic lymph node resection; lymph node resection or biopsy was selective. Group B (n = 240): patients underwent systematic retroperitoneal lymphadenectomy. Results: A total of 410 eligible patients were enrolled in the study. The patients' median age was 51 years old (range, 28-72 years old). The 5-year overall survival (OS) and 2-year progression-free survival (PFS) rates were 78 and 24% in the no-lymphadenectomy group and 76 and 26% in the lymphadenectomy group (P = 0.385 and 0.214, respectively). Subsequently, there was no significant difference in 5-year OS and 2-year PFS between the two groups when stratified by histological type (serous or non-serous), clinical evaluation of negative lymph nodes, or macroscopic peritoneal metastasis beyond the pelvis (IIIB-IV). Multivariate Cox regression analysis indicated that systematic retroperitoneal lymphadenectomy was not a significant factor influencing the patients' survival. Patients in the lymphadenectomy group had a higher incidence of postoperative complications (incidence of infection treated with antibiotics was 21.7% vs. 12.9% [P = 0.027]; incidence of lymph cysts was 20.8% vs. 2.4% [P < 0.001]). Conclusions: Our study showed that systematic retroperitoneal lymphadenectomy did not significantly improve survival of advanced ovarian cancer patients with residual tumor < 1 cm or absent after cytoreductive surgery, and was associated with a higher incidence of postoperative complications. abstract_id: PUBMED:32117740 Prognostic Significance of Systematic Lymphadenectomy in Patients With Optimally Debulked Advanced Ovarian Cancer: A Meta-Analysis. Background: The effect of systematic lymphadenectomy (SL) on survival in patients with optimally debulked advanced ovarian cancer remains unclear. We evaluated the therapeutic value of SL in advanced ovarian cancer patients who underwent primary optimal debulking surgery. Methods: A meta-analysis was carried out using articles retrieved from the PubMed, Embase, and Cochrane databases. Overall survival (OS) and progression-free survival (PFS) were compared between patients who underwent SL and those who underwent unsystematic lymphadenectomy (USL). Results: Seven studies that included 2,425 patients with advanced ovarian cancer were included in the meta-analysis. The overall analyses indicated significantly improved OS [hazard ratio (HR) = 0.64, 95% confidence interval (CI): 0.49-0.84, P < 0.01] but not PFS (HR = 0.89, 95% CI: 0.69-1.15, P = 0.38) in patients who underwent SL compared to those who underwent USL. Subgroup analyses based on study type, study quality, total numbers of patients, and International Federation of Gynecology and Obstetrics (FIGO) stage provided similar results. However, subgroup analysis of patients with no residual tumor revealed that SL was not associated with improved OS (HR = 0.81, 95% CI: 0.66-1.00, P = 0.05) or PFS (HR = 1.09, 95% CI: 0.91-1.30, P = 0.33). Conclusions: In patients with optimally debulked advanced ovarian cancer, SL may improve OS but not PFS.
However, SL does not provide a survival advantage when macroscopically complete resection of all visible tumors is achieved. abstract_id: PUBMED:35785208 Systematic Pelvic and Para-Aortic Lymphadenectomy During Fertility-Sparing Surgery in Patients With Early-Stage Epithelial Ovarian Cancer: A Retrospective Study. Objective: The implication of pelvic and para-aortic lymphadenectomy in surgical staging of early-stage epithelial ovarian cancer (eEOC) is still debated. Limited data are available about systematic lymphadenectomy during fertility-sparing surgery (FSS) in patients with eEOC. Methods: The medical records of 38 patients with FIGO stage I EOC and aged below 40 years who underwent FSS at our hospital between January 2003 and December 2018 were retrospectively reviewed. Results: Among them, 18 patients (47.4%) underwent comprehensive lymphadenectomy, 11 patients (28.9%) underwent lymph node sampling, and 9 patients (23.7%) did not undergo lymphadenectomy. There was no statistically significant difference in age, histology, grade, surgical approach, chemotherapy, and gestation among the three groups. With a median follow-up of 52.5 months (range: 24-153), three patients (7.9%) with FIGO stage IC EOC developed tumor recurrence. In these patients, progression-free survival (PFS) was 92.1%, and overall survival (OS) was 94.7%. There was no significant difference in OS. Among all the patients, 15 (39.5%) had a gestation after treatment, and 23 (60.5%) did not. Conclusion: The number of lymph nodes removed did not significantly affect survival in eEOC patients treated with FSS. Systematic pelvic and para-aortic lymphadenectomy need not be performed for mucinous eEOC patients undergoing FSS if the diagnosis is confirmed on intraoperative frozen section and no suspicious lymph nodes are found. A better understanding of sentinel lymph node biopsy may help to identify whether the patient requires FSS. abstract_id: PUBMED:37149928 The role of systematic pelvic and para-aortic lymphadenectomy in the management of patients with advanced epithelial ovarian, tubal, and peritoneal cancer: A systematic review and meta-analysis. Objective: To investigate whether systematic pelvic and para-aortic lymphadenectomy offers superior survival rates in patients with advanced epithelial ovarian cancer (EOC), tubal, or peritoneal cancer. Methods: We searched the electronic databases PubMed (from 1996), Cochrane Central Register of Controlled trials (from 1996), and Scopus (from 2004) to September 2021. We considered randomised controlled trials (RCTs) comparing systematic pelvic and para-aortic lymphadenectomy with no lymphadenectomy in patients with advanced EOC. Primary outcomes were overall survival and progression-free survival. Secondary outcomes were peri-operative morbidity and operative mortality. The revised Cochrane tool for randomised trials (RoB 2 tool) was utilised for the risk of bias assessment in the included studies. We performed time-to-event and standard pairwise meta-analyses, as appropriate. Results: Two RCTs with a total of 1074 patients were included in our review. Meta-analysis demonstrated similar overall survival (HR = 1.03, 95% CI [0.85-1.24]; low certainty) and progression-free survival (HR = 0.92, 95% CI [0.63-1.35]; very low certainty).
Regarding peri-operative morbidity, systematic lymphadenectomy was associated with higher rates of lymphoedema and lymphocysts formation (RR = 7.31, 95% CI [1.89-28.20]; moderate certainty) and need for blood transfusion (RR = 1.17, 95% CI [1.06-1.29]; moderate certainty). No statistically significant differences were observed in regard to other peri-operative adverse events between the two arms. Conclusions: Systematic pelvic and para-aortic lymphadenectomy is likely associated with similar overall survival and progression-free survival compared to no lymphadenectomy in optimally debulked patients with advanced EOC. Systematic lymphadenectomy is also associated with an increased risk for certain peri-operative adverse events. Further research needs to be conducted on whether we should abandon systematic lymphadenectomy in completely debulked patients during primary debulking surgery. abstract_id: PUBMED:18284454 Combination chemotherapy of intraperitoneal carboplatin and intravenous paclitaxel in suboptimally debulked epithelial ovarian cancer. The objective of this study was to retrospectively assess the efficacy and safety of combination chemotherapy of intraperitoneal (IP) carboplatin and intravenous (IV) paclitaxel in suboptimally debulked ovarian cancer. Between March 1998 and March 2006, 44 patients with histologically confirmed epithelial ovarian carcinoma or peritoneal carcinoma with a residual mass greater than 1 cm received combination chemotherapy of IV paclitaxel and IP carboplatin. Administration of IV paclitaxel at 175 mg/m(2) immediately followed by IP carboplatin at an area under the curve of 6 was scheduled every 3 weeks for at least six cycles. The diagnosis and stage were ovarian carcinoma stage II in 8, III in 25, and IV in 6 cases, and peritoneal carcinoma stage III in 5 cases. Eighty-three percent of patients completed more than six cycles of chemotherapy. The incidences of grade 3/4 hematologic toxicities were 41 (93%) for neutrocytopenia, 10 (41%) for thrombocytopenia, and 18 (23%) for anemia. Observed grade 3/4 nonhematologic toxicities were 1 (2%) for allergy, 1 (2%) for fatigue, 1 (2%) for vomiting, 1 (2%) for liver dysfunction, and 4 (9%) for peripheral neuropathy. Two patients (5%) encountered catheter problems (one obstruction and one infection). Overall response rate was 80% (16 complete response, 19 partial response, 3 stable disease, and 6 progressive disease). Median progression-free survival and overall survival were 24 and 31 months, respectively. Combination chemotherapy of IP carboplatin and IV paclitaxel is effective and safe in suboptimally debulked ovarian cancer, and further evaluation is warranted. abstract_id: PUBMED:34814044 Prognostic value of systematic lymphadenectomy in patients with ovarian cancer: A systematic review and meta-analysis. Objective: To appraise clinical outcomes of systematic lymphadenectomy in women with ovarian cancer based on stage, control group and type of chemotherapy. Study Design: A literature search was conducted on SCOPUS, PUBMED, COCHRANE, MEDLINE, and WEB OF SCIENCE databases. All comparative studies that assess outcomes of systematic lymphadenectomy in patients with ovarian cancer were eligible. Overall survival was analyzed by pooling log hazard ratio (HR) and standard error of multivariable Cox regression models. MOGGE Meta-analysis Matrix is a novel illustration tool that was used to demonstrate multiple subgroup analyses of included studies. Results: Twenty-two studies were eligible. 
Systematic lymphadenectomy was associated with better overall survival, which was close to significance, compared with the control group (HR 0.93, 95% CI 0.86-1.00). Among women treated with adjuvant chemotherapy, overall survival improved in women with stage IIB-IV who underwent systematic lymphadenectomy (HR 0.91, 95% CI 0.84-0.99) and was most significant among patients with stage III to IV (HR 0.85, 95% CI 0.73-0.99). Systematic lymphadenectomy did not improve survival in women who received neoadjuvant chemotherapy (HR 0.97, 95% CI 0.73-1.29). Systematic lymphadenectomy was associated with improved progression-free survival compared with the control group (HR 0.88, 95% CI 0.79-0.99). Conclusion: Although data from clinical trials do not support a role for systematic lymphadenectomy in advanced ovarian cancer, the overall data convey a stage-specific survival benefit. Further clinical trials may be warranted to assess substage survival outcomes in women with advanced stages. Answer: The evidence regarding the benefit of systematic lymphadenectomy on survival in suboptimally debulked patients with stage III ovarian carcinoma is mixed. Some studies suggest that systematic lymphadenectomy does not have a clear prognostic value or benefit on survival in these patients. For instance, a DEGOG study found that systematic pelvic and para-aortic lymphadenectomy was not a significant prognostic factor for survival in suboptimally debulked patients with stage III ovarian carcinoma (PUBMED:12407725). Similarly, another study indicated that systematic retroperitoneal lymphadenectomy did not significantly improve survival in advanced ovarian cancer patients with residual tumor < 1 cm or absent after cytoreductive surgery and was associated with a higher incidence of postoperative complications (PUBMED:32660444). On the other hand, a meta-analysis suggested that systematic lymphadenectomy may improve overall survival but not progression-free survival in patients with optimally debulked advanced ovarian cancer. However, it also noted that systematic lymphadenectomy does not provide a survival advantage when macroscopically complete resection of all visible tumors is achieved (PUBMED:32117740). Another study found that systematic lymphadenectomy had a benefit only in patients with ovarian cancer macroscopically confined to the pelvis, particularly in those with clear cell adenocarcinoma, but not in patients with advanced ovarian cancer if optimally debulked (PUBMED:21058437). Furthermore, a systematic review and meta-analysis concluded that systematic pelvic and para-aortic lymphadenectomy is likely associated with similar overall survival and progression-free survival compared to no lymphadenectomy in optimally debulked patients with advanced epithelial ovarian cancer, while also being associated with an increased risk for certain peri-operative adverse events (PUBMED:37149928). In summary, the current evidence does not consistently support a survival benefit of systematic lymphadenectomy in suboptimally debulked patients with stage III ovarian carcinoma. The decision to perform lymphadenectomy should be individualized, taking into account the potential risks and benefits for each patient.
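The meta-analyses cited above (PUBMED:32117740, PUBMED:34814044) pool study-level results by combining log hazard ratios weighted by the inverse of their variances. For readers unfamiliar with that step, the minimal Python sketch below illustrates a fixed-effect inverse-variance pooling of log HRs; the three input hazard ratios and confidence intervals are hypothetical values chosen for demonstration, not the published study data.

import numpy as np

# Hypothetical study-level hazard ratios with 95% confidence intervals.
hr = np.array([0.70, 0.95, 0.85])
ci_low = np.array([0.50, 0.75, 0.65])
ci_high = np.array([0.98, 1.20, 1.11])

# Work on the log scale; a 95% CI spans about 2 * 1.96 standard errors.
log_hr = np.log(hr)
se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)

# Fixed-effect (inverse-variance) pooling.
w = 1.0 / se**2
pooled_log_hr = np.sum(w * log_hr) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))

pooled_hr = np.exp(pooled_log_hr)
lower = np.exp(pooled_log_hr - 1.96 * pooled_se)
upper = np.exp(pooled_log_hr + 1.96 * pooled_se)
print(f"Pooled HR {pooled_hr:.2f} (95% CI {lower:.2f}-{upper:.2f})")

A random-effects model would additionally add a between-study variance term to each weight, which is one reason pooled estimates can differ between meta-analyses of the same trials.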
Instruction: Do physical therapists change their beliefs, attitudes, knowledge, skills and behaviour after a biopsychosocially orientated university course? Abstracts: abstract_id: PUBMED:19674225 Do physical therapists change their beliefs, attitudes, knowledge, skills and behaviour after a biopsychosocially orientated university course? Aim: The aim of this study is to examine the effects of an 8-day university-based training course, aimed at identifying and addressing psychosocial prognostic factors during physiotherapy treatment, in shifting therapists towards a more biopsychosocial orientation as measured by changes in beliefs/attitudes, knowledge, skills and behaviour. Method: We combined a randomized controlled trial with a pre-post design. Forty-two physiotherapists applied for a university-accredited training course designed to enhance knowledge and management of psychosocial factors in their practice with patients suffering from musculoskeletal pain. The course participants were randomized either to receiving the course or to a waiting list for training. Attitudes and beliefs towards, and knowledge of psychosocial factors, patient vignettes and a video of an imaginary patient were tested before and after training. The patients of the course participants were asked to fill out a questionnaire with background questions at treatment start. The patients also received a questionnaire about the physical therapists' behaviour and patient satisfaction 6 weeks after treatment start. Results: The results show that physical therapists' attitudes and beliefs became more biopsychosocially and less biomedically orientated, they were less convinced that pain justifies disability and limitation of activities, and their knowledge and skills on psychosocial risk factors increased after a university-accredited training course. Yet despite these changes their patients perceived their practice behaviour before and after the course as similar and were equally satisfied with their treatment and treatment result. Conclusion: A course that enhanced biopsychosocial attitudes and beliefs and increased such knowledge and skills did not change the way patients perceived their physical therapists. A future question is whether it improves patient outcome. abstract_id: PUBMED:26019383 Evidence-based practice: beliefs, attitudes, knowledge, and skills among Colombian physical therapists. Objective: The main purpose of this study was to describe a group of Colombian physical therapists' beliefs and attitudes towards Evidence-Based Practice (EBP), their education, knowledge and skills for implementing EBP, the use of relevant literature in clinical practice, access to and availability of scientific information and perceived barriers to including EBP in practice. Methods: This was a cross-sectional study which involved 1,064 Colombian physical therapists. The study used a 50-item screening questionnaire developed to estimate attitudes, beliefs, knowledge and skills regarding EBP. This instrument has been adapted and was validated previously in Colombia by Flórez-López et al. Results: The population mostly consisted of young females (77.2%) aged 22 to 29 years old (79.4%). Most respondents had an undergraduate degree (87.7%).
The physical therapists stated that they had positive attitudes and beliefs regarding EBP, most of them answering that they agreed or strongly agreed that EBP is necessary (71.6%), the relevant literature is useful for practice (61.3%), EBP improves the quality of patient care (64.1%) and evidence helps in decision-making (44.5%). Forty-one percent of the respondents indicated that a lack of research skills was the most important barrier to the use of evidence in practice. Conclusion: The physical therapists reported that they had a positive attitude to EBP and were interested in learning about or improving the skills necessary to adopt EBP in their clinical practice. abstract_id: PUBMED:35131592 Factors associated with the low back pain-related attitudes and beliefs of physical therapists. Background: Clinicians who believe in a strong connection between pain and disability, and who have biomedically oriented beliefs recommend less physical activity, more time off work, and promote unhelpful beliefs in patients. Understanding how these beliefs develop may assist in identifying ways to reduce unhelpful beliefs in clinicians. Objective: To identify factors that are associated with the LBP-related attitudes and beliefs of US-based physical therapists. Design: Cross-sectional design. Method: An electronic survey was sent to US-based physical therapists. Attitudes and beliefs were measured using the Health Care Providers' Pain and Impairment Relationship Scale (HC-PAIRS) and the Pain Attitudes and Beliefs Scale for Physiotherapists (PABS-PT), pain science knowledge was measured using the Neurophysiology of Pain Questionnaire (NPQ), and professional and demographic information were collected. Results: Completed surveys were obtained from 420 physical therapists. More helpful attitudes and beliefs were associated with board certification in orthopedics or sports, higher NPQ scores, and working in a hospital-based clinic. Less helpful attitudes and beliefs were associated with working in private practice and a personal history of LBP that had a minimum or significant impact on life. The factors included in this study explained 16.8% of the variability in HC-PAIRS scores (p < .001), 13% of the variability in PABS-BM scores (p < .001), and 8.3% of the variability in PABS-BPS scores (p < .001). Conclusion: Several modifiable and non-modifiable factors are associated with the LBP-related attitudes and beliefs of US physical therapists. abstract_id: PUBMED:34298493 The impact of physical therapist attitudes and beliefs on the outcomes of patients with low back pain. Background: Low back pain (LBP) is a common problem that places a major burden on individuals and society. It has been proposed that patients treated by biomedically oriented clinicians will have worse outcomes than those treated by biopsychosocially oriented clinicians. Objective: To investigate the impact of physical therapist LBP related attitudes and beliefs on the outcomes of patients with LBP. Design: Retrospective cohort design. Method: United States based physical therapists utilizing the Focus on Therapeutic Outcomes, Inc. (FOTO) database were surveyed using the Health Care Providers' Pain and Impairment Relationship Scale (HC-PAIRS) and the Pain Attitudes and Beliefs Scale for Physiotherapists (PABS-PT). Outcomes were measured using average Computerized Lumbar Functional Scale change scores (CLFS) and CLFS residual scores.
Results: Analysis was performed on outcome data from 78 physical therapists and included 2345 patients. HC-PAIRS was a univariate predictor of average CLFS change scores and average CLFS residual scores. The PABS-PT Biomedical subscale was a univariate predictor of average CLFS change scores. After adjusting for confounding variables, higher HC-PAIRS scores and PABS-BM scores were associated with a greater change in average CLFS score, and higher PABS-BM scores were associated with higher CLFS residual scores. Conclusion: Physical therapists who believed in a stronger relationship between pain and disability had improved patient outcomes. abstract_id: PUBMED:34657870 Comparison of attitudes and beliefs of physical therapists and primary care physicians regarding low back pain management: A cross-sectional study. Background: The first-line contact for patients seeking care for low back pain (LBP) can potentially change the disease course. The beliefs and attitudes of healthcare providers (HCPs) can influence LBP management. Although referring patients with LBP to physical therapy is common, the first-line contact for patients with LBP in Saudi Arabia is the primary care physician (PCP). Physical therapy will soon be integrated into primary care; therefore, it is rational to compare physical therapists' (PTs) beliefs and attitudes regarding LBP with those of PCPs. Objective: We compared PCPs' and PTs' attitudes and beliefs regarding LBP management. Methods: We employed a cross-sectional, voluntary response sample research design using the Pain Attitudes and Beliefs Scale (PABS). Participants were PTs and PCPs practicing in Saudi Arabia. Results: In total, 153 participants completed the PABS (111 PTs and 52 PCPs). PCPs demonstrated significantly higher PABS biomedical subscale scores than did the PTs. Conclusions: HCPs in Saudi Arabia should receive additional training to adopt a biopsychosocial approach to managing LBP. In this study, the HCPs' treatment recommendations may not correspond with contemporary clinical guidelines. Research to facilitate the implementation of optimal professional education and training to adopt a biopsychosocial approach is an urgent priority. abstract_id: PUBMED:26637654 Concussion Attitudes and Beliefs, Knowledge, and Clinical Practice: Survey of Physical Therapists. Background: A concussion is considered a mild traumatic brain injury that may cause physical, cognitive, affective, and sleep dysfunction. Physical therapists have been identified as health care providers involved in the multidisciplinary care of a patient with concussion. Objective: The purpose of this study was to describe the current attitudes and beliefs, knowledge, and practice of physical therapists in the treatment of patients with concussion. Methods: A 55-question electronic survey divided into 6 sections-(1) demographics, (2) current practice in concussion, (3) youth concussion legislation, (4) attitudes and beliefs toward concussion management, (5) concussion knowledge, and (6) clinical decision making-was developed and distributed online through selected American Physical Therapy Association sections. Results: A total of 1,272 physical therapists completed the survey. Seventy percent of the respondents (n=894) reported having concussion training. Although supportive of the role of the physical therapist in the treatment of a person with concussion, the respondents demonstrated less confidence when making return-to-play decisions.
Respondents correctly answered, on average, 13 (out of 15) concussion knowledge questions, with gaps exhibited in understanding the clinical utilization of concussion severity scales, the conservative treatment of youth who sustain a concussion, and anticipated normal computed tomography and magnetic resonance imaging after a concussion. When provided with clinical scenarios, respondents were able to recognize when a referral to a physician was indicated; however, they demonstrated variability in identifying a need for vestibular or manual physical therapy. Limitations: Convenience sampling was utilized, limiting generalizability of the results of the study to the physical therapy profession as a whole. Conclusion: Physical therapists demonstrated a solid foundation of concussion knowledge, but gaps still existed. Future professional development opportunities should be developed to target identified gaps in knowledge and current practice patterns. abstract_id: PUBMED:31237840 Evidence-Based Physical Therapy Practice in the State of Kuwait: A Survey of Attitudes, Beliefs, Knowledge, Skills, and Barriers. Background: Evidence-based practice (EBP) is necessary to improve the practice of physical therapy. However, a lack of knowledge and skills among physical therapists and the presence of barriers may hinder the implementation of EBP in the State of Kuwait. Objective: The objectives of this study were to extensively (1) investigate attitudes toward EBP, (2) assess the current level of knowledge and skills necessary for EBP, and (3) identify the barriers to EBP among physical therapists in the State of Kuwait. Methods: The following methods were used: (1) a previously validated self-reported questionnaire and (2) a face-to-face semistructured interview. The questionnaire, which was distributed to 200 physical therapists, examined the attitudes and beliefs of physical therapists about EBP; the interest in and motivation to engage in EBP; educational background, knowledge, and skills related to accessing and interpreting information; the level of attention to and use of the literature; access to and availability of information to promote EBP; and the perceived barriers to using EBP. The interview explored the factors that promote or discourage EBP. Descriptive statistics and logistic regression analyses were used. Results: Of the 200 nonrandomly distributed questionnaires, 92% (184/200) were completed and returned. In general, the physical therapists had positive attitudes, beliefs, and interests in EBP. Their educational background, knowledge, and skills related to assessing and interpreting information were well-founded. The top 3 barriers included insufficient time (59.2%, 109/184), lack of information resources (49.4%, 91/184), and inapplicability of the research findings to the patient population (40.7%, 75/184). Conclusions: EBP lacks support from superiors at work. Thus, identifying methods and strategies to support physical therapists in adopting EBP in the State of Kuwait is necessary. abstract_id: PUBMED:33395477 Survey of Physical Therapists' Attitudes, Knowledge, and Behaviors Regarding Diagnostic Imaging. Objective: The purpose of this study was to describe physical therapists' attitudes, knowledge, and behaviors regarding the use of diagnostic imaging. Methods: Physical therapists in the United States were recruited from July 2018 through May 2019 to complete a web-based, cross-sectional survey. 
Participants were asked about demographics, their perceived knowledge base and skills for recommending or ordering different imaging modalities, and their behaviors regarding diagnostic imaging. Descriptive statistics were used to characterize the participants' demographics and responses to all questions. Chi-square tests were performed to compare responses by characteristics of survey participants and Wilcoxon signed-rank tests to compare levels of agreement. Results: The mean age was 43 years, and of the 739 respondents, 58% (n = 417) were female. Ninety-two percent of respondents (n = 595) reported having recommended diagnostic imaging to another provider at least once. Only 11.6% (n = 75) reported having ever directly ordered diagnostic imaging. Participants' attitudes about their knowledge base and skills for recommending or ordering plain radiographs were stronger than for any other imaging modality, followed by magnetic resonance imaging (MRI) over other imaging techniques. Participants' attitudes on recommending plain radiographs or MRI differed by professional education level, board-certification status, fellowship completion, and years since graduation. Conclusion: It is common for physical therapists to recommend imaging, but few have directly ordered imaging. Most survey participants believed that they have an adequate knowledge base and skills for recommending and ordering plain radiographs and MRI. Impact: These results can serve as a benchmark for future comparison as policies and education evolve. Understanding physical therapists' attitudes, knowledge, and use of diagnostic imaging is important to inform research, policy, and education. abstract_id: PUBMED:33517867 Pain knowledge, attitudes and beliefs of doctor of physical therapy students: changes across the curriculum and the role of an elective pain science course. Introduction: Entry-level physical therapist education on pain has been described as lacking. Calls have been made to include pain science courses to address this knowledge gap. Methods: Physical therapist students' pain knowledge and attitudes were measured using the revised Neurophysiology of Pain Questionnaire (rNPQ) and Pain Attitudes and Beliefs Scale for Physical Therapists (PABS-PT), respectively. Univariate ANOVAs, with post hoc pairwise comparison and effect sizes, were used to measure these aspects over time. Results: Pain knowledge and clinician beliefs were significantly different (p < 0.001) at various curricular timepoints. rNPQ scores increased from 1st to 2nd year (effect size: 1.10), remained similar between years 2 and 3, and improved following the pain course (effect size: 1.25). Biomedical beliefs were similar during years 1, 2 and 3, and declined following the pain course (effect size: 1.56). Conversely, psychosocial belief scores increased from 1st to 2nd year (effect size: 0.82), remained similar between years 2 and 3, and increased following the pain course (effect size: 1.08). Discussion/Conclusions: Physical therapist education, without a dedicated pain science course, may be insufficiently preparing students to treat patients in pain. Educators should consider adopting a dedicated pain science course or substantially bolstering embedded curricular pain content to promote best practice in pain treatment. abstract_id: PUBMED:33543837 Beliefs, attitudes and knowledge of cardiovascular healthcare providers on mobilization.
Aim: To assess the beliefs, attitudes and knowledge of nurses, physicians and physiotherapists in a cardiovascular intensive care unit (CICU) on patient mobilization. Design: Survey of CV healthcare providers in the CICU at two academic tertiary care hospitals. Methods: The validated Patient Mobilization Attitudes and Beliefs Survey was distributed to CV providers. The survey is a 26-item self-administered questionnaire that assesses providers' perceived barriers in three domains: attitude, behaviour and knowledge. Results: Participants (N = 142) completed the survey (nurses, N = 67, physicians, N = 59 and physiotherapists, N = 16; 155 eligible participants, 91.6% overall completion rate). Nurses had lower overall knowledge, attitude and behaviour barriers to mobilization than physicians, but higher than physiotherapists (all p < .001). The greatest barriers to mobilization reported by nurses related to staffing adequacy, patient-level factors and time constraints. These findings should inform efforts to overcome existing barriers and to transform acute cardiovascular mobility culture. Answer: Yes, physical therapists do change their beliefs, attitudes, knowledge, and skills after a biopsychosocially oriented university course. A study examining the effects of an 8-day university-based training course designed to enhance knowledge and management of psychosocial factors in physiotherapy treatment found that physical therapists' attitudes and beliefs became more biopsychosocially and less biomedically oriented. They were less convinced that pain justifies disability and limitation of activities, and their knowledge and skills on psychosocial risk factors increased after the course (PUBMED:19674225). However, despite these changes, the patients of the course participants perceived their practice behavior before and after the course as similar and were equally satisfied with their treatment and treatment result, indicating that while the course impacted the therapists' perspectives and understanding, it did not translate into a change in practice behavior as perceived by patients (PUBMED:19674225). This finding suggests that while educational interventions can be effective in shifting therapists' theoretical orientation and self-reported knowledge and skills, additional strategies may be needed to translate these changes into observable changes in clinical behavior that are noticeable to patients. It also raises the question of whether these changes in attitudes and beliefs ultimately improve patient outcomes, which was not addressed in the study (PUBMED:19674225).
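Several of the studies above summarize pre/post changes in questionnaire scores as effect sizes (for example, the rNPQ and PABS-PT comparisons across the physical therapy curriculum, with effect sizes around 1.1-1.6). As a small illustration of how such an effect size can be computed, the Python sketch below calculates Cohen's d from two score samples; the numbers are hypothetical and are not taken from any of the cited studies.

import numpy as np

def cohens_d(group_a, group_b):
    # Cohen's d using the pooled standard deviation of two independent samples.
    a = np.asarray(group_a, dtype=float)
    b = np.asarray(group_b, dtype=float)
    pooled_var = ((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1)) / (len(a) + len(b) - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# Hypothetical pain-knowledge scores before and after a dedicated pain science course.
pre_course = np.array([8, 9, 7, 10, 8, 9, 7, 8])
post_course = np.array([12, 13, 11, 14, 12, 13, 12, 11])
print(f"Cohen's d: {cohens_d(post_course, pre_course):.2f}")

An effect size near 1 or larger, as reported in the curriculum study, corresponds to group means separated by roughly one pooled standard deviation or more.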
Instruction: Cholecystectomy-induced gastroesophageal reflux: is it reduced by the laparoscopic approach? Abstracts: abstract_id: PUBMED:9260805 Cholecystectomy-induced gastroesophageal reflux: is it reduced by the laparoscopic approach? Objective: The incidence of gastroesophageal reflux and esophagitis increases significantly following cholecystectomy. Laparoscopic technique minimizes the metabolic and mechanical disturbances of cholecystectomy, but there has been no study of its effects on gastroesophageal reflux. The aim of this study was to examine the effect of laparoscopic cholecystectomy on the indices of gastroesophageal reflux. Methods: A prospective study was carried out on patients scheduled for cholecystectomy. Informed consent was obtained. Preoperative 24-h pH studies and manometry were performed. Studies were repeated 3 months after surgery. Pathological acid reflux was defined in terms of the DeMeester acid score. Results: Pre- and postoperative studies were carried out on 28 patients who had open cholecystectomy and on 22 patients who had laparoscopic cholecystectomy. The mean (SEM) DeMeester acid score increased from 14.8 (2.4) to 34.0 (6.7) following open cholecystectomy (p = 0.006) and from 13.9 (2.5) to 28.9 (4.3) following laparoscopic cholecystectomy (p = 0.002). Decrease in mean lower esophageal sphincter pressure in both groups was not significant. Conclusions: A significant increase in the incidence of gastroesophageal reflux occurred within 3 months after surgery in both study groups. Despite the less invasive nature of laparoscopic cholecystectomy, the type of surgery did not influence the degree of esophageal dysfunction. abstract_id: PUBMED:9322720 The effect of laparoscopic cholecystectomy on gastroduodenal reflux. Background: The present study evaluates the effect of laparoscopic cholecystectomy on biliary reflux in patients with cholelithiasis. Methods: All patients over 18 years of age awaiting elective cholecystectomy for gallstone disease in one of two teaching hospitals were contacted by telephone, and those who gave informed consent were entered in the study. A total of 66 patients (43 females and 23 males) underwent milk 99mTc DIDA scans. Elective laparoscopic cholecystectomy was performed after a median of 28.5 days (range: 8-588 days) and patients were re-investigated with a milk 99mTc DIDA scan at a median time of 50 days (range: 18-370 days) postoperatively. Scans were carried out in the Nuclear Medicine Department of the Royal Adelaide Hospital. Results: All but two patients had a functioning gall-bladder on milk 99mTc N-2, 6-dimethylphenyl-carbamoylmethyl iminodiacetic acid scanning prior to cholecystectomy. One of these patients was found to have a gall-bladder carcinoma at cholecystectomy. Fifty-seven of the 66 patients had a successful laparoscopic cholecystectomy (nine open cholecystectomies). Nineteen patients experienced gastroduodenal reflux into the stomach prior to cholecystectomy and 23 patients experienced it postoperatively. There was no significant difference in gastroduodenal reflux between the open and laparoscopic groups. Conclusion: The present study demonstrated that laparoscopic cholecystectomy did not significantly alter gastroduodenal reflux. abstract_id: PUBMED:31492434 Robotic compared with laparoscopic cholecystectomy: A propensity matched analysis. Background: As robotic surgery becomes more ubiquitous, determining clinical benefit is necessary to justify the cost and time investment required to become proficient.
We hypothesized that robotic cholecystectomy would be associated with improved clinical outcomes but also increased cost as compared with standard laparoscopic cholecystectomy. Materials And Methods: All patients undergoing robotic or laparoscopic cholecystectomy at a single academic hospital between 2007 and 2017 were identified using an institutional clinical data repository. Patients were stratified by operative approach (robotic versus laparoscopic) for comparison and propensity score matched 1:10 based on relevant comorbidities and demographics. Categorical variables were analyzed by the χ2 test and continuous variables using the Mann-Whitney U test. Results: A total of 3,255 patients underwent cholecystectomy during the study period. We observed no differences in demographics or body mass index, but greater rates of diabetes mellitus, hypertension, and gastroesophageal reflux disease were present in the laparoscopic group. After matching (n = 106 robotic, n = 1,060 laparoscopic), there were no differences in preoperative comorbidities. Patients who underwent robotic cholecystectomy had lesser durations of stay (robotic: 0.1 ± 0.7 versus laparoscopic: 0.8 ± 1.9, P < .0001) and lesser 90-day readmission rates (robotic: 0% [0], laparoscopic: 4.1% [43], P = 0.035); however, both operative and hospital costs were greater compared with laparoscopic cholecystectomy. Conclusion: Robotic cholecystectomy is associated with lesser duration of stay and lesser readmission rate within 90 days of the index operation, but also greater operative duration and hospital cost compared with laparoscopic cholecystectomy. Hospitals and surgeons need to consider the improved clinical outcomes but also the monetary and time investment required before pursuing robotic cholecystectomy. abstract_id: PUBMED:18366890 Laparoscopic combined Nissen-Rossetti fundoplication and cholecystectomy: our experience Object: The aim of this study is to evaluate the appropriateness of combining laparoscopic Nissen-Rossetti fundoplication and cholecystectomy in patients with gallbladder cholelithiasis and gastro-oesophageal acid reflux (endoscopically and pH-metrically assessed), considering that gallbladder removal makes duodenal-gastric reflux worse. Patients And Methods: From 2005 until 2007 we combined laparoscopic Nissen-Rossetti fundoplication and cholecystectomy in 10 patients who had surgical indications for gallbladder cholelithiasis and gastro-oesophageal reflux. Clinical data, surgical procedures and post-operative complications were compared with our experience with each procedure performed alone (laparoscopic Nissen-Rossetti fundoplication and cholecystectomy). Results were evaluated at 3, 6 and 12 months after surgery by clinical and instrumental follow-up (24-h pH-metry and oesophago-gastro-duodenoscopy). Results: The analysis of instrumental data for the combined procedure does not show significant differences compared with the individual procedures. In all patients treated with the combined procedure, follow-up showed a normal pH-metric examination, good control of acid reflux by the fundoplication, and absence of distal oesophagitis and gastric reflux symptoms. Conclusions: The combination of laparoscopic Nissen-Rossetti fundoplication and laparoscopic cholecystectomy is indicated in patients who present with both pathologies and should be considered in view of the good results and the low postoperative morbidity.
abstract_id: PUBMED:18206466 Efficacy of laparoscopic cholecystectomy in the pediatric population. Purpose: Gallbladder disease is increasingly affecting the pediatric population. The advent of new technology in the 1980s, specifically, hepatobiliary scintigraphy and laparoscopic cholecystectomy, gave a dramatic rise in both the diagnosis and treatment of biliary disease in the pediatric population. The purpose of this study was to determine (a) whether laparoscopic cholecystectomy for biliary dyskinesia is efficacious in the treatment of children with biliary colic and (b) the ability of cholescintigraphy to predict which patients may benefit from an operative intervention. Methods: We performed a retrospective review of the records of all patients (N = 184) who underwent laparoscopic cholecystectomy, correlating postoperative results with degree of dyskinesia (percentage of ejection fraction), histopathology, associated gastrointestinal diagnoses, age, and sex. Biliary dyskinesia was defined by ultrasonography without evidence of cholelithiasis with clinical diagnosis of biliary colic. Results: Of the 184 patients who underwent laparoscopic cholecystectomy, 117 had a diagnosis of biliary dyskinesia and 108 were available for follow-up. Mean follow-up was 8.3 months. One hundred patients (92.6%) reported resolution or improvement of preoperative symptoms (64.8% reported complete resolution and 27.8% reported improvement in symptoms). The mean age of the patients was 14.1 years. No correlation was seen for degree of dyskinesia, histopathology, age, and sex. Patients with a preoperative diagnosis of gastroesophageal reflux were more likely to report resolution of symptoms, although this finding was not statistically significant. There was no major complication; 1 patient suffered a prolonged ileus, 1 patient suffered a wound infection, and 1 patient required incisional hernia repair. Conclusion: Laparoscopic cholecystectomy is safe, efficacious, and durable in children suffering from biliary dyskinesia. abstract_id: PUBMED:33068857 Huge gangrenous gallbladder presenting as gastro-esophageal reflux disease successfully treated by laparoscopic cholecystectomy: Case report and literature review. Introduction: Gallbladder disease is a common surgical pathology. Gallstones can remain asymptomatic or develop into an acute cholecystitis and need for surgical intervention. Significant enlargement of the gallbladder well beyond the normal volume is rare. Such "giant" gallbladders can affect feasibility of subsequent management options. Presentation Of Case: An 80-year-old female presented to the emergency department with a two-day history of acute on chronic gastric reflux with nausea and vomiting. On examination, she had right upper quadrant abdominal pain. CT imaging identified an enormous gallbladder creating mass effect and compression on the distal stomach. She underwent successful laparoscopic cholecystectomy and was discharged from the hospital the next day, doing well. On two-week follow up, her reflux symptoms had completely resolved and she had no complaints. Discussion: Giant gallbladders are a rare entity. Our patient's case is unique in both its occurrence as well as presentation with predominant reflux symptoms secondary to mass effect by the enlarged gallbladder. Current cholecystitis grading systems do not utilize size as a means of predicting severity and risk of operative complications or difficulty of procedure. 
Laparoscopic cholecystectomy was a successful approach in managing this extreme pathology. Conclusion: Updated classification systems that include size and mass effect as a predictive measure are needed to better assess surgical outcomes, especially in "giant" gallbladder disease. Despite the large size and potential mass effect on surrounding structures, laparoscopic cholecystectomy can still be attempted if no other contraindications exist. abstract_id: PUBMED:26349166 Evaluation of gastroesophageal reflux after laparoscopic cholecystectomy using combined impedance-pH monitoring. Background And Aim: Postoperative gastroesophageal reflux (GER) is one of the causes of post-cholecystectomy syndrome (PCS). Reports studying the effect of cholecystectomy on GER show conflicting results and only a few studies have used the more sensitive technique of combined impedance-pH monitoring. This study aimed to assess the effect of laparoscopic cholecystectomy on GER (acid/non-acid reflux) using impedance-pH monitoring. Methods: Sixty-three consecutive patients with symptomatic cholelithiasis were evaluated. All patients underwent esophageal manometry and 24-hour impedance-pH monitoring pre- and postoperatively. Frequency scale for the symptoms of GERD (FSSG) scoring was also done in each patient pre- and postoperatively. Results: Out of the sixty-three patients, four developed symptoms of reflux postoperatively as detected by FSSG scoring. However, no significant changes were observed in lower esophageal sphincter (LES) characteristics, acid and non-acid reflux characteristics, total number of reflux episodes, or in the physical character of the refluxate following laparoscopic cholecystectomy. A significant decrease in proximal acid reflux episodes was observed. Conclusion: The chemical characteristics (acid or non-acid reflux) as well as physical properties (liquid, gas or mixed) of reflux episodes remain unaffected following laparoscopic cholecystectomy. Cholecystectomy itself does not increase GER. abstract_id: PUBMED:33851610 SIMULTANEOUS HIATAL HERNIA PLASTICS WITH FUNDOPLICATION, LAPAROSCOPIC CHOLECYSTECTOMY AND UMBILICAL HERNIA REPAIR. The article presents a case report of patients with multimorbid pathology - hiatal hernia with gastroesophageal reflux disease, cholecystolithiasis and umbilical hernia. Simultaneous surgery was performed in all cases - laparoscopic hiatal hernia repair with fundoplication, laparoscopic cholecystectomy and umbilical hernia alloplasty (in three cases by the IPOM (intraperitoneal onlay mesh) method and in one by hybrid alloplasty - open access with laparoscopic imaging). After the operation there was an infiltrate of the trocar wound in one case and hyperthermia in another, both of which resolved with conservative treatment. The follow-up showed no hernia recurrences and no clinical manifestations of gastroesophageal reflux disease. abstract_id: PUBMED:14557124 Laparoscopic surgery: an excellent approach in elderly patients. Hypothesis: A review of the literature will show that laparoscopy is safe and effective for the treatment of surgical diseases in elderly patients. Data Sources: An electronic search using the PubMed and MEDLINE databases was performed using the term laparoscopy in elderly patients. Literature published in the English language in the past decade was reviewed. Pertinent references from articles and books not identified by the search engines were also retrieved. Relevant surgical textbooks were also reviewed.
Study Selection: All relevant studies that could be obtained regardless of the study design were included. Data Extraction: All studies that contained material applicable to the topic were considered. Data on patient characteristics and surgical outcomes were abstracted. Data Synthesis: Sixteen studies evaluated laparoscopic cholecystectomy in the elderly. Compared with open cholecystectomy, elderly patients undergoing the laparoscopic procedure had a lower incidence of complications and a shorter hospitalization. In the 4 studies reporting the results of laparoscopic antireflux surgery in the elderly, the morbidity, mortality, and length of hospital stay were similar to those of younger patients. The elderly had equally good postoperative symptom relief. Ten reports of laparoscopic colon resection in the elderly demonstrated earlier return of bowel function, shorter hospitalization, and less cardiopulmonary morbidity. Conclusions: Despite underlying comorbidities, individuals older than 65 years tolerate laparoscopic procedures extremely well. Complications and hospitalization are lower than in open procedures. Surgeons need to inform primary care physicians of the excellent results of laparoscopic procedures in the elderly to encourage earlier referrals. abstract_id: PUBMED:7831599 Continuous esophageal pH monitoring during laparoscopic cholecystectomy. Gastro-esophageal regurgitation (GER) and eventual aspiration are considered a major risk during general anesthesia. High intraperitoneal pressure produced during laparoscopic cholecystectomy (LC) is a possible source of increased GER. We investigated the incidence of GER using continuous esophageal pH monitoring in 14 patients undergoing elective LC. Only two brief episodes of acid reflux (pH < 4) occurred during LC. Apparently the high intraperitoneal pressure during LC carries no increased risk of regurgitation and aspiration. Answer: The incidence of gastroesophageal reflux and esophagitis is known to increase significantly following cholecystectomy. However, studies examining the impact of the laparoscopic approach on gastroesophageal reflux have yielded mixed results. One study aimed to examine the effect of laparoscopic cholecystectomy on the indices of gastroesophageal reflux. This prospective study involved preoperative and postoperative 24-hour pH studies and manometry on patients scheduled for cholecystectomy. The results showed that there was a significant increase in the DeMeester acid score following both open and laparoscopic cholecystectomy, indicating an increase in gastroesophageal reflux after surgery in both groups. The decrease in mean lower esophageal sphincter pressure in both groups was not significant, suggesting that the type of surgery did not influence the degree of esophageal dysfunction (PUBMED:9260805). Another study evaluated the effect of laparoscopic cholecystectomy on biliary reflux in patients with cholelithiasis. The study found no significant difference in gastroduodenal reflux between the open and laparoscopic groups postoperatively, indicating that laparoscopic cholecystectomy did not significantly alter gastroduodenal reflux (PUBMED:9322720). A study comparing robotic and laparoscopic cholecystectomy found that robotic cholecystectomy was associated with lesser duration of stay and lesser readmission rate within 90 days of the index operation, but also greater operative duration and hospital cost compared with laparoscopic cholecystectomy.
However, this study did not specifically address the issue of gastroesophageal reflux post-cholecystectomy (PUBMED:31492434). In summary, the available evidence suggests that laparoscopic cholecystectomy does not significantly reduce the incidence of gastroesophageal reflux when compared to the open approach. Both laparoscopic and open cholecystectomy have been associated with an increase in gastroesophageal reflux postoperatively (PUBMED:9260805; PUBMED:9322720).
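To make the pre/post comparison in PUBMED:9260805 concrete, the sketch below shows one way paired pre- and postoperative DeMeester scores could be compared. The abstract does not state which paired test was used, and the score values below are synthetic placeholders; only the reporting style (group means with SEM and a p-value) mirrors the study.

    # Hedged sketch: paired comparison of pre- vs postoperative reflux scores
    # (synthetic data; not the study's actual values or necessarily its test).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    pre = rng.gamma(shape=2.0, scale=7.0, size=22)        # hypothetical preoperative scores
    post = pre + rng.gamma(shape=2.0, scale=7.5, size=22)  # hypothetical postoperative scores

    t, p = stats.ttest_rel(post, pre)       # paired parametric comparison
    w, p_w = stats.wilcoxon(post, pre)      # non-parametric alternative

    sem = pre.std(ddof=1) / np.sqrt(len(pre))   # SEM, as reported alongside the means
    print(f"mean pre = {pre.mean():.1f} (SEM {sem:.1f}), mean post = {post.mean():.1f}, "
          f"paired t p = {p:.3f}, Wilcoxon p = {p_w:.3f}")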
Instruction: Can diabetes medication be reduced in elderly patients? Abstracts: abstract_id: PUBMED:36713972 Self-Efficacy as Moderator and Mediator Between Medication Beliefs and Adherence in Elderly Patients with Type 2 Diabetes. Purpose: Poor antidiabetic medication adherence remains a great barrier to effective diabetes self-management among aging adults. This study investigates the mediation and moderation effects of self-efficacy on the relationship between medication beliefs and adherence in elderly patients with type 2 diabetes. Methods: This cross-sectional study evaluated a sample of 309 hospitalized elderly patients who completed the assessment of medication beliefs, self-efficacy for medication use and medication adherence in a tertiary hospital in Shanghai, China. A bootstrapping sampling method and hierarchical moderator regression analysis were used to verify the hypothesis of mediation and moderation effects of self-efficacy on the relationship between medication beliefs and adherence. Results: Self-efficacy for medication use acted as a moderator (B = -0.063, t = -2.215, p = 0.028) and partial mediator (CI total effect = 4.5-16.63, p = 0.001; CI indirect = 1.524-5.323, p = 0.014; CI direct = 2.151-11.817, p = 0.001) on the relationship between general harm medication beliefs and medication adherence. Participants with lower general harm medication beliefs may develop higher self-efficacy, which, in turn, results in a higher level of medication adherence, and higher self-efficacy may attenuate the negative effect of high general harm medication beliefs on medication adherence. Conclusion: Self-efficacy for medication use not only mediated the relationship between general harm beliefs about medication and medication adherence, but moderated it negatively. The findings of this study indicate an opportunity to improve the prognosis of elderly Chinese patients with type 2 diabetes through improved medication adherence by strengthening factors such as self-efficacy for appropriate medication use and general harm beliefs about medication. abstract_id: PUBMED:35087782 Factors Associated With Medication Compliance in Elderly Patients With Type 2 Diabetes Mellitus: A Cross-Sectional Study. The average age of patients with type 2 diabetes in Japan is over 70 years. Elderly patients tend to have poor medication compliance; therefore, it is important to understand their individual situations to improve medication compliance, the treatment of their diabetes, and their quality of life (QOL). This study aimed to identify factors associated with medication compliance in elderly type 2 diabetic patients. A cross-sectional study based on questionnaires was conducted on type 2 diabetes patients aged 65 years or older. The participants were recruited from patients who visited three dispensing pharmacies in the Shinagawa area of Tokyo between March 1 and September 30, 2019. The questionnaire consisted of patient information (sex, age, medication compliance status, knowledge of drug effects, and side effects), the 12-Item Short Form Survey quality of life rating scale (SF-12), and the Diabetes Treatment Satisfaction Questionnaire (DTSQ). Factors related to medication compliance were then evaluated. In all, there were 47 respondents: 31 males and 16 females.
Four factors were found to be associated with medication compliance in elderly type 2 diabetic patients: medication storage (P = 0.01), knowledge of drug effects (P < 0.001), knowledge of side effects (P = 0.026), and physical functioning (PF), a subscale of the SF-12 (P = 0.045). Furthermore, the strength of the association between these four factors and medication compliance was calculated using Cramer's V coefficient of association. Knowledge of drug effects was the most strongly associated (knowledge of drug effects: V = 0.559; knowledge of side effects: V = 0.464; medication storage: V = 0.451; PF: V = 0.334). Because diabetes mellitus has no subjective symptoms and treatment effects are not felt to a great extent, it is difficult to motivate patients to consistently adhere to medication. When pharmacists provide medication guidance to elderly patients with type 2 diabetes mellitus, it is important to provide sufficient information to ensure they fully understand the drug effects to maintain medication compliance. abstract_id: PUBMED:26730557 Health Behaviors and Medication Adherence in Elderly Patients. Purpose: To explore the relationships of selected health behaviors to medication adherence. Design: A retrospective cohort study. Setting: Data from Korean national health insurance claims between January 2010 and June 2011. Subjects: Patients aged 65 years and older with hypertension (N = 662,170), hyperlipidemia (N = 244,702), or diabetes (N = 179,285). Measures: Medication adherence, expressed as a medication possession ratio from January to June 2011, was the dependent variable. The waist circumference (cm) and the body mass index (weight in kilograms divided by height in meters squared) served as markers for obesity. Smoking, drinking, and physical activity were the main independent variables. Analysis: A multivariate logistic regression. Results: Nonobese patients, based on waist circumference, were more likely to adhere to their medication (by 8.9% for hypertension, 6.2% for diabetes, and 3.5% for hyperlipidemia). Current smokers were less likely to adhere to their medication (by 8.7% for hypertension and 6.8% for diabetes), and moderate and heavy drinkers were also less likely to show medication adherence for diabetes (by 12.9% and 6.4%). Mild physical activity was related to a 1.1% to 1.8% increase in the likelihood of medication adherence across the three disease groups. Conclusion: Health promotion programs for self-care health behaviors of elderly patients should emphasize good medication adherence to achieve successful self-management of diseases. abstract_id: PUBMED:28495392 Elderly polypharmacy patients' needs and concerns regarding medication assessed using the structured patient-pharmacist consultation model. Objective: To evaluate elderly polypharmacy patients' needs and concerns regarding medication through the Structured Patient-Pharmacist Consultation (SPPC). Methods: Older patients on chronic treatment with ≥5 medications were asked to fill in the SPPC form at home. A consultation with the community pharmacist, structured according to the patient's answers, followed within 2-4 weeks. Logistic regression associated patients' individual treatment with care issues and consultation outcomes. Results: Out of 440 patients, 39.5% experienced problems, and 46.1% had concerns about medication use. 122 patients reported reasons for discontinuing treatment. The main outcome of the consultation was a better understanding of medication use (75.5%).
Side effects and/or non-adherence were identified in 50% of patients, and 26.6% were referred to the doctor. Atrial fibrillation, COPD, anticoagulants, benzodiazepines, and beta agonists/corticosteroids were associated with problems during medication use. Patients with diabetes improved their understanding of medication use significantly. Conclusion: Patients on benzodiazepines, anticoagulants, and beta agonists/corticosteroids, with atrial fibrillation and/or COPD, may have a higher potential for non-adherence. Counseling patients based on the SPPC model may be particularly useful for patients with diabetes. Practice Implications: The SPPC model is a useful tool for counseling based on patient needs. abstract_id: PUBMED:31096616 A Comprehensive Overview of Polypharmacy in Elderly Patients in Saudi Arabia. Background/Objectives: Saudi Arabia has a large percentage of geriatric patients with multiple chronic diseases who require close attention and monitoring of their medications. The purpose of this study was to develop a comprehensive picture of medication utilization in geriatric patients and of how to provide better health-care management. Methodology: A retrospective cross-sectional study targeting patients 65 years of age and older who were taking multiple chronic medications for different indications. Descriptive analysis and frequencies of the main variables were used as appropriate. Only qualified and professional candidates were chosen for data entry to ensure the quality and accuracy of the data. Results: A total of 3009 patient profiles were analyzed, with the patients' average age in years being 73.26 ± 6.6 (SD). It was found that 55% of the patients had polypharmacy. An average of 6.4 medications was prescribed for patients aged between 65 and 70 years, with a significant difference for patients aged 71 years and above, and a linear correlation between age and comorbid diseases was observed across all elderly patients. Hypertension, hyperlipidemia, and diabetes mellitus were the most common comorbid diseases in elderly patients aged 65 years and older. Conclusion: Polypharmacy in geriatrics is defined as a patient aged 65 years and older receiving five or more appropriate medications. It is the responsibility of health-care professionals to reduce the number of medications in elderly patients. Awareness of geriatric medications and diagnosed diseases will improve the management of adverse drug reactions and other risk factors, and such awareness should include how to avoid adverse drug reactions and other risk factors. It is the responsibility of physicians and pharmacists to reduce the number of medications in elderly patients. We also found that the number of medications does not necessarily increase with age. The main implication of this study is that following its recommendations can improve health-care management in geriatrics. abstract_id: PUBMED:31069001 Medication adherence among elderly patients with coronary artery disease: our experience in Morocco The treatment of coronary artery diseases has made significant progress. Medication adherence among patients with coronary artery disease, in particular among elderly patients, is a major challenge to disease control and prevention of its complications. Medication adherence may be influenced by the vulnerable demographic and socio-economic situation in African countries.
We conducted a cross-sectional study of elderly patients treated for stable coronary artery disease on an ambulatory basis from March to October 2016. Medication adherence was evaluated by a questionnaire, the Morisky Medication Adherence Scale. Information about predictive factors of medication adherence was obtained from a multidimensional adherence model. The study involved 115 elderly patients (age > 65 years). Medication adherence accounted for 72.2%, according to the Morisky Medication Adherence Scale. Physical inactivity was found in 59% of patients, and hypertension and diabetes in 42.6% and 41.7% of patients, respectively. Predictive factors of poor compliance were: the absence of mutual health insurance (p = 0.02), the severity of symptoms (p = 0.001), a history of acute coronary syndrome (p = 0.006), the level of social support (p = 0.011) and depression (p = 0.006). Medication adherence is a health problem in Morocco, particularly among elderly subjects. Health care providers should be aware of factors associated with a higher probability of stopping treatment, in particular of modifiable factors, in order to implement personalized strategies to improve adherence to treatment. abstract_id: PUBMED:22065902 Overlapping medication associated with healthcare switching among Korean elderly diabetic patients. This study was performed to describe the patterns of healthcare switching with overlapping use of anti-diabetic medication in the elderly using the Korea Health Insurance Review and Assessment Service's claims data. The study subjects were ambulatory elderly diabetic patients (ICD-10, E10-14) receiving at least one oral anti-diabetic drug or insulin, and visiting healthcare facilities more than two times between January and December 2005. A total of 457,975 elderly diabetic ambulatory care patients were identified. The mean visiting frequency was 9.0 (± 3.6) and the mean switching frequency was 1.5 (± 0.8) during 2005. The switching group comprised 33% of the total study subjects. Healthcare switching was more common among older female patients, those treated with polytherapy, and those in rural areas. Movement among primary care medical services was very common among the patients in the switching group (52.6%). A statistically significant correlation was observed between healthcare switching and concomitant drug use (rho = 0.96) and overlapping days (rho = 0.57). The use of overlapping anti-diabetic medication increased with the extent of healthcare switching. Further, frequent switching of healthcare between primary clinics was observed. Efforts should be made to establish continuity of care for elderly diabetic patients by identifying frequent switching with overlapping medication. abstract_id: PUBMED:33828372 The Prevalence of Prescribing Medications Associated with Geriatric Syndromes among Discharged Elderly Patients. Introduction: A geriatric syndrome is a group of signs and symptoms that occur in older people and do not fit into a discrete disease. Several medications were reported to be associated with the incidence of geriatric syndromes. Objective: The objective of this study was to investigate the prevalence and pattern of medications associated with geriatric syndromes (MAGSs) among discharged elderly patients (≥65 years old). Materials And Methods: This is a cross-sectional study that was conducted at a Malaysian teaching hospital from October to December 2018.
The discharge medications of geriatric patients were reviewed to identify MAGSs using the Beers criteria, the Lexicomp drug information handbook, and the United States Food and Drug Administration (USFDA) drug inserts. The Chi-square test was used to compare MAGSs prescribed between categories. Spearman's rank-order correlation was used to test the correlation between the presence of MAGSs and the number of discharge medications. A binomial logistic regression was applied to determine the predictors of prescribing MAGSs. Results: A total of 400 patients (mean ± standard deviation [SD] age, 72.0 ± 5.0 years) were included, and 45.3% of them were females. The most common diseases were hypertension followed by diabetes mellitus. The mean ± SD number of discharge medications per patient was 4.2 ± 2.5. MAGSs were prescribed in 51.7% of the patients, and 54 patients were discharged with more than one MAGS. The most commonly prescribed MAGSs were opioid analgesics, vasodilators, and β-blockers, which are associated with falls, depression, and delirium. Polypharmacy was found in 138 patients, and it was significantly associated with the presence of MAGSs (P < 0.001). No significant differences were found in prescribing MAGSs based on the patients' gender, race, and age. Conclusion: The prescribing of MAGSs occurred in half of the discharged elderly patients. Physicians should be aware of the medications that are associated with particular side effects in elderly patients, and should switch to safer alternatives when possible. abstract_id: PUBMED:37833671 Evaluation of potentially inappropriate medications use and medication complexity in elderly patients applying to community pharmacy in Turkey. Background: Older adults often use multiple medicines to manage comorbidities well or to prevent associated complications. This study aims to determine polypharmacy and the use of potentially inappropriate medications (PIMs) using the 2019 Beers Criteria, and to determine the Medication Regimen Complexity Index (MRCI) score. It also aims to identify factors associated with the presence of PIMs and the MRCI score. Methods: This cross-sectional study was carried out between 6 and 2023 and 5 May 2023 in a community pharmacy in Turkey. Elderly patients over 65 years of age, who used at least one drug, and who came to the pharmacy for any reason were included in the study. PIMs were determined according to the 2019 Beers Criteria. The validated Turkish version of the MRCI was used to determine the medication complexity score. Results: 200 patients were included in this study. 59.5% of the patients were female and the median age was 70 (IQR, 66-74.75). Polypharmacy was detected in 33% of patients. The use of PIMs was identified in 63.5% of the patients. The median MRCI score was 11 (IQR, 7-15). The number of chronic diseases and drugs, the presence of polypharmacy, the MRCI score and mental disorders were found to be significantly higher in those with PIMs than in those without (p < 0.05). Having less than eight years of education, the presence of polypharmacy, and the presence of comorbidity (diabetes mellitus, cardiovascular disease, thyroid disease, chronic obstructive pulmonary disease (COPD), asthma and mental disorders) were associated with significantly higher MRCI scores (p < 0.05). Conclusions: According to the results of our study, the elderly patients who came to the pharmacy had low MRCI scores but a high rate of PIM use.
Community pharmacists have an important role in identifying inappropriate drug use, so they should be trained to develop skills in identifying and reducing PIMs in older patients. abstract_id: PUBMED:30045684 Evaluation of potentially inappropriate medication utilization in elderly patients with cancer at outpatient oncology unit. Background: The aim of the study was to evaluate potentially inappropriate medication use in elderly patients with cancer. Method: This study was conducted at an outpatient oncology clinic from December 2014 to March 2015 among elderly cancer patients. The Screening Tool of Older Person's Prescriptions/Screening Tool to Alert doctors to Right Treatment (STOPP/START) criteria were used to identify potentially inappropriate medication in elderly patients. Results: Among 114 cancer patients, 55.26% were male and the mean age was 71.78 ± 5.50 years. The most common concurrent diseases were hypertension in 45 (39.47%) and diabetes in 26 (22.81%) patients. Polypharmacy (≥5 medications) was seen in 94.73% of them. Eighteen patients (15.79%) used medications inappropriately according to the Screening Tool of Older Person's Prescriptions criteria. Medication omissions were identified in 112 patients (98.25%) with the Screening Tool to Alert doctors to Right Treatment criteria. Conclusions: Clinical pharmacists could improve current prescribing practices in elderly patients with cancer by assessing potentially inappropriate medications. Answer: The abstracts provided do not directly address the question of whether diabetes medication can be reduced in elderly patients. However, they do discuss various aspects of medication management, adherence, and the prevalence of polypharmacy among elderly patients with chronic conditions, including diabetes.
Instruction: Does regular garlic intake affect the prevalence of Helicobacter pylori in asymptomatic subjects? Abstracts: abstract_id: PUBMED:12939668 Does regular garlic intake affect the prevalence of Helicobacter pylori in asymptomatic subjects? Objective: The in vitro antibacterial activity of garlic against Helicobacter pylori (H.pylori) is well documented and the potential for its use in vivo was suggested. Garlic intake, a traditional habit by the Taskopru population in Turkey for decades, was examined for its effect on the prevalence of H. pylori and compared with the non garlic consuming group. Methods: Eighty-one garlic consuming asymptomatic subjects in Kastamonu province in Turkey (68 males, 13 females) of 23-82 years of age (average 46) were selected on a very restricted bases in regards to the garlic intake (raw or cooked, or both), amount, duration and other criteria. Control group (non-garlic consuming) of 81 asymptomatic subjects (66 males, 15 females) of 23-90 years of age (average 43) were enrolled for comparison with the garlic consuming group. Serum samples were collected from both groups during the period from September 2001 through to April 2002 and examined by the enzyme linked immunoassay test for anti H.pylori antibodies. Results: An overall H.pylori prevalence of 79% and 81% was detected in the garlic and non garlic consuming groups. A significantly lower average antibody titer was detected in the garlic consuming group than that of the control group and similarly in those who consumed mixture of raw plus cooked garlic as compared to those who consumed raw or cooked garlic alone. Conclusion: Garlic intake for long durations (years) did not appear to have an effect on the prevalence of H.pylori infection. Garlic consuming subjects had a significantly lower average antibody titer than non garlic consuming groups, which might suggest an indirect inhibitory effect on the reproduction of H.pylori and possibly progression to more serious peptic ulcer diseases. abstract_id: PUBMED:12793072 Prevalence of Helicobacter pylori infection in normal or asymptomatic patients The propose of this prospective study was to assess the prevalence of Helicobacter pylori infection in asymptomatic subjects in year 2000. Ninety eight patients not suffering from upper gastrointestinal symptoms were enrolled in this study. The mean age was 39 years. Helicobacter pylori was present in 81 patients (82.7%). Helicobacter pylori infection increase with age. It was maximal between 30 and 50 years. Helicobacter pylori prevalence is still high in Tunisia as well as in most developing countries. Improvement of hygiene conditions will decrease this prevalence. abstract_id: PUBMED:12151185 Prevalence of Helicobacter pylori infection in asymptomatic subjects in Libya. Helicobacter pylori infection is very common infection worldwide particularly in the developing countries. No detailed study on such infection is being recorded so far in Libya. The aims of the study were to detect H. pylori prevalence in asymptomatic Libyan subjects, to determine the rate of infection among different age groups and to correlate the prevalence of H. pylori with age, sex, smoking, non-steroidal anti-inflammatory drugs intake, marital status, education, and socioeconomic status. Three hundred and sixty asymptomatic subjects 1-&gt;70 years of age (average 36) with no previous history of epigastric pain were selected randomly from the Libyan population. 
Serum samples were obtained and a questionnaire was completed for each subject. The ELISA test was used to detect IgG anti-H. pylori antibodies. An overall prevalence of 76% was detected in these subjects. A 50% infection rate was found in subjects 1-9 years of age; this increased to 84% in subjects 10-19 years of age, continued to rise with increasing age, and reached 94% in those over 70 years of age. Subjects > 45 years of age had a significantly higher antibody response than those < 45 years. H. pylori prevalence was higher in married subjects (84%) as compared to single subjects (68%), and similarly in illiterate (89%) and low-socioeconomic-status (91%) subjects. No difference by sex, smoking or NSAID intake was recorded. Infection with H. pylori is highly prevalent in asymptomatic Libyan subjects. Infection is acquired early in childhood and reaches up to 94% in older age. The prevalence of H. pylori was significantly increased in association with marital status, education and low socioeconomic status. abstract_id: PUBMED:25827337 The Prevalence of Mixed Helicobacter pylori Infections in Symptomatic and Asymptomatic Subjects in Dhaka, Bangladesh. Background: Helicobacter pylori is a highly genetically diverse bacterial species, which can persist in the gastric environment for decades. Recent studies have shown that single infections predominate in developed countries, whereas mixed infections are more prevalent in developing countries. Mixed infections of this bacterium may be important for adaptation to the hostile gastric environment and may facilitate dyspeptic symptoms. Materials And Methods: To calculate the prevalence of mixed infections in symptomatic and asymptomatic subjects, 2,010 H. pylori isolates collected from 83 symptomatic and 91 asymptomatic subjects from Dhaka, Bangladesh, were analyzed by (i) random amplified polymorphic DNA fingerprinting (RAPD) and (ii) multiplex PCR amplification for cagA and vacA virulence gene alleles. Results: The overall prevalence of mixed H. pylori infection was 60.15% (77/128), indicating substantial co-colonization in this population. We additionally found that symptomatic subjects (53%) had a significantly higher rate of mixed infection than asymptomatic individuals (36.3%) (p = .016) and that the prevalence of cagA and vacA and of the vacA m1/s1 and vacA m2/s1 alleles was higher in subjects with mixed infection. Conclusion: Our findings suggest that an increased diversity of H. pylori strains in the gastric environment may contribute to the development of disease symptoms. abstract_id: PUBMED:30302246 Prevalence of Helicobacter pylori in asymptomatic patients at surgical outpatient department: Harare hospitals. Background: Helicobacter pylori infection is present in more than 50% of the world's population. The estimated lifetime risk of peptic ulcer disease is 20 percent and of gastric cancer is 1-2 percent. Materials And Methods: A cross-sectional study was done at two Central hospitals in Harare, Zimbabwe, with the objective being to estimate the prevalence of Helicobacter pylori infection in asymptomatic individuals. Other objectives were to determine the association of Helicobacter pylori infection with potential risk factors. Four hundred and fifty patients visiting the outpatient surgical clinics for complaints other than upper gastrointestinal symptoms were recruited into the study. Drops of whole blood were obtained by fingertip puncture from each patient. The Onsite H.
pylori Combo Rapid Test was used to confirm the presence or absence of antibodies against Helicobacter pylori. A questionnaire was used to record the sociodemographics of the participants. Results: Three hundred patients, 186 males (62%) and 114 females (38%), participated. The prevalence of Helicobacter pylori infection was 67.7 percent (203/300). The prevalence of H pylori infection was significantly correlated with increasing age (p = 0.012), sharing of a bed with siblings during childhood (p = 0.013) and the mode of sanitation used (p = 0.023). No association was found between H pylori infection and other risk factors such as gender, level of education, employment status or number of rooms in a house. Conclusion: H. pylori infection prevalence was significantly associated with increasing age, sharing of a bed with siblings during childhood and the mode of sanitation used. Clinicians and the public should be aware of the important role of H pylori in upper gastrointestinal disease. Use of better sanitation methods, appropriate hygiene, and avoidance of over-crowding, among other measures, should be encouraged as a means to reduce the acquisition and transmission of H pylori. abstract_id: PUBMED:30155945 The association of garlic with Helicobacter pylori infection and gastric cancer risk: A systematic review and meta-analysis. Background: Garlic may be protective against Helicobacter pylori infection and gastric cancer development. We conducted this study to quantitatively update the evidence on garlic intake and gastric cancer with the inclusion of the most recent cohort studies, and to qualitatively summarize epidemiological studies of garlic consumption and Helicobacter pylori infection. Materials And Methods: PubMed, Embase, MEDLINE, and Cochrane Library were searched in April 2018. We conducted a meta-analysis to determine whether garlic intake reduced gastric cancer risk using random-effects models, and a systematic review to summarize evidence on the association between garlic consumption and Helicobacter pylori infection. Risk of bias was assessed using the Cochrane risk-of-bias tool and ROBINS-I for randomized and nonrandomized studies, respectively. Results: Meta-analysis of 18 studies (142 921 subjects) demonstrated that high garlic consumption (comparing the highest category to the lowest) was associated with a reduced gastric cancer risk (OR = 0.51, 95% CI = 0.44-0.57). This association became nonsignificant when derived only from the prospective studies (OR = 0.95, 95% CI = 0.66-1.24). Thirteen studies (4889 participants) were included in the systematic review of garlic consumption and Helicobacter pylori infection, ten of which found no significant results. The majority of these studies were poor in quality given their small sample sizes and high risk of bias. Conclusions: Pooled evidence, mainly from case-control studies, suggested a significant inverse association of garlic intake with gastric cancer risk. Given the limitations of the included studies, current epidemiological evidence is not sufficient to reach any definite conclusion regarding the association of garlic with Helicobacter pylori infection. abstract_id: PUBMED:33639679 Prevalence and Potential Risk Factors of Helicobacter pylori Infection among Asymptomatic Individuals in Kazakhstan. Background: Helicobacter pylori (H. pylori) infection is associated with several risk factors, such as demographic factors, socioeconomic status and personal habits, which vary in different populations. This is the most up-to-date data on H.
pylori prevalence and potential risk factors for H. pylori infection among asymptomatic middle-aged individuals in Kazakhstan. Methods: Apparently healthy individuals aged 40 to 64 who took part in a health check at the outpatient clinic were invited to participate in the study; they answered a questionnaire and donated a blood sample. Antibodies to H. pylori were analysed by the latex agglutination method. The baseline characteristics of study subjects with or without H. pylori infection were compared using the Chi-square test. Odds ratios (OR) and 95% confidence intervals (CI) for the association between H. pylori infection and potential risk factors were estimated using multivariable logistic regression models. Results: Altogether 166 subjects (59% male; median age 51 years) were included; 104 (62.7%) were H. pylori positive. There were no statistically significant differences between the H. pylori positive and H. pylori negative groups with respect to gender, anthropometric measurements, socioeconomic factors and personal habits. The multivariable analysis showed that age (OR, 1.99; 95% CI, 1.03-3.86; P = 0.04) and increased salt intake (OR, 2.21; 95% CI, 1.12-4.35; P = 0.02) were associated with H. pylori infection. Conclusions: More than half of the study subjects were infected with H. pylori in Kazakhstan. The prevalence of H. pylori infection was independently associated with older age and regular high salt consumption. abstract_id: PUBMED:9820089 Prevalence of Helicobacter pylori infection in the Czech Republic--the South Moravia Region The prevalence of Helicobacter pylori is influenced in a significant way by geographical conditions and depends to a certain extent also on the economic standard of different countries. Some published work provides evidence that Helicobacter pylori infections in eastern European countries are in general more frequent than in western European countries. It cannot be ruled out, however, that within individual countries there are regional differences in Helicobacter pylori prevalence. Serological examination for H. pylori antibodies, using kits from TEST-LINE Brno, was performed in a group of 309 subjects, none of whom had previously been treated to eradicate H. pylori and none of whom had received H2 blockers, proton pump blockers or bismuth-containing preparations for at least three weeks before blood sampling. The examined subjects were divided into six age groups by decades, starting at the age of 20 years. The overall prevalence of H. pylori in the examined group was 58.8%, and in the youngest (20-29-year-old) group it was less than 46%. The highest prevalence was recorded in subjects aged 50-59 years and amounted to 67.3%. The authors compare their own results with findings assembled in the Czech Republic and abroad and draw attention to the necessity of an extensive epidemiological survey of H. pylori prevalence in the Czech Republic. The survey should be done by regions, using the same diagnostic method. abstract_id: PUBMED:10355018 Prevalence of Helicobacter pylori infection in asymptomatic 5-7-year-old children of St. Gallen canton The prevalence of Helicobacter pylori infection in asymptomatic Swiss children is not known. The aim of this study was to determine the rate of Helicobacter pylori infection in preschool children of the Canton of St. Gallen, eastern Switzerland. A 13C-urea breath test was performed in 432 randomly selected 5- to 7-year-old children from February to September 1998.
A total prevalence of 6.5% was found, with 3.9% for Swiss children and 19.2% for children of parents who immigrated from countries or regions with a high prevalence of Helicobacter pylori infection. Of the children born in these countries who immigrated later, 70% were Helicobacter pylori positive. In 20/21 brother and sister pairs, the status of Helicobacter pylori infection was concordant. The amount of living space per person and the educational level of parents were comparable for all Helicobacter pylori positive children (Swiss and immigrants). In conclusion, the total prevalence of Helicobacter pylori infection in preschool children aged 5 to 7 is low in the eastern part of Switzerland, with a significantly higher rate in children of immigrant parents. abstract_id: PUBMED:12829332 Helicobacter pylori infection: prospective study for asymptomatic Tunisian children Unlabelled: The aim of this study was to specify the epidemiologic particularities of Helicobacter pylori infection among asymptomatic Tunisian children. Population And Methods: A sample of 191 healthy Tunisian children was studied over a two-month period in a mother-and-child protection center in the Tunis area. Each child underwent a directed questionnaire and serological testing for Helicobacter pylori. Results: The prevalence of Helicobacter pylori infection was 30.4% (58 of 191). This prevalence increased with age (21% < 5 years vs 69% > 6 years; p < 0.04). Low socio-economic level and a family history of peptic disease were the main risk factors for Helicobacter pylori infection (p < 0.05). Both oro-oral and fecal-oral routes of transmission seem to coexist among children. Answer: Regular garlic intake does not appear to have an effect on the prevalence of Helicobacter pylori (H. pylori) in asymptomatic subjects. A study conducted on the Taskopru population in Turkey, which traditionally consumes garlic, found that despite long durations of garlic intake, there was no significant difference in the prevalence of H. pylori infection when compared to a non-garlic consuming group. The prevalence was 79% in the garlic-consuming group and 81% in the non-garlic consuming group. However, the garlic-consuming subjects had a significantly lower average antibody titer than the non-garlic consuming group, which might suggest an indirect inhibitory effect on the reproduction of H. pylori, but not on the actual prevalence of the infection (PUBMED:12939668). Additionally, a systematic review and meta-analysis that included 13 studies with 4889 participants found that the majority of these studies showed no significant results regarding the association between garlic consumption and H. pylori infection. The quality of these studies was generally poor due to small sample sizes and high risk of bias, leading to the conclusion that current epidemiological evidence is not sufficient to definitively determine the association of garlic with H. pylori infection (PUBMED:30155945).
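The pooled odds ratio reported by the garlic meta-analysis (PUBMED:30155945) comes from random-effects pooling of study-level estimates. The sketch below shows a standard DerSimonian-Laird calculation on invented placeholder odds ratios; it is illustrative only and does not reproduce the 18 real studies.

    # Hedged sketch: DerSimonian-Laird random-effects pooling of odds ratios
    # (the ORs and confidence intervals below are placeholders, not study data).
    import numpy as np

    or_ci = [(0.45, 0.30, 0.68), (0.62, 0.41, 0.94), (0.55, 0.35, 0.86),
             (0.80, 0.52, 1.23), (0.40, 0.22, 0.73)]        # (OR, lower, upper)

    y = np.array([np.log(o) for o, lo, hi in or_ci])          # log odds ratios
    se = np.array([(np.log(hi) - np.log(lo)) / (2 * 1.96) for o, lo, hi in or_ci])
    w = 1 / se**2                                             # fixed-effect weights

    # Between-study variance (DerSimonian-Laird estimator).
    y_fe = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - y_fe)**2)
    tau2 = max(0.0, (Q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

    w_re = 1 / (se**2 + tau2)                                 # random-effects weights
    pooled = np.sum(w_re * y) / np.sum(w_re)
    se_pooled = np.sqrt(1 / np.sum(w_re))
    ci = np.exp([pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled])
    print(f"pooled OR = {np.exp(pooled):.2f} "
          f"(95% CI {ci[0]:.2f}-{ci[1]:.2f}), tau^2 = {tau2:.3f}")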
Instruction: Does anybody really know what time it is? Abstracts: abstract_id: PUBMED:8185095 Does anybody really know what time it is? Does anybody really care? Study Objective: To evaluate the difference among time sources in an emergency medical system. Design: Prospective; comparison to a criterion standard. Setting: Five emergency departments and three emergency medical services systems in Indianapolis, Indiana. Interventions: Coordinated Universal Time (UTC), generated by the atomic clock in Boulder, Colorado, and broadcast by the US Commerce Department's National Institute of Standards and Technology, was used as the time standard. The investigators, on a single day, made unannounced visits to the five EDs and the ambulances and fire stations in the three emergency medical services systems. The times displayed on all time sources at each location were recorded. The accuracy to the second of each time source compared to UTC was calculated. Results: Three time sources were excluded (two defibrillator clocks and one ED wall clock that varied more than three hours from UTC). Of the 152 time sources, 72 had analog displays, 74 digital, three both, and three other. The average absolute difference from UTC was 1 minute 45 seconds (SEM, 9 seconds) with a range of 12 minutes 34 seconds slow to 7 minutes 7 seconds fast. Thus, two timepieces could have varied by as much as 19 minutes 41 seconds. Compared to UTC, 47 timepieces (31%) were slow, 100 (66%) were fast, and five (3%) were accurate to the second. Fifty-five percent of the time sources varied one minute or more from UTC. Conclusion: Time sources in this health care system varied considerably. Time recording in medicine could be made more precise by synchronizing medical clocks to UTC, using computers to automatically "time stamp" data entries and using only digital time sources with second displays. abstract_id: PUBMED:33300810 A new musculoskeletal AnyBody™ detailed hand model. Musculoskeletal research questions regarding the prevention or rehabilitation of the hand can be addressed using inverse dynamics simulations when experiments are not possible. To date, no complete human hand model implemented in a holistic human body model has been fully developed. The aim of this work was to develop, implement, and validate a fully detailed hand model using the AnyBody Modelling System (AMS) (AnyBody, Aalborg, Denmark). To achieve this, a consistent multiple-cadaver dataset, including all extrinsic and intrinsic muscles, served as a basis. Various obstacle methods were implemented to obtain the correct alignment of the muscle paths together with the full range of motion of the fingers. These included tori, cylinders, and spherical ellipsoids. The origin points of the lumbrical muscles within the tendon of the flexor digitorum profundus added a unique feature to the model. Furthermore, the possibility of entirely patient-specific scaling based on hand length and width was implemented in the model. For model validation, experimental datasets from the literature were used, which included the comparison of numerically calculated moment arms of the wrist, thumb, and index finger muscles. In general, the results displayed good comparability between the model and the experimental data. However, the extrinsic muscles showed better agreement than the intrinsic ones.
Nevertheless, the results showed that the proposed inverse dynamics hand model offers opportunities in a broad field of applications where the muscle and joint forces of the forearm play a crucial role. abstract_id: PUBMED:28521951 Validation of the AnyBody full body musculoskeletal model in computing lumbar spine loads at L4L5 level. In the panorama of available musculoskeletal modeling software, AnyBody is a commercial tool that provides a full-body musculoskeletal model which is increasingly exploited by numerous researchers worldwide. In this regard, model validation becomes essential to guarantee the suitability of the model in representing the simulated system. Focusing on the lumbar spine, the previous works aimed at validating the AnyBody model in computing intervertebral loads had several limitations, and a comprehensive validation was lacking. The present study was aimed at extensively validating the suitability of the AnyBody model in computing lumbar spine loads at the L4L5 level. The intersegmental loads were calculated during twelve specific exercise tasks designed to accurately replicate the conditions during which Wilke et al. (2001) measured the L4L5 intradiscal pressure in vivo. Motion capture data of one volunteer subject were acquired during the execution of the tasks and then imported into AnyBody to set the model kinematics. Two different approaches to computing intradiscal pressure from the intersegmental load were evaluated. Lumbopelvic rhythm was compared with reference in vivo measurements to assess the accuracy of the lumbopelvic kinematics. Positive agreement was confirmed between the calculated pressures and the in vivo measurements, thus demonstrating the suitability of the AnyBody model. Specific caution needs to be taken only when considering postures characterized by large lateral displacements. A minor discrepancy was found when assessing lumbopelvic rhythm. The present findings promote the AnyBody model as an appropriate tool to non-invasively evaluate lumbar loads at L4L5 in physiological activities. abstract_id: PUBMED:34390524 Dynamics analysis for flexion and extension of elbow joint motion based on musculoskeletal model of Anybody. Purpose: Little is known about how biomechanics governs the biological nature of humeral motion dynamically. Elbow motion therefore ought to be investigated on the basis of a musculoskeletal model, to evidence the physiologic principles of the upper limb. Method: A humeral model was reconstructed in MIMICS from CT images imported in *.dicom format; it was processed in Geomagic Studio for surfaces, then meshed and assigned material properties in Hypermesh. In parallel, a musculoskeletal model was built in AnyBody, and physical motions were simulated to export the boundary conditions and myodynamia (muscle forces) during flexion and extension. Finally, the humeral model and boundary conditions were imported into Abaqus for finite element analysis. Result: During the simulated motion of flexion, the primary muscles were the biceps brachii, brachialis anticus and pronator teres; their myodynamia increased and then decreased gradually, reaching its peak value at 30°. During extension, the main muscles were the triceps brachii and brachialis anticus; their myodynamia likewise increased and then decreased gradually, reaching a peak at 50°. In both cases, strain and displacement were concentrated at the middle of the humerus. Conclusion: AnyBody is a novel modelling system for simulating physical motion, for example flexion and extension.
The biceps brachii and brachialis anticus are the functional muscles for flexion, and the triceps brachii plays a key role in extension. This simulation supports physiologic principles relevant to sporting activity, humeral fixation and postoperative healing, with the clinical implication that minimizing joint forces from the onset of injury may promote pain-free recovery. abstract_id: PUBMED:33619605 Does anybody really know what time it is? : From biological age to biological time. During his celebrated 1922 debate with Bergson, Einstein famously proclaimed: "the time of the philosopher does not exist, there remains only a psychological time that differs from the physicist's." Einstein's dictum, I maintain, has been metabolized by the natural sciences, which typically presuppose, more or less explicitly, the existence of a single, univocal, temporal substratum, ultimately determined by physics. This reductionistic assumption pervades much biological and biomedical practice. The chronological age allotted to individuals is conceived as an objective quantity, allowing one to straightforwardly assign and compare the biological age of organisms. This essay argues that the standard practice of assessing the age and aging of organisms against the backdrop of a physical conception of time is problematic. This becomes especially evident in light of recent discoveries of various levels of senescence underlying the development of individual organisms, a phenomenon known as 'age mosaicism.' The bottom line is that the study of age and aging requires a biological conception of time, as opposed to a physical one. Einstein clearly wasn't wrong about his operationalization of time in relativity theory. Still, time may be less monolithic than he surmised. abstract_id: PUBMED:38331648 Evaluation of ground reaction forces and centers of pressure predicted by AnyBody Modeling System during load reaching/handling activities and effects of the prediction errors on model-estimated spinal loads. Full-body and lower-extremity human musculoskeletal models require feet ground reaction forces (GRFs) and centers of pressure (CoPs) as inputs to predict muscle forces and joint loads. GRFs/CoPs are traditionally measured via floor-mounted forceplates that are usually restricted to research laboratories, thus limiting their applicability in real occupational and clinical setups. Alternatively, GRFs/CoPs can be estimated via inverse dynamics approaches, as also implemented in the AnyBody Modeling System (AnyBody Technology, Aalborg, Denmark). The accuracy of AnyBody in estimating GRFs/CoPs during load-handling/reaching activities and the effect of its prediction errors on model-estimated spinal loads remain to be investigated. Twelve normal- and over-weight individuals performed a total of 480 static load-handling/reaching activities while their GRFs/CoPs were measured (by forceplates) and predicted (by AnyBody). Moreover, the effects of GRF/CoP prediction errors on the estimated spinal loads were evaluated by inputting measured or predicted GRFs/CoPs into subject-specific musculoskeletal models. Regardless of the subject groups (normal-weight or overweight) and tasks (load-reaching or load-handling), results indicated close agreement between the measured and predicted GRFs (normalized root-mean-squared error, nRMSE < 14% and R2 > 0.90) and between their model-estimated spinal loads (nRMSE < 14% and R2 > 0.83). These agreements were good but relatively less satisfactory for CoPs (nRMSE < 17% and 0.57 < R2 < 0.68).
The only exception, requiring a more thorough investigation, was the situation when the ground-foot contact was significantly reduced during the activity. It appears that occupational/clinical investigations performed in real workstation/clinical setups with no access to forceplates may benefit from the AnyBody GRF/CoP prediction tools for a wide range of load-reaching/handling activities. abstract_id: PUBMED:38486635 Assessing the efficiency of exoskeletons in physical strain reduction by biomechanical simulation with AnyBody Modeling System. Introduction: Recently, many industrial exoskeletons for supporting workers in heavy physical tasks have been developed. However, the efficiency of exoskeletons with regard to physical strain reduction has not yet been fully proven. Several laboratory and field studies have been conducted, but more data, which cannot be obtained solely from behavioral experiments, are still needed to investigate effects on the human body. Methods: This paper presents an approach to extend laboratory and field research with biomechanical simulations using the AnyBody Modeling System. Based on a dataset recorded in a laboratory experiment with 12 participants using the exoskeleton Paexo Shoulder in an overhead task, the same situation was reproduced in a virtual environment and analyzed with biomechanical simulation. Results: Simulation results indicate that the exoskeleton substantially reduces muscle activity and joint reaction forces in relevant body areas. Deltoid muscle activity and glenohumeral joint forces in the shoulder were decreased between 54 and 87%. Simultaneously, no increases in muscle activity or forces in other body areas were observed. Discussion: This study demonstrates how a simulation framework could be used to evaluate changes in internal body loads as a result of wearing exoskeletons. Biomechanical simulation results widely agree with experimental measurements in the previous laboratory experiment and supplement them by providing insight into effects on the human musculoskeletal system. They confirm that Paexo Shoulder is an effective device to reduce physical strain in overhead tasks. The framework can be extended with further parameters, allowing investigations for product design and evaluation. abstract_id: PUBMED:8643789 Does smoking really kill anybody? Statements that so many people are killed by smoking use the term "kill" in a very unusual manner which is easily misunderstood by people not expert in epidemiology. In addition, the usual calculations leave out of account the fact that smoking interacts synergistically with other risk factors, so that it is a combination of risk factors rather than any specific one that is likely to have a causal influence on mortality. Strictly speaking it is quite inappropriate to state that smoking kills anybody, if we use the term "kill" in a meaningful fashion. abstract_id: PUBMED:28943969 The Right to Know and the Right Not to Know Revisited: Part One. Prompted by developments in human genetics, a recurrent bioethical question concerns a person's 'right to know' and 'right not to know' about genetic information held that is intrinsically related to or linked to them. In this paper, we will revisit the claimed rights in relation to two particular test cases.
One concerns the rights of the 500,000 participants in UK Biobank (UKB) whose biosamples, already having been genotyped, will now be exome sequenced, and the other concerns the rights of pregnant women (and their children) who undergo non-invasive prenatal testing (NIPT)-a simple blood test that can reveal genetic information about both a foetus and its mother. This two-part paper is in four principal sections. First, we sketch the relevant features of our two test cases. Secondly, we consider the significance of recent legal jurisprudence in the UK and Singapore. Thirdly, we consider how, the jurisprudence apart, the claimed rights might be grounded. Fourthly, we consider the limits on the rights. We conclude with some short remarks about the kind of genetically aware society that we might want to be and how far there is still an opportunity meaningfully to debate the claimed rights. abstract_id: PUBMED:28160933 Editorial Commentary: Does Anybody Really Know What Time It Is? Does Anybody Really Care? The use of a "clock-face" to describe the anterior cruciate ligament femoral tunnel position is fraught with difficulties. Much work has been done to promote "anatomic" anterior cruciate ligament femoral tunnel positioning, and more research and clinical follow-up studies are necessary to further clarify this concept. Answer: The question "Does anybody really know what time it is?" is used in various contexts in the provided abstracts. In the context of emergency medical systems, the question highlights the issue of time discrepancies among different time sources within healthcare settings. A study found that time sources in an emergency medical system varied considerably, with only 3% of timepieces being accurate to the second when compared to Coordinated Universal Time (UTC). The study suggests that synchronizing medical clocks to UTC and using digital time sources with second displays could improve time recording precision in medicine (PUBMED:8185095). In a different context, the question is used in an editorial commentary to metaphorically address the difficulties in describing the anterior cruciate ligament femoral tunnel position using a "clock-face" reference system. The commentary suggests that more research and clinical follow-up studies are necessary to clarify the concept of "anatomic" anterior cruciate ligament femoral tunnel positioning (PUBMED:28160933). Lastly, the question is used in the title of an essay discussing the concept of biological time versus physical time. The essay argues that the standard practice of assessing age and aging against a physical conception of time is problematic, especially in light of discoveries such as 'age mosaicism.' It suggests that the study of age and aging requires a biological conception of time (PUBMED:33619605). In summary, the question serves as a rhetorical device to address issues of time measurement and perception in various scientific and philosophical discussions.
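The AnyBody validation abstracts above (PUBMED:28521951, PUBMED:38331648) summarize model-measurement agreement with normalized root-mean-squared error (nRMSE) and R2. As a minimal illustrative sketch only, not the authors' code, the following Python computes both metrics for a synthetic ground reaction force trace; normalizing the RMSE by the range of the measured signal is an assumption, since the abstracts do not state which convention was used.

import numpy as np

def agreement_metrics(measured, predicted):
    # Returns (nRMSE in percent, R^2) between a measured and a predicted signal.
    measured = np.asarray(measured, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    rmse = np.sqrt(np.mean((predicted - measured) ** 2))
    nrmse = 100.0 * rmse / (measured.max() - measured.min())  # assumed range normalization
    ss_res = np.sum((measured - predicted) ** 2)
    ss_tot = np.sum((measured - measured.mean()) ** 2)
    return nrmse, 1.0 - ss_res / ss_tot

# Hypothetical example: vertical GRF from a forceplate vs. a model prediction (both synthetic).
rng = np.random.default_rng(0)
measured_grf = 600 + 150 * np.sin(np.linspace(0, np.pi, 100))
predicted_grf = measured_grf + rng.normal(0, 20, size=100)
print(agreement_metrics(measured_grf, predicted_grf))

In this fabricated example the printed values play the same role as the nRMSE &lt; 14% and R2 &gt; 0.90 thresholds quoted above, but the numbers themselves are meaningless beyond illustration.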
Instruction: Is there a relation between neuropsychologic variables and quality of life after stroke? Abstracts: abstract_id: PUBMED:11588738 Is there a relation between neuropsychologic variables and quality of life after stroke? Objectives: To describe the quality of life (QOL) of stroke patients and to distill neuropsychologic predictors for poor QOL. Design: A cohort study in which patients were neuropsychologically assessed at a mean of 72.2 days after stroke, with follow-up at a mean of 9.8 months after stroke. Setting: Research department of a rehabilitation center. Patients: A consecutive sample of 164 stroke patients (mean age, 55.2yr) recruited from a university hospital, a regional hospital, and a rehabilitation center. Interventions: Not applicable. Main Outcome Measures: Orientation, memory, attention and concentration, visuospatial and visuoconstructive functions, language, and arithmetic skills were assessed with neuropsychologic tests. QOL was assessed with the Sickness Impact Profile (SIP). Results: An overall mean SIP score +/- standard deviation of 20 +/- 11 showed that stroke has a high impact on everyday functioning. Further analyses indicated that QOL is related in particular to tests measuring spatiotemporal and/or sequential aspects of behavior. Forward/backward stepwise regression analysis (n = 106) showed that poor QOL was more likely if patients had a poor result on the Trailmaking Test (TMT) B and/or were women. Conclusion: The predictive value of the TMT is most effective and very useful because the TMT is a short and economical procedure. However, the gender-related aspects of recovery deserve more attention, as does the possible bias that can be caused by the composition of a measurement. Further research is needed to refine predictive models that are needed to facilitate the development of more adequate, individual rehabilitation programs. abstract_id: PUBMED:19237031 Estimation of the influence of some quality-of-life variables on the results of a battery of neuropsychological tests Introduction: Given current interest in the field of ageing, the objective of this study was to determine the factors related to quality of life that underlie healthy ageing. Two areas were analyzed: firstly, the influence of specific quality-of-life variables on the scores of the Mini Examen Cognoscitivo (MEC), on the classification of subjects as healthy or with mild cognitive impairment (MCI), and on the results of specific memory tests and, secondly, the predictive character of these variables in these tests. Materials And Methods: We performed an inferential study in an incidental sample composed of 140 elderly persons whose memory, praxis, language, and executive function were evaluated using a battery of neuropsychological tests. Data related to lifestyle and clinical history were collected at an interview. Results: Using a series of univariate analyses, we observed that alcohol consumption and a history of cerebrovascular accident affected the results of the MEC and of some of the memory tests, as well as the classification of a subject as healthy versus impaired. A backward regression analysis confirmed the predictive value of these variables in the MEC scores and in some of the memory tests. Conclusions: Some variables related to quality of life and lifestyle affect the results of certain cognitive tests. 
Given the current interest this area arouses, future studies that operationalize quality-of-life variables more fully and include factors such as physical exercise are required. abstract_id: PUBMED:27237469 The impact of lower extremity pain conditions on clinical variables and health-related quality of life in patients with stroke. Background: The presence of lower extremity pain may be associated with poorer motor recovery, impaired activities of daily living (ADL), reduced quality of life, and disability in patients with stroke. Objective: The aims of the study were to describe the characteristics of lower extremity pain conditions and to evaluate the impact of lower extremity pain on clinical variables and health-related quality of life in patients with stroke. Methods: One hundred and eighty-five patients with stroke who had self-reported pain in the lower extremity were included in the study. Lower extremity pain characteristics of stroke survivors including etiology, intensity, onset time, frequency, aggravating factors, relieving factors, site, and location of pain were evaluated and recorded. The Brunnstrom motor recovery scale, the functional ambulation category, the Mini-Mental State Examination, the star cancellation test, the Beck Depression Inventory, the Barthel Index, and the Short Form-36 Questionnaire (SF-36) were used. Results: The causes of lower extremity pain were diagnosed as osteoarthritis (51.1%), central neuropathic pain (28.7%), mixed pain (10.3%), low back pain associated with leg pain (8.6%), greater trochanteric pain syndrome (5.2%), prior hip fracture (2.4%), heterotopic ossification (2%), developmental hip luxation (1%), hallux valgus (0.5%), and malignancy (0.5%). The intensity of lower extremity pain was a significant predictor of cognitive functions, ADL, depression, and all SF-36 subscales except emotional role and vitality. The duration of lower extremity pain was a significant predictor of depression and lower extremity spasticity. Conclusions: Since lower extremity pain conditions have a significant influence on clinical variables and health-related quality of life in stroke survivors, early identification and appropriate treatment of lower extremity pain conditions should be carried out in order to reduce the level of disability in stroke patients. abstract_id: PUBMED:10893732 Life events, quality of life and social support prior to acute cerebral vascular disease, as predictive factors of the quality of life and functional state following stroke Introduction: The repercussion of stroke on quality of life has been evaluated but not the possible relation between the quality of life before and months after an acute stroke. Objective: To study the possible relation between quality of life, social support, stressful life events prior to the stroke and quality of life, social support and functional state months after. Patients And Methods: A prospective study was made of 34 patients (71.7 +/- 8 years; 19 (56%) men; 15 (44%) women) with stroke, by means of two evaluations: personal interview within the first 36 hours (quality of life--Nottingham Health Profile (NHP)-, perception of social support and stressful life events--Holmes and Rahe inventory-) and an interview over the phone 16.5 +/- 5.3 months after the stroke (NHP, perception of social support and functional state--Rankin scale-).
Results: Following the stroke there was deterioration in perception of social support (19.8 +/- 3 vs 12.5 +/- 8; p = 0.000) and in the degree of social isolation of the NHP (9.4 +/- 20 vs 21.1 +/- 30; p = 0.03). The only relation found was between the following variables: pain at the first evaluation and pain (r = 0.45; p = 0.007) at the second evaluation; mobility at the first evaluation and emotional state (r = 0.39; p = 0.029) and social support (r = 0.37; p = 0.027) at the second evaluation; sleepiness at the first evaluation and energy (r = 0.55; p = 0.0006), pain (r = 0.39; p = 0.022), emotional state (r = 0.35; p = 0.038), mobility (r = 0.34, p = 0.048) and sleepiness (r = 0.51; p = 0.001) at the second evaluation. Conclusion: Our results indicate that there is little relationship between the previous state and that following stroke, and that the deterioration in perception of support and social isolation is due to the stroke itself. abstract_id: PUBMED:32006689 Long-term neurologic and cognitive outcome and quality of life in adults after pneumococcal meningitis. Objectives: To perform a cross-sectional cohort study on long-term neurologic, cognitive and quality-of-life outcome in adults surviving pneumococcal meningitis. Methods: Adult survivors of community-acquired pneumococcal meningitis from a Dutch nationwide prospective cohort study were evaluated 1 to 5 years after acute illness. The control group consisted of partners or proxies of patients. Neurologic examination was performed and cognitive domains were tested with the Vienna Test System Cognitive Basic Assessment Test set (VTS COGBAT). The Research and Development (RAND)-36 and adapted Cognitive and Emotional Consequences of Stroke (CLCE)-24 questionnaires assessed perceived cognitive functioning and quality of life. Differences between group scores were tested with multivariate analyses of variance. Results: A total of 80 pneumococcal meningitis patients and 69 controls were evaluated. After a median of 2 years (interquartile range, 2-3) after acute illness, 27 (34%) of 79 patients had persistent neurologic sequelae, most commonly hearing loss (21/79, 27%). On overall neuropsychologic evaluation, patients performed worse than the controls (MANCOVA; p 0.008), with alertness (z score -0.33, p 0.011) and cognitive flexibility (z score -0.33, p 0.027) as the most affected domains. Cognitive impairment was present in 11 (14%) of 79 patients. CLCE-24 questionnaires revealed cognitive impairment on all domains, most commonly for cognitive speed (53/75, 71%), attention (45/75, 60%) and memory (46/75, 61%). Patients had lower quality-of-life scores than controls (item physical functioning, (median) patients vs. controls, 80 vs. 95, p &lt; 0.001; social functioning, (median) 81 vs. 100, p 0.003; perceived health, (mean) 59 vs. 70, p 0.005), which correlated with cognitive complaints (R = 0.66, p &lt; 0.001). Conclusions: Adults after pneumococcal meningitis are at high risk of long-term neurologic and neuropsychologic deficits impairing daily life activities and quality of life. abstract_id: PUBMED:33266160 Health-Related Quality of Life in Stroke Survivors in Relation to the Type of Inpatient Rehabilitation in Serbia: A Prospective Cohort Study. Background and objectives: Health-related quality of life after stroke is an important public health issue. 
The objective of the study was to investigate the relationship between perceived health-related quality of life in stroke survivors and the type of inpatient rehabilitation. Materials and Methods: Using a random selection method out of a total of 688 patients, every fourth survivor who had a stroke in the period from 1 January 2017 to 31 December 2019 was selected from the admission protocol of the Clinic for Rehabilitation and Physical Medicine of the Clinical Centre Niš, Serbia. A total of 160 first-ever stroke survivors were included (80 underwent additional inpatient rehabilitation and 80 underwent only inpatient rehabilitation in a tertiary health institution) in a twelve-month prospective study. The EuroQol-5 Dimension (EQ5D) questionnaire and Stroke Impact Scale were used for the assessment. Multivariate linear regression analysis was done. Results: Multivariate linear regression analysis showed that additional inpatient rehabilitation from six up to eight weeks after discharge was significantly associated with better self-reported health condition by 3.9 times (from 1.9 to 8.2), significantly decreased the ranks of EQ5D by 1.78 times (from 1.02 to 3.11), and showed a higher health-related quality of life. We determined a significant increase of strength, emotions, mobility, and participation role in survivors who underwent additional inpatient rehabilitation compared with those who did not. Conclusions: There was a significant difference in health-related quality of life perceived by stroke survivors who underwent additional hospital rehabilitation in relation to those who underwent only inpatient rehabilitation. abstract_id: PUBMED:32953912 Investigation of the quality of life of patients with hypertension in health centers. Introduction: Hypertension is one of the most critical factors for chronic diseases such as cardiovascular disease, stroke, arrhythmias, heart failure, and renal disease. The aim of this study was to evaluate the quality of life and its related factors in hypertensive patients. Materials And Methods: This was a descriptive study. The sampling method was simple random sampling: 137 patients were selected from the lists of existing patients at ten health centers. Data were collected using the World Health Organization Quality of Life-BREF questionnaire (Short Form-26). After collecting the questionnaires, the data analysis was carried out using SPSS software and using statistical methods (e.g., number, percentage, mean, standard deviation, and MANOVA tests). Results: The mean scores were 52.82 for the physical domain, 50.26 for the psychological domain, 48.33 for the social domain, 46.1 for the environmental domain, and 49.60 for total quality of life. Among the demographic variables, the education level variable (P = 0.000), job (P = 0.013), and the duration of hypertension (P = 0.039) were significantly correlated with the quality of life of patients. The "level of education" variable is significantly correlated with the physical domain (P = 0.000), psychological domain (P = 0.000), social domain (P = 0.000), and environmental domain (P = 0.000) of the quality of life. The "job status" variable is only significantly correlated with the social domain of the quality of life (P = 0.005). The "duration of hypertension" variable is only significantly associated with the physical domain of the quality of life (P = 0.011).
Conclusion: The findings of the present study show that demographic variables such as educational level, occupation, and duration of hypertension have a significant relationship with the quality of life domains of hypertensive patients. It is recommended that the principled education of patients is a step toward the improvement of the quality of life of patients. abstract_id: PUBMED:35432611 Quality of Life of Post-stroke Patients. Introduction: Stroke is a disease whose consequences have a considerable impact on the quality of the patient's life. It is a widespread disease that has a disabling impact on life and, in addition to physical changes, brings about a number of psychological and cognitive processes. Goal: The goal of the study was to identify and describe the quality of life of post-stroke patients. Methodology: The study design was quantitative. A questionnaire of the authors' own design and the SF-36 questionnaire were used to obtain the data. Results: Significant differences in patient quality of life were identified in relation to patient gender. Moreover, the quality of life in all individual SF-36 dimensions, except for mental health, deteriorated with age. With regard to occupational placement, employed respondents gave the highest evaluation of quality of life according to SF-36 and old-age pensioners the lowest. The analysis shows that quality of life in individual dimensions is positively influenced by respondents' higher education. The evaluation in individual dimensions improves with the time that has passed since the stroke. Conclusion: The quality of life of post-stroke patients deteriorates with age. The deteriorating level of patient quality of life in older age requires programmes that include assessments and interventions that lead to the treatment of these patients. abstract_id: PUBMED:27692991 Chronic brain damage in sickle cell disease and its relation with quality of life. Background And Objective: Sickle cell anaemia causes progressive organ damage. The objective is to describe school performance of patients with sickle cell anaemia and their clinical parameters and quality of life that may have an influence. The hypothesis is that if school alterations occur without other objective data, additional factors must be present besides the disease itself. Patients And Methods: Transversal study performed in November 2015 considering analytical variables, complications and neuroradiological images of children with sickle cell anaemia, and family survey on school performance and quality of life. Results: Median age was 6.8 years and 78% were diagnosed at birth. Sixty patients were included. School performance was altered in 51% of cases and was related to nocturnal hypoxemia. Acute stroke incidence was 6.7%. Transcranial ultrasound was abnormal in 4% of cases and magnetic resonance imaging in 16% of cases. Quality of life showed pathological findings in all areas and the low values increased proportionally in older ages. The stroke affected the physical and social sphere, and lung disease affected the physical and emotional spheres. Conclusions: Poor school performance affects half of the patients and it is related to nocturnal hypoxemia, although other socio-cultural factors may have an influence. Quality of life is affected in most of these cases independently of academic results. The absence of alterations in neuroimaging or the apparent lack of severe clinical parameters do not mean that quality of life and schooling are normal. 
abstract_id: PUBMED:32811355 Quality of life and social support: Perspectives of Saudi Arabian stroke survivors. Given that stroke is an indispensable health burden in Saudi Arabia and around the world, great importance has been attached on studies of social support and other factors that could improve the quality of life of stroke survivors. Perceptions of quality of life and social support may vary depending on patients' cultural and societal background. This research assessed the quality of life and social support of community-dwelling Saudis who survived stroke. A quantitative study was performed among 123 Saudi stroke survivors. Questionnaire-guided interviews measuring social support and quality of life were performed, and the multivariate effects of predictor variables on the four domains of quality of life were determined through multivariate multiple regression analysis. Among the dimensions of social support, support from family members had the highest average, whereas support from friends had the lowest. The environmental domain of quality of life was perceived to be the best aspect, whereas physical health was perceived to be the poorest. Multivariate analysis revealed that age, gender, employment status, monthly family income, type of community, education, type of stroke, side of stroke and support from significant others had multivariate influences on the domains of quality of life. Several sociodemographic and disease-related variables and social support influence patients' quality of life. The study adds critical knowledge as to how Arab stroke survivors perceive their quality of life and social support. Ensuring that stroke survivors receive adequate social support is imperative because it can improve their quality of life. Answer: Yes, there is a relation between neuropsychologic variables and quality of life after stroke. Studies have shown that neuropsychological assessments can predict quality of life outcomes in stroke patients. For instance, poor performance on the Trailmaking Test (TMT) B, which measures spatiotemporal and/or sequential aspects of behavior, was found to be a predictor of poor quality of life (QOL) in stroke patients (PUBMED:11588738). Additionally, cognitive impairments, such as those affecting memory, praxis, language, and executive function, have been associated with lower quality of life in elderly individuals, suggesting that neuropsychological deficits can impact QOL (PUBMED:19237031). Lower extremity pain, which may be associated with poorer motor recovery and impaired activities of daily living, has also been found to significantly predict cognitive functions, depression, and all subscales of the Short Form-36 Questionnaire (SF-36), except for emotional role and vitality, indicating that pain conditions can influence both clinical variables and health-related quality of life in stroke survivors (PUBMED:27237469). Moreover, long-term neurologic and cognitive outcomes, such as persistent neurologic sequelae and cognitive impairment, have been shown to impair daily life activities and quality of life in adults after pneumococcal meningitis, which can be a complication of stroke (PUBMED:32006689). The type of inpatient rehabilitation received by stroke survivors has also been linked to differences in perceived health-related quality of life, with additional inpatient rehabilitation being associated with better self-reported health conditions (PUBMED:33266160). 
In summary, neuropsychologic variables, including cognitive and motor impairments, as well as pain conditions and the type of rehabilitation received, are related to the quality of life after stroke. These findings underscore the importance of addressing neuropsychological deficits and providing adequate rehabilitation to improve the quality of life for stroke survivors.
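Several of the studies above (e.g., PUBMED:33266160, PUBMED:32953912, PUBMED:32811355) relate demographic and clinical predictors to quality-of-life scores through multivariate regression. The sketch below shows, on entirely synthetic data, how such a linear model is commonly fit in Python with statsmodels; every variable name and coefficient is a hypothetical placeholder, not data from the cited studies.

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Entirely synthetic data: 100 hypothetical stroke survivors.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "age": rng.normal(65, 10, 100),
    "pain_intensity": rng.uniform(0, 10, 100),    # e.g., a 0-10 numeric rating scale
    "depression_score": rng.uniform(0, 30, 100),  # e.g., a depression inventory score
})
# Synthetic SF-36-like physical functioning score (0-100) with noise.
df["qol_physical"] = (
    90 - 0.3 * df["age"] - 2.5 * df["pain_intensity"]
    - 0.8 * df["depression_score"] + rng.normal(0, 5, 100)
).clip(0, 100)

X = sm.add_constant(df[["age", "pain_intensity", "depression_score"]])
model = sm.OLS(df["qol_physical"], X).fit()
print(model.summary())  # coefficients, p-values and R-squared, analogous to the reported analyses

The point is only to make the reported analysis type concrete; the studies above additionally used instruments such as the EQ5D, SIP and SF-36 and their own covariate sets.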
Instruction: Is pT2 subclassification feasible to predict patient outcome in colorectal cancer? Abstracts: abstract_id: PUBMED:21107740 Is pT2 subclassification feasible to predict patient outcome in colorectal cancer? Background: This study aimed to evaluate the prognostic impact of pT2 subclassification according to the depth of muscularis propria (MP) invasion and to explore the clinicopathologic factors correlated with lymph node metastasis (LNM) and postoperative hematogenous metastasis in pT2 colorectal cancer. Methods: A total of 317 patients with pT2 colorectal cancer were reviewed. pT2a represents the infiltration of the inner circumferential layer of the MP, and pT2b represents the infiltration of the outer longitudinal layer of the MP. Clinicopathologic factors and overall survival rates were compared in patients with pT2a and pT2b stage cancers. Multivariate analysis was performed to identify the significantly important prognostic factors. Univariate and multivariate analyses were performed, respectively, to identify the significantly important clinicopathologic factors correlated with LNM and postoperative hematogenous metastasis in pT2 colorectal cancer. Results: According to the depth of MP invasion, 107 patients were classified as pT2a and 210 patients were classified as pT2b. Among them, there were 55 patients with LNM and 34 patients with postoperative hematogenous metastasis. There was a significant difference in most clinicopathologic features between patients in the pT2a and pT2b stages. Multivariate analysis identified pN stage (P &lt; .001) and tumor location (P = .036) as independent factors affecting the prognosis. However, no apparent difference was observed between pT2a and pT2b cancers. Univariate and multivariate analyses uniformly identified lymphovascular invasion (P = .035) and the depth of MP invasion (P = .005) as significantly correlated with LNM. Multivariate analysis found that tumor location (P = .021) and the presence or absence of LNM (P &lt; .001) were important factors affecting postoperative hematogenous metastasis. Conclusions: In pT2 colorectal cancer treated with R0 surgery, there is a high risk of LNM in deep MP invasion versus superficial MP invasion. The pT2 subclassification system had no significant advantage in identifying a different prognosis, except for predicting the LNM before surgery. Rectal cancer and the presence of LNM were high-risk factors resulting in hematogenous metastasis postoperatively. abstract_id: PUBMED:36757433 D3 lymph node dissection improves the survival outcome in patients with pT2 colorectal cancer. Background: The extent of lymphadenectomy in patients with pT2 colorectal cancer (CRC) remains controversial. This study aimed to elucidate the effects of D3 and D2 lymph node dissection (LND) on survival in patients diagnosed with pT2 CRC. Methods: This was a retrospective cohort study from a high-volume cancer center in Japan. From April 2007 to December 2020, 6273 patients with primary CRC were included in the study; among these, 616 patients diagnosed with pT2 CRC underwent radical colorectal resection. Propensity score matching (PSM) was applied to balance potential confounding factors, and a total of 104 matched pairs were extracted from the entire cohort. Independent risk factors associated with prognosis were determined by Cox regression analysis. The main outcome measures were overall survival (OS) and cancer-specific survival (CSS).
Results: Before PSM, there was a statistically significant difference across the cohort in OS and CSS (p = 0.000 and 0.013) between D3 and D2 LND groups; the estimated hazard ratio (HR) was 2.2 (95% confidence interval (CI), 1.1-4.4, p = 0.031) for OS in the D3 LND and 4.4 (95% CI, 1.7 to 11, p = 0.0027) for CSS (p = 0.013). There was also a significant difference (p = 0.024) in OS between the D3 and D2 LND groups in the matched cohort, with an estimated HR for OS of 3.3 (95% CI, 1.2 to 9.1, p = 0.024) and an estimated HR for CSS of 7.2 (95% CI, 1.6 to 33, p = 0.011). Conclusions: D3 LND had a significant survival advantage in the treatment of pT2 CRC. The results of this study provide a theoretical basis for the application of D3 LND in radical surgery for preoperative T2 CRC. abstract_id: PUBMED:33319157 Proposal of a modified subclassification system for stage III colorectal cancer: A multi-institutional retrospective analysis. Aim: The prognostic value of the stage III subclassification system based on the Japanese Classification of Colorectal, Appendiceal, and Anal Carcinoma has not yet been clarified. This study aimed to develop a modified system with optimal risk stratification and compare its performance with the current staging systems. Methods: Clinicopathological data from 6855 patients with stage III colorectal cancers who underwent D3 dissection were collected from a nationwide multicenter database. After determining patient survival rates across 13 divisions based on pathological N stage (N1, N2a, and N2b/N3) and tumor depth (T1, T2, T3, T4a, and T4b), except for T1N2a and T1N2b/N3 due to the small number, we categorized patients into three groups and developed a trisection staging system according to the Akaike information criterion. We then compared the Akaike information criterion of the developed system with those of the current staging systems. Results: The T1N1[rank, 1] division (98.5%) had the most favorable prognosis in terms of 5-year cancer-specific survival, followed by T2N1[2] (93.9%), T2N2a[3] (92.0%), T3N1[4] (87.0%), T3N2a[5] (78.8%), T4aN1[6] (78.7%), T2N2b/N3[7] (77.8%), T4aN2a[8] (75.2%), T4bN1[9] (73.5%), T3N2b/N3[10] (64.7%), T4aN2b/N3[11] (61.5%), T4bN2b/N3[12] (43.0%), and T4bN2a[13] (42.5%). Compared to the categorizations of the Japanese and tumor-node-metastasis systems (Akaike information criterion, 22 684.6 and 22 727.1, respectively), the following stage categorizations were proven to be the most clinically efficacious: T1N1[1]-T3N1[4], T3N2a[5]-T4bN1[9], and T3N2b/N3[10]-T4bN2a[13] (Akaike information criterion, 22 649.2). Conclusion: The proposed modified system may be useful in the risk stratification of patients with stage III colorectal cancer who had undergone D3 dissection. abstract_id: PUBMED:12749231 Oncologic outcome of laparoscopic surgery for T1 and T2 colorectal carcinoma. Background/aims: Laparoscopic surgery for colorectal carcinoma remains controversial because of the technical difficulties and uncertainty regarding the long-term oncologic outcome after laparoscopic surgery. The objective of this study was to evaluate the feasibility for the laparoscopic surgery in the treatment of pT1 and pT2 colorectal carcinoma. Methodology: A review was performed of a prospective registry of 226 patients who underwent curative laparoscopic resection for pT1 and pT2 colorectal carcinoma between December, 1992 and December, 2001. Patient demographics and outcomes were recorded prospectively. Results: The median follow-up was 43 months. 
Three patients (2.0%) in the pT1 group and 3 patients (3.9%) in the pT2 group developed recurrence of carcinoma. The expected five-year survival and disease-free survival rates in the pT1 group were 98.9% and 97.6%, respectively, whereas they were 93.6% and 93.4% in the pT2 group. No patient had port-site or peritoneal recurrence during the follow-up period. Conclusions: The findings of current study demonstrate that oncologic outcome of laparoscopic surgery for patients with pT1 and pT2 colorectal carcinoma appear to be comparable with conventional surgery. Laparoscopic surgery is oncologically appropriate at least for patients with pT1 and pT2 colorectal carcinoma. abstract_id: PUBMED:26601626 Clinical significance and validity of the subclassification for colorectal laterally spreading tumor granular type. Background And Aim: Colorectal laterally spreading tumor granular type (LST-G) is generally divided into two subtypes based on morphology. Here, we retrospectively investigated the clinical significance of a concrete, objective LST-G subclassification. Methods: This study examined 636 consecutive cases that were resected endoscopically or surgically. LST-G was subclassified as follows: Type 1, a lesion with homogenous uniform granules with uniform (&lt;5 mm) nodules; Type 2, a lesion with granules and small nodules (≥5 mm, &lt;10 mm); or Type 3, a lesion accompanied by large nodules (≥10 mm). For the validation study, 194 images were compiled from 97 cases investigated using conventional colonoscopy and chromoendoscopy with indigo carmine dye spraying. Images were distributed in a randomized order to students without prior endoscopy experience, less-experienced endoscopists (LEE group), and highly experienced endoscopists (HEE group). Diagnostic accuracy and interobserver agreement were then evaluated. Results: There was no submucosal invasion in Type 1 lesions. The incidence of deep submucosal invasive carcinoma was higher for Type 3 lesions than for Type 2 lesions. Interobserver agreement was good in each group. Diagnostic accuracy was higher in the HEE group than in the student and LEE groups. Chromoendoscopy had a higher accuracy rate than conventional colonoscopy in the LEE and HEE groups (LEE, 0.74 vs 0.69, P &lt; 0.05; HEE, 0.84 vs 0.78, P &lt; 0.05). Conclusions: This subclassification of LST-G according to the diameters of granules and nodules was both useful for choosing therapeutic strategies in the clinical setting and universally applicable. abstract_id: PUBMED:37610468 A Content Framework of a Novel Patient-Reported Outcome Measure for Detecting Early Adverse Events After Major Abdominal Surgery. Background: Systematic health monitoring with patient-reported outcome instruments may predict post-discharge complications after major surgery. The objective of this study was to conceptualize a novel patient-reported outcome instrument for detecting early adverse events within two weeks of discharge after major emergency abdominal surgery and colorectal cancer surgery. Methods: This study was conducted in two phases. (1) An exhaustive health concept pool was generated using systematic content analysis of existing patient-reported outcome measures (N = 31) and semi-structured interviews of readmitted patients (N = 49) and health professionals (N = 10). Concepts were categorized into three major domains: 'Symptoms,' 'functional status,' and 'general health perception.' 
We calculated the frequency of each health concept as the proportion of patients, who experienced the respective concept prior to readmission. (2) Colorectal cancer surgeons (N = 13) and emergency general surgeons (N = 12) rated the relevance of each health concept (1 = irrelevant, 5 = very relevant) in the context of detecting post-discharge adverse events. We selected concepts with either a high mean relevance score (≥ 4) or a combination of moderate mean relevance score and high patient-reported frequency (≥ 3 and ≥ 20% or ≥ 2.5 and ≥ 50%, respectively). Results: Content analysis of existing items with additions from patients and experts resulted in 58 health concepts, of which the majority were distinct symptoms (N = 40). The selection procedure resulted in 29 patient-reported health concepts relevant for detecting adverse events after discharge. Conclusion: The outlined framework provides content validity for future patient-reported outcome instruments detecting adverse events in the early post-discharge period after major emergency abdominal surgery and colorectal cancer surgery. abstract_id: PUBMED:30880455 Understanding patient-reported outcome measures in colorectal cancer. Quality of life has become increasingly regarded as a key outcome measurement for cancer patients. Patient-reported outcome measures (PROMs) represent the tools used to ascertain self-reported quality of life. This review provides a summary of the literature regarding the use of PROMs in colorectal cancer and evaluates the advantages and limitations of generic and disease specific questionnaires that can be utilized in clinical practice. Factors that influence PROMs are outlined, including cancer characteristics, patient factors and treatment methods. Finally, future directions for the use of PROMs in colorectal cancer to inform healthcare delivery at an individual- and systems-based level are discussed. abstract_id: PUBMED:36081311 Collection and Utilization of Patient-Reported Outcome Measures in a Colorectal Surgery Clinic. Introduction: The routine collection of patient-reported outcome measures (PROMs) promises to improve patient care. However, in colorectal surgery, PROMs are uncommonly collected outside of clinical research studies and rarely used in clinical care. We designed and implemented a quality improvement project with the goals of routinely collecting PROMs and increasing the frequency that PROMs are utilized by colorectal surgeons in clinical practice. Methods: This mixed-methods, quality improvement project was conducted in the colorectal surgery clinic of a tertiary academic medical center. Patients were administered up to five PROMs before each appointment. PROM completion rates were measured. Additionally, we performed two educational interventions to increase utilization of our electronic health record's PROM dashboard by colorectal surgeons. Utilization rates and attitudes toward the PROM dashboard were measured. Results: Overall, patients completed 3600 of 3977 (90.9%) administered PROMs during the study period. At baseline, colorectal surgeons reviewed 6.7% of completed PROMs. After two educational interventions, this increased to 39.3% (P = 0.004). Colorectal surgeons also felt that the PROM dashboard was easier to use. Barriers to greater PROM dashboard utilization included poor user interface/user experience and a perceived lack of knowledge, time, and relevance. 
Conclusions: The collection of PROMs in colorectal surgery clinics is feasible and can result in high PROM completion rates. Educational interventions can improve the utilization of PROMs by colorectal surgeons in clinical practice. Our experience collecting PROMs through this quality improvement initiative can serve as a template for other colorectal surgery clinics interested in collecting and utilizing data from PROMs. abstract_id: PUBMED:27160353 Distant metastases in colorectal carcinoma: A proposal for a new M1 subclassification. Introduction: In 2010, the seventh Tumour-Node-Metastasis (TNM) cancer staging system of the International Union for Cancer Control (UICC) and the American Joint Committee of Cancer (AJCC) introduced a subdivision of M1 in the TNM classification of colorectal carcinomas. For the eighth TNM edition which will be released in the autumn of 2016 and will become effective in January 2017 new proposals are appreciated. The aim of our study was to define a new and better proposal for M1 subclassification. Methods: In a total of 814 patients with stage IV colorectal carcinoma treated between 1995 and 2013 prognostic factors were analysed in univariate and multivariate analyses. Results: Advanced age, treatment in the earlier period 1995-2003, involvement of multiple metastatic sites, and non-curative resection were found to be independent prognostic factors. In patients with only one metastatic site, survival was good in patients with liver or lung metastasis, moderate in patients with metastasis of the peritoneum or non-regional lymph nodes and poor in patients with other rarely metastatic involved organs. The new proposal defines M1a, Metastasis confined to one organ: liver or lung (2-year survival 51.6%); M1b, Metastasis confined to one organ: peritoneum or non-regional lymph nodes, or Metastasis confined to liver plus lung (2-year survival 39.4%); and M1c, Metastasis confined to one organ: all other sites, or Metastasis in more than one organ, except liver plus lung (2-year survival 21.6%). Conclusion: The new proposal can identify three prognostic groups in stage IV colorectal carcinomas with significant differences in survival. abstract_id: PUBMED:31377032 Why focus on patient-reported outcome measures in older colorectal cancer patients? Colorectal cancer is the most common cancer among older persons, with surgery the recommended treatment for many. With a growing emphasis on value-based care, there is an increasing desire to implement patient-reported outcome measures. Patient-reported outcome measures (PROMs) are the tools and instruments used to collect patient-reported symptoms, functional status, and quality of life. Monitoring and addressing PROMs has been shown to improve patient-centered care in surgical oncology populations and survival in patients with advanced cancer. For older patients, functional status and quality of life are valuable outcomes of cancer treatment. However, experience with PROMs for older patients, and specifically older colorectal cancer patients, is limited and heterogenous. The International Consortium for Health Outcomes Measurement has recommended a set of outcome measures for colorectal cancer patients and a set of outcome measures for older patients, which have considerable overlap, providing a starting point for future investigations of PROMs for older colorectal cancer patients. 
While many questions remain, it is imperative that PROMs for older colorectal cancer patients be considered for their potential to improve cancer outcomes in this population. Answer: The pT2 subclassification in colorectal cancer, which differentiates between pT2a (infiltration of the inner circumferential layer of the muscularis propria) and pT2b (infiltration of the outer longitudinal layer of the muscularis propria), has been evaluated for its prognostic impact. According to a study with 317 patients with pT2 colorectal cancer, there was a significant difference in clinicopathologic features between patients with pT2a and pT2b stages. However, the subclassification system did not show a significant advantage in identifying different prognoses, except for predicting lymph node metastasis (LNM) before surgery. Deep muscularis propria (MP) invasion was associated with a higher risk of LNM compared to superficial MP invasion. Additionally, rectal cancer and the presence of LNM were identified as high-risk factors for postoperative hematogenous metastasis (PUBMED:21107740). In conclusion, while the pT2 subclassification may not be significantly advantageous for predicting overall patient outcomes, it appears to be feasible for predicting the risk of LNM before surgery in patients with pT2 colorectal cancer.
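The survival comparisons summarized above (for example, D3 versus D2 lymph node dissection in PUBMED:36757433, or five-year survival by pT stage in PUBMED:12749231) rest on standard time-to-event methods. The following is a minimal sketch on fabricated data using the Python lifelines package; the group labels, follow-up times and effect sizes are illustrative assumptions, not the studies' data or code.

import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Fabricated example: follow-up time (months), death indicator and dissection group.
rng = np.random.default_rng(2)
n = 200
group = rng.integers(0, 2, n)                                # 0 = D2, 1 = D3 (hypothetical)
time = rng.exponential(scale=np.where(group == 1, 80, 55))   # synthetic survival times
event = (time &lt; 60).astype(int)                              # deaths observed within follow-up
time = np.minimum(time, 60)                                  # administrative censoring at 60 months
df = pd.DataFrame({"time": time, "event": event, "d3_dissection": group})

# Kaplan-Meier estimate per group.
for label, sub in df.groupby("d3_dissection"):
    kmf = KaplanMeierFitter().fit(sub["time"], sub["event"], label=f"D{2 + label} LND")
    print(f"D{2 + label} LND median survival:", kmf.median_survival_time_)

# Cox proportional hazards model, as in the reported multivariable analyses.
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()  # hazard ratio with 95% CI for d3_dissection

Propensity score matching, as applied in PUBMED:36757433 before the Cox model, is a separate preprocessing step and is not shown here.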
Instruction: Magnetic Resonance Imaging Features of the Nigrostriatal System: Biomarkers of Parkinson's Disease Stages? Abstracts: abstract_id: PUBMED:27035571 Magnetic Resonance Imaging Features of the Nigrostriatal System: Biomarkers of Parkinson's Disease Stages? Introduction: Magnetic resonance imaging (MRI) can be used to identify biomarkers in Parkinson's disease (PD); R2* values reflect iron content related to high levels of oxidative stress, whereas volume and/or shape changes reflect neuronal death. We sought to assess iron overload in the nigrostriatal system and characterize its relationship with focal and overall atrophy of the striatum in the pivotal stages of PD. Methods: Twenty controls and 70 PD patients at different disease stages (untreated de novo patients, treated early-stage patients and advanced-stage patients with L-dopa-related motor complications) were included in the study. We determined the R2* values in the substantia nigra, putamen and caudate nucleus, together with striatal volume and shape analysis. We also measured R2* in an acute MPTP mouse model and in a longitudinal follow-up two years later in the early-stage PD patients. Results: The R2* values in the substantia nigra, putamen and caudate nucleus were significantly higher in de novo PD patients than in controls. Early-stage patients displayed significantly higher R2* values in the substantia nigra (with changes in striatal shape), relative to de novo patients. Measurements after a two-year follow-up in early-stage patients and characterization of the acute MPTP mouse model confirmed that R2* changed rapidly with disease progression. Advanced-stage patients displayed significant atrophy of striatum, relative to earlier disease stages. Conclusion: Each pivotal stage in PD appears to be characterized by putative nigrostriatal MRI biomarkers: iron overload at the de novo stage, striatal shape changes at early-stage disease and generalized striatal atrophy at advanced disease. abstract_id: PUBMED:20736190 Magnetic resonance imaging markers of Parkinson's disease nigrostriatal signature. One objective of modern neuroimaging is to identify markers that can aid in diagnosis, disease progression monitoring and long-term drug impact analysis. In this study, Parkinson-associated physiopathological modifications were characterized in six subcortical structures by simultaneously measuring quantitative magnetic resonance parameters sensitive to complementary tissue characteristics (i.e. volume atrophy, iron deposition and microstructural damage). Thirty patients with Parkinson's disease and 22 control subjects underwent 3-T magnetic resonance imaging with T₂*-weighted, whole-brain T₁-weighted and diffusion tensor imaging scans. The mean R₂* value, mean diffusivity and fractional anisotropy in the pallidum, putamen, caudate nucleus, thalamus, substantia nigra and red nucleus were compared between patients with Parkinson's disease and control subjects. Comparisons were also performed using voxel-based analysis of R₂*, mean diffusivity and fractional anisotropy maps to determine which subregion of the basal ganglia showed the greater difference for each parameter. Averages of each subregion were then used in a logistic regression analysis. Compared with control subjects, patients with Parkinson's disease displayed significantly higher R₂* values in the substantia nigra, lower fractional anisotropy values in the substantia nigra and thalamus, and higher mean diffusivity values in the thalamus. 
Voxel-based analyses confirmed these results and, in addition, showed a significant difference in the mean diffusivity in the striatum. The combination of three markers was sufficient to obtain a 95% global accuracy (area under the receiver operating characteristic curve) for discriminating patients with Parkinson's disease from controls. The markers comprising discriminating combinations were R₂* in the substantia nigra, fractional anisotropy in the substantia nigra and mean diffusivity in the putamen or caudate nucleus. Remarkably, the predictive markers involved the nigrostriatal structures that characterize Parkinson's physiopathology. Furthermore, highly discriminating combinations included markers from three different magnetic resonance parameters (R₂*, mean diffusivity and fractional anisotropy). These findings demonstrate that multimodal magnetic resonance imaging of subcortical grey matter structures is useful for the evaluation of Parkinson's disease and, possibly, of other subcortical pathologies. abstract_id: PUBMED:25745453 Magnetic resonance imaging markers for early diagnosis of Parkinson's disease. Parkinson's disease (PD) is a neurodegenerative disorder characterized by selective and progressive degeneration, as well as loss of dopaminergic neurons in the substantia nigra. In PD, approximately 60-70% of nigrostriatal neurons are degenerated and 80% of the striatal dopamine content is reduced before the diagnosis can be established according to widely accepted clinical diagnostic criteria. This condition describes a stage of disease called "prodromal", where non-motor symptoms, such as olfactory dysfunction, constipation, rapid eye movement behaviour disorder and depression, precede the motor signs of PD. Detection of the prodromal phase of PD is becoming an important goal for determining the prognosis and choosing a suitable treatment strategy. In this review, we present some non-invasive instrumental approaches that could be useful to identify patients in the prodromal phase of PD or in an early clinical phase, when the first motor symptoms begin to be apparent. Conventional magnetic resonance imaging (MRI) and advanced MRI techniques, such as magnetic resonance spectroscopy imaging, diffusion-weighted and diffusion tensor imaging and functional MRI, are useful to differentiate early PD with initial motor symptoms from atypical parkinsonian disorders, thus making early diagnosis easier. Functional MRI and diffusion tensor imaging techniques can show abnormalities in the olfactory system in prodromal PD. abstract_id: PUBMED:32861103 Multimodal dopamine transporter (DAT) imaging and magnetic resonance imaging (MRI) to characterise early Parkinson's disease. Idiopathic Parkinson's disease (PD), the second most common neurodegenerative disorder, is characterised by the progressive loss of dopaminergic nigrostriatal terminals. Currently, in early idiopathic PD, dopamine transporter (DAT)-specific imaging assesses the extent of striatal dopaminergic deficits, and conventional magnetic resonance imaging (MRI) of the brain excludes the presence of significant ischaemic load in the basal ganglia as well as signs indicative of other forms of Parkinsonism. In this article, we discuss the use of multimodal DAT-specific and MRI protocols for insight into the early pathological features of idiopathic PD, including: structural MRI, diffusion tensor imaging, nigrosomal iron imaging and neuromelanin-sensitive MRI sequences.
These measures may be acquired serially or simultaneously in a hybrid scanner. From current evidence, it appears that both nigrosomal iron imaging and neuromelanin-sensitive MRI combined with DAT-specific imaging are useful to assist clinicians in diagnosing PD, while conventional structural MRI and diffusion tensor imaging protocols are better suited to a research context focused on characterising early PD pathology. We believe that in the future multimodal imaging will be able to characterise prodromal PD and stratify the clinical stages of PD progression. abstract_id: PUBMED:19959017 Brain magnetic resonance imaging techniques in the diagnosis of parkinsonian syndromes. Parkinson disease (PD) is the most common neurodegenerative cause of parkinsonism, followed by progressive supranuclear palsy and multiple system atrophy (MSA). Despite published consensus operational criteria for the diagnosis of PD and the various atypical parkinsonian disorders (APD) such as progressive supranuclear palsy, Parkinson variant of MSA, and corticobasal degeneration, differentiation of these clinical entities may be challenging, particularly in the early stages of the disease. Diagnosis of PD and its distinction from APD and symptomatic parkinsonism is crucial for the clinical evaluation, as these disorders differ in prognosis, treatment response, and molecular pathogenesis. Despite limitations the different modern magnetic resonance (MR) techniques have undoubtedly added to the differential diagnosis of neurodegenerative parkinsonism. This article focuses on static or structural conventional MR imaging techniques including standard T2-weighted, T1-weighted, and proton-density sequences, as well as different advanced techniques, including methods to assess regional cerebral atrophy quantitatively such as magnetic resonance volumetry, diffusion tensor and diffusion-weighted imaging, and magnetization transfer imaging, to assist in the differential diagnosis of neurodegenerative parkinsonian disorders. abstract_id: PUBMED:11563438 Magnetic resonance imaging and magnetic resonance spectroscopy in dementias. This article reviews recent studies of magnetic resonance imaging and magnetic resonance spectroscopy in dementia, including Alzheimer's disease, frontotemporal dementia, dementia with Lewy bodies, idiopathic Parkinson's disease, Huntington's disease, and vascular dementia. Magnetic resonance imaging and magnetic resonance spectroscopy can detect structural alteration and biochemical abnormalities in the brain of demented subjects and may help in the differential diagnosis and early detection of affected individuals, monitoring disease progression, and evaluation of therapeutic effect. abstract_id: PUBMED:16003111 An update on conventional and advanced magnetic resonance imaging techniques in the differential diagnosis of neurodegenerative parkinsonism. Purpose Of Review: The clinical differentiation between Parkinson's disease and atypical parkinsonian disorders (APD) remains a challenge for every neurologist. Conventional magnetic resonance imaging (MRI) and different advanced MRI techniques offer the potential for objective criteria in the differential diagnosis of neurodegenerative parkinsonism. The aim of this article is to review the recent literature on the role of conventional and advanced MRI techniques in the differential diagnosis of neurodegenerative parkinsonian disorders. Recent Findings: An important role of MRI is the exclusion of symptomatic parkinsonism due to other pathologies. 
Over the past two decades, conventional MRI and different advanced MRI techniques, including proton magnetic resonance spectroscopy (1H-MRS), diffusion-weighted imaging (DWI), magnetization transfer imaging (MTI) and magnetic resonance volumetry (MRV), have been found to show abnormalities in the substantia nigra and basal ganglia, especially in APD. Recent studies using MRV, MTI, DWI and 1H-MRS to discriminate Parkinson's disease from APD are discussed extensively. Summary: Research findings suggest that novel MRI techniques such as MTI, DWI and MRV have superior sensitivity compared to conventional MRI in detecting abnormal features in neurodegenerative parkinsonian disorders. Whether these techniques will emerge as standard investigations in the work-up of patients presenting with parkinsonism requires further prospective magnetic resonance studies during early disease stages. abstract_id: PUBMED:35976320 Magnetic resonance and dopamine transporter imaging for the diagnosis of Parkinson's disease: a narrative review. Background: The diagnosis of Parkinson's disease (PD) can be challenging, especially in the early stages, despite its updated and validated clinical criteria. Recent developments in neuroimaging in PD, together with its consolidated role of excluding secondary and other neurodegenerative causes of parkinsonism, provide more confidence in the diagnosis across the different stages of the disease. This review highlights current knowledge and major recent advances in magnetic resonance and dopamine transporter imaging in aiding PD diagnosis. Objective: This study aims to review current knowledge about the role of magnetic resonance imaging and neuroimaging of the dopamine transporter in diagnosing Parkinson's disease. Methods: We performed a non-systematic literature review through the PubMed database, using the keywords "Parkinson", "magnetic resonance imaging", "diffusion tensor", "diffusion-weighted", "neuromelanin", "nigrosome-1", "single-photon emission computed tomography", "dopamine transporter imaging". The search was restricted to articles written in English, published between January 2010 and February 2022. Results: The diagnosis of Parkinson's disease remains a clinical diagnosis. However, new neuroimaging biomarkers hold promise for increased diagnostic accuracy, especially in earlier stages of the disease. Conclusion: Future validation of new imaging biomarkers brings the expectation of an increased neuroimaging role in the diagnosis of PD in the following years. abstract_id: PUBMED:27288741 The role of magnetic resonance imaging in the diagnosis of Parkinson's disease: a review. Parkinson's disease (PD) is the second most common neurodegenerative disease after Alzheimer's in elderly people. Different structural and functional neuroimaging methods play a great role in the early diagnosis of neurodegenerative diseases. This review discusses the role of magnetic resonance imaging (MRI) in the diagnosis of PD. MRI provides clinicians with structural and functional information of the human brain noninvasively. Advanced quantitative MRI techniques have shown promise for detecting pathological changes related to different stages of PD. Collectively, advanced MRI techniques at high and ultrahigh magnetic fields aid in better understanding of the nature and progression of PD. abstract_id: PUBMED:2357153 Magnetic resonance imaging in Parkinson's disease. Magnetic resonance imaging was utilized to examine changes in the pars compacta associated with Parkinson's disease.
We found a reduction in average width of the pars compacta in advanced- but not early-stage patients. Conversely, a significant difference in width (right minus left) was observed in early- but not late-stage patients. Width asymmetry may be a sensitive magnetic resonance imaging indicator for the early detection of Parkinson's disease when there is a predominance of unilateral clinical symptoms. Answer: Magnetic resonance imaging (MRI) features of the nigrostriatal system have been investigated as potential biomarkers for different stages of Parkinson's disease (PD). Studies have identified several MRI characteristics that may serve as indicators of PD progression: 1. Iron Overload: R2* values, which reflect iron content, have been found to be significantly higher in the substantia nigra, putamen, and caudate nucleus of de novo PD patients compared to controls. This iron overload is associated with high levels of oxidative stress and is considered a biomarker for the de novo stage of PD (PUBMED:27035571). 2. Striatal Shape Changes: In early-stage PD patients, there are significant changes in the shape of the striatum, along with increased R2* values in the substantia nigra. These changes are indicative of neuronal death and can be used to characterize early-stage disease (PUBMED:27035571). 3. Generalized Striatal Atrophy: Advanced-stage PD patients display significant atrophy of the striatum relative to earlier disease stages, suggesting that striatal atrophy could be a biomarker for advanced PD (PUBMED:27035571). 4. Additional MRI Markers: Other studies have reported that PD patients show higher mean diffusivity values in the thalamus, lower fractional anisotropy values in the substantia nigra and thalamus, and higher R2* values in the substantia nigra compared to control subjects. These markers, particularly when combined, have demonstrated high accuracy in discriminating PD patients from controls (PUBMED:20736190). 5. Multimodal Imaging: Combining different MRI parameters, such as R2*, mean diffusivity, and fractional anisotropy, has been shown to be useful in evaluating PD and could potentially be applied to other subcortical pathologies (PUBMED:20736190). 6. Early Diagnosis: Advanced MRI techniques, including magnetic resonance spectroscopy imaging, diffusion-weighted and diffusion tensor imaging, and functional MRI, have been useful in differentiating early PD from atypical parkinsonian disorders and in identifying abnormalities in the olfactory system in prodromal PD (PUBMED:25745453). 7.
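The R2* values discussed above are typically estimated by fitting a mono-exponential decay to multi-echo gradient-echo magnitude data. The snippet below is a minimal sketch of that fit, not the protocol of any cited study; the echo times and signal values are invented for illustration.

```python
import numpy as np

# Hypothetical echo times (s) and voxel magnitudes from a multi-echo
# gradient-echo acquisition; real values come from the scanner protocol.
te = np.array([0.005, 0.010, 0.015, 0.020, 0.025, 0.030])
signal = np.array([820.0, 640.0, 505.0, 400.0, 318.0, 252.0])

# Mono-exponential model: S(TE) = S0 * exp(-R2* * TE).
# Taking logs gives ln S = ln S0 - R2* * TE, so an ordinary least-squares
# line fit of ln(signal) against TE returns -R2* as the slope.
slope, intercept = np.polyfit(te, np.log(signal), 1)
r2_star = -slope            # s^-1
t2_star = 1.0 / r2_star     # s

print(f"R2* = {r2_star:.1f} s^-1, T2* = {t2_star * 1000:.1f} ms")
```

In group studies such as those summarized above, the fitted R2* values are then averaged over anatomically defined regions of interest (for example, the substantia nigra or putamen) before being compared between patients and controls.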
Instruction: Seriously ill hospitalized adults: do we spend less on older patients? Abstracts: abstract_id: PUBMED:35595375 Evolution of Investigating Informed Assent Discussions about CPR in Seriously Ill Patients. Context: Outcomes after cardiopulmonary resuscitation (CPR) remain poor. We have spent 10 years investigating an "informed assent" (IA) approach to discussing CPR with chronically ill patients/families. IA is a discussion framework whereby patients extremely unlikely to benefit from CPR are informed that unless they disagree, CPR will not be performed because it will not help achieve their goals, thus removing the burden of decision-making from the patient/family, while they retain an opportunity to disagree. Objectives: Determine the acceptability and efficacy of IA discussions about CPR with older chronically ill patients/families. Methods: This multi-site research occurred in three stages. Stage I determined acceptability of the intervention through focus groups of patients with advanced COPD or malignancy, family members, and physicians. Stage II was an ambulatory pilot randomized controlled trial (RCT) of the IA discussion. Stage III is an ongoing phase 2 RCT of IA versus attention control in in patients with advanced chronic illness. Results: Our qualitative work found the IA approach was acceptable to most patients, families, and physicians. The pilot RCT demonstrated feasibility and showed an increase in participants in the intervention group changing from "full code" to "do not resuscitate" within two weeks after the intervention. However, Stages I and II found that IA is best suited to inpatients. Our phase 2 RCT in older hospitalized seriously ill patients is ongoing; results are pending. Conclusions: IA is a feasible and reasonable approach to CPR discussions in selected patient populations. abstract_id: PUBMED:10809473 Pain and suffering in seriously ill hospitalized patients. Background: Previous studies had suggested a high prevalence of pain in hospitalized patients but had not specifically evaluated pain and other symptoms in seriously ill and older hospitalized patients. Objective: The SUPPORT and HELP studies were designed to (1) assess the frequency and severity of pain and other symptoms during hospitalization 2 and 6 months later, and before death; (2) identify factors associated with pain and other symptoms; and (3) test an intervention to improve pain. Design: An observational cohort and randomized controlled trial. Setting: Five major teaching hospitals in the US. Patients: Hospitalized patients aged 80 years and older or with one of nine serious illnesses. Intervention: Education of patients and family members about pain control, monitoring of patients' pain, and feedback about pain with treatment suggestions to nurses and physicians. Measurements: Data from the medical record and interview-based information about pain and other symptoms and preferences for care and symptom control from patients and family members. Results: Pain and other symptoms were frequent and often severe in seriously ill and older patients during hospitalization, at follow-up, and before death, even in those with diseases not traditionally associated with pain. There was wide variation in symptom experience across hospitals. Patients' preference for pain control was not associated with symptom experience. The intervention did not improve pain control. Conclusions: Control of pain and other symptoms remains an important medical and ethical issue. 
Routine monitoring of pain and other symptoms should be linked to treatment strategies aimed at combinations of symptoms and tested to assuage concerns about side effects. abstract_id: PUBMED:27091256 Leveraging the lived experience of surrogate decision makers of the seriously ill to develop a decision support intervention. Background: Surrogate decision makers of seriously ill patients are frequently asked to make healthcare decisions without evidence-based decision support. Aims: To describe the lived experiences of surrogate decision makers who have made advance care decisions and identify relevant features of a decision support intervention from their perspectives. Methods: A convenience sample of surrogate decision makers (n=10) of seriously ill patients discharged to an extended care facility within the past 6 months participated in focus group discussions. Data were analyzed using content analysis. Results: Three salient themes were identified: (1) limited preparation, (2) decisional and emotional burden, and (3) content and features of a decision support intervention. Conclusions: Surrogate decision makers of the seriously ill are not prepared for their new role, and experience decisional and emotional burden. Effective decision support interventions must address the decisional and emotional needs of surrogate decision makers to enhance the quality of their decisions. abstract_id: PUBMED:36727276 Language and pain predict persistent depression among seriously ill home care clients. Objectives: This study examined potential predictors of persistent depressive symptoms in a cohort of seriously ill older adults (aged 65+ years) receiving home care services. Methods: This was a retrospective cohort study using secondary data collected from the Resident Assessment Instrument for Home Care for all assessments completed between 2001 and 2020. The cohort included seriously ill individuals with depressive symptoms at baseline and who continued to have depressive symptoms on reassessment within 12 months (n = 8,304). Serious illness was defined as having severe health instability, a prognosis of less than 6 months, or a goal of care related to palliative care (PC) on admission to the home care program. Results: The mean age of the sample was 80.8 years (standard deviation [SD] = 7.7), 61.1% were female, and 82.1% spoke English as their primary language. The average length of time between assessments was 4.9 months (SD = 3.3). During that time, 64% of clients had persistent symptoms of depression. A multivariate logistic regression model found that language, pain, caregiver burden, and cognitive impairment were the most significant predictors of experiencing persistent depressive symptoms. Significance Of Results: Persistent depressive symptoms are highly prevalent in this population and, left untreated, could contribute to the person experiencing a "bad death." Some of the risk factors for this outcome are amenable to change, making it important to continually assess and flag these factors so interventions can be implemented to optimize the person's quality of life for as long as possible. abstract_id: PUBMED:37643653 Strategies to Improve Perioperative Palliative Care Integration for Seriously Ill Veterans. Context: Seriously ill patients are at higher risk for adverse surgical outcomes. Palliative care (PC) interventions for seriously ill surgical patients are associated with improved quality of patient care and patient-centered outcomes, yet, they are underutilized perioperatively. 
Objectives: To identify strategies for improving perioperative PC integration for seriously ill Veterans from the perspectives of PC providers and surgeons. Methods: We conducted semistructured, in-depth individual and group interviews with Veteran Health Administration PC team members and surgeons between July 2020 and April 2021. Participants were purposively sampled from high- and low-collaboration sites based on the proportion of received perioperative palliative consults. We performed a team-based thematic analysis with dual coding (inter-rater reliability above 0.8). Results: Interviews with 20 interdisciplinary PC providers and 13 surgeons at geographically distributed Veteran Affairs sites converged on four strategies for improving palliative care integration and goals of care conversations in the perioperative period: 1) develop and maintain collaborative, trusting relationships between palliative care providers and surgeons; 2) establish risk assessment processes to identify patients who may benefit from a PC consult; 3) involve both PC providers and surgeons at the appropriate time in the perioperative workflow; 4) provide sufficient resources to allow for an interdisciplinary sharing of care. Conclusion: The study demonstrates that individual, programmatic, and organizational efforts could facilitate interservice collaboration between PC clinicians and surgeons. abstract_id: PUBMED:27040074 Ethics in the communicative encounter: seriously ill patients' experiences of health professionals' nonverbal communication. Background: The communicative encounter has been described as a fundamental element in caring for the patients, and further, in this encounter, the nonverbal body language and the tone of speech are agued to play a crucial role. Aim: This study explores how seriously ill hospitalised patients experience and assign meaning to the health professionals' communication with special attention to the nonverbal body language and tone of speech. The study is part of a larger study exploring how seriously ill patients experience and assign meaning to the sensory impressions in the physical hospital environment as well as to the health professionals' communication. Method: The study is based on qualitative interviews supplemented by observations and applies Paul Ricoeur's phenomenological-hermeneutic theory of interpretation in processing the collected data. We included twelve patients with potentially life-threatening illnesses such as cancer, severe lung, liver and heart disease. Findings: Through analysis and interpretation of the interviews, we identified two themes in the text: (i) Being confirmed, (ii) Being ignored and an inconvenience. The patients experienced that the health professionals' nonverbal communication was imperative for their experience of being confirmed or in contrast, their experience of being ignored and an inconvenience. Conclusion: The health professionals' nonverbal communication proved essential for the seriously ill patients' experience of well-being in the form of positive thoughts and emotions. Consequently, this sensory dimension of the communicative encounter represents a significant ethical element in caring for the patients. abstract_id: PUBMED:36637618 Goals of older hospitalized patients with multimorbidity. Purpose: Patient-centered healthcare, with aligning treatment to a patients' goal, is recognized by experts as essential to improve healthcare for older patients with multimorbidity. 
Little literature exists on goals of these specific patients. Therefore, we aimed to explore goals of older hospitalized patients with multimorbidity and compare their goals to those of older hospitalized patients without multimorbidity. Methods: Older hospitalized patients (aged ≥ 70 years) were included in a prospective mixed-methods cohort study at the University Medical Centre Groningen, the Netherlands. Goals were assessed by a standardized interview, whereafter they were categorized and analyzed descriptively. Results: Four hundred and ninety-three older hospitalized patients (median age 75 (IQR 72-80), 64% male) were included, of which 223 patients presented with multimorbidity (45%). Goals mentioned most often were 'controlling disease' and 'alleviating complaints'. No differences were found in goals mentioned by patients with and without multimorbidity. Forty-one percent of both patients with and without multimorbidity mentioned goals that were disease-unrelated. Conclusion: No major differences were found in goals of older hospitalized patients with and without multimorbidity. However, the large proportion of patients mentioning disease-unrelated goals emphasizes the importance of goal elicitation by healthcare professionals within hospital care to provide optimally integrated care. abstract_id: PUBMED:35659075 Sleep in Hospitalized Older Adults. Hospitalization is a period of acute sleep deprivation for older adults due to environmental, medical, and patient factors. Although hospitalized patients are in need of adequate rest and recovery during acute illness, older patients face unique risks due to acute sleep loss during hospitalization. Sleep loss in the hospital is associated with worse health outcomes, including cardio-metabolic derangements and increased risk of delirium. Because older patients are at risk of polypharmacy and medication side effects, a variety of nonpharmacological interventions are recommended first to improve sleep loss for hospitalized older adults. abstract_id: PUBMED:28936138 Prevalence and overlap of Disease Management Program diseases in older hospitalized patients. Many countries, like Denmark, have tailored Disease Management Programs (DMPs) based on patients having single chronic diseases [defined institutionally as "program diseases" (PDs)], which can complicate treatment for those with multiple chronic diseases. The aims of this study were (a) to assess the prevalence and overlap among acutely hospitalized older medical patients of PDs defined by the DMPs, and (b) to examine transitions between different departments during hospitalization and mortality and readmission within two time intervals among patients with the different PDs. We conducted a registry study of 4649 acutely hospitalized medical patients ≥65 years admitted to Copenhagen University Hospital, Hvidovre, Denmark, in 2012, and divided patients into six PD groups (type 2 diabetes, chronic obstructive pulmonary disease, cardiovascular disease, musculoskeletal disease, dementia and cancer), each defined by several ICD-10 codes predefined in the DMPs. Of these patients, 904 (19.4%) had 2+ PDs, and there were 47 different combinations of the six different PDs. The most prevalent pair of PDs was type 2 diabetes with cardiovascular disease in 203 (22.5%) patients, of whom 40.4% had an additional PD. The range of the cumulative incidence of being readmitted within 90 days was between 28.8% for patients without a PD and 46.6% for patients with more than one PD.
PDs overlapped in many combinations, and all patients had a high probability of being readmitted. Hence, developing strategies to create a new generation of DMPs applicable to older patients with comorbidities could help clinicians organize treatment across DMPs. abstract_id: PUBMED:35620814 Using a home time measure to differentiate ACO performance for seriously ill populations. Background: Alternative Payment Models (APMs) piloted by the Centers for Medicare and Medicaid Services (CMS) such as ACO Realizing Equity, Access and Community Health (REACH) seek to improve care and quality of life among seriously ill populations (SIP). Days at Home (DAH) was proposed for use in this model to evaluate organizational performance. It is important to assess the utility and feasibility of person-centered outcomes measures, such as DAH, as CMS seeks to advance care models for seriously ill beneficiaries. We leverage existing Accountable Care Organization (ACO) contracts to evaluate the feasibility of ACO-level DAH measure and examine characteristics associated with ACOs with more DAH. Methods: We calculated DAH for Medicare fee-for-service beneficiaries aged 68 and over who were retrospectively attributed to a Medicare ACO between 2014 and 2018 and met the seriously ill criteria. We then aggregated to the ACO level DAH for each ACO's seriously ill beneficiaries and risk-adjusted this aggregated measure. Finally, we evaluated associations between risk-adjusted DAH per person-year and ACO, beneficiary, and market characteristics. Results: ACOs' seriously ill beneficiaries spent an average of 349.3 risk-adjusted DAH per person-year. Risk-adjusted ACO variation, defined as the interquartile range, was 4.21 days (IQR = 347.32-351.53). Beneficiaries of ACOs are composed of a less racially diverse beneficiary cohort, opting for two-sided risk models, and operating in markets with fewer hospital and Skilled Nursing Facility beds had more DAH. Conclusions: Substantial variation across ACOs in the DAH measure for seriously ill beneficiaries suggests the measure can differentiate between high and low performing provider groups. Key to the success of the metric is accurate risk adjustment to ensure providers have adequate resources to care for seriously ill beneficiaries. Organizational factors, such as the ACO size and level of risk, are strongly associated with more days at home. Answer: The provided abstracts do not directly address the question of whether less money is spent on older seriously ill hospitalized patients compared to younger ones. The abstracts focus on various aspects of care for seriously ill patients, such as communication about cardiopulmonary resuscitation (CPR) (PUBMED:35595375), pain and suffering (PUBMED:10809473), surrogate decision-making (PUBMED:27091256), predictors of persistent depression (PUBMED:36727276), perioperative palliative care integration (PUBMED:37643653), nonverbal communication by health professionals (PUBMED:27040074), goals of older hospitalized patients with multimorbidity (PUBMED:36637618), sleep deprivation in hospitalized older adults (PUBMED:35659075), prevalence of disease management program diseases in older hospitalized patients (PUBMED:28936138), and the use of a home time measure to differentiate Accountable Care Organization (ACO) performance for seriously ill populations (PUBMED:35620814). However, none of these abstracts provide information on the financial expenditures for older versus younger seriously ill hospitalized adults. 
To answer the question about spending, one would need to look at healthcare expenditure data, insurance claims, or studies specifically designed to compare the costs of care between different age groups of seriously ill hospitalized patients.
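As a rough illustration of the analysis this question actually requires, the sketch below compares mean inpatient spending between age groups while adjusting for illness severity. The data frame, column names, and severity index are entirely hypothetical; none of the cited abstracts report such data.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical claims extract: one row per seriously ill hospitalization.
df = pd.DataFrame({
    "total_cost": [42000, 38500, 51000, 29500, 61000, 33000, 47500, 27000],
    "age_group":  ["80plus", "under65", "80plus", "under65",
                   "65to79", "under65", "65to79", "80plus"],
    "severity":   [3, 2, 4, 1, 4, 2, 3, 2],   # e.g., a comorbidity index
})

# Unadjusted comparison: mean spending by age group.
print(df.groupby("age_group")["total_cost"].mean())

# Severity-adjusted comparison: linear model of cost on age group and severity.
model = smf.ols("total_cost ~ C(age_group) + severity", data=df).fit()
print(model.params)
```

A real analysis would also have to address skewed cost distributions (for example with a log transform or a gamma model) and survivorship, which is why claims-based studies rather than these abstracts are needed to answer the question.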
Instruction: Functional prognosis in ocular trauma. Does visual deficiency help in localizing the injuries that cause it? Abstracts: abstract_id: PUBMED:15694048 Functional prognosis in ocular trauma. Does visual deficiency help in localizing the injuries that cause it? Background: Prevalence of reported trauma-related blindness is 0.6% with almost half of serious ocular injuries involving the retina. Causes of visual deficit (VD) in ocular trauma were identified in patients with different grades (visual capacity), in order to determine the rate of poor functional prognosis at initial evaluation caused by posterior segment involvement and to identify a grade that could predict it. Methods: Ocular trauma referrals were re-qualified (1995-2003) according to the standardized classification. VD (grade &gt;1) and zone were evaluated. The rate of VD and zone III with VD were determined. Injuries causing VD were identified and grouped by zone. The rate of these injuries was compared by zone, between each grade and the remaining, with chi2. Results: Five hundred fifty-seven eyes were included (mean age 27.7 years), 165 had VD (29.6%, CI 95% 25.8-33.4); 93 had zone III involvement (16.7%), 53 of whom had VD (9.5%, CI 95% 7.1-11.9%). Among injuries that caused VD (n = 323), 102 involved zone I (31.6%), 144 zone II (44.6%), and 77 zone III (23.8%). Injuries in zone III were more frequent in grade 5 (52%) than in the remaining groups (21.5%, p = 0.0005, OR 3.9). Conclusions: In 90.5% of the eyes (CI 95%, 88.1-92.9), injuries caused VD by means of anterior segment involvement or did not cause it at all. A high rate of ocular trauma patients, even those with grade 5 at initial evaluation, may have an opportunity for visual recovery with early referral. abstract_id: PUBMED:15180143 Visual prognosis in patients with traumatic choroidal rupture. Background: There is a paucity of work in the literature examining the long-term visual prognosis of patients with choroidal ruptures. We performed a study to get a better assessment of long-term visual recovery in these patients as well as to identify prognostic indicators. Methods: We reviewed the charts of 32 patients who experienced traumatic choroidal rupture following blunt ocular trauma. All charts contained International Classification of Diseases code 363.63 (choroidal rupture). For inclusion in the study, the chart had to contain detailed drawings or photographic evidence that could confirm the presence and location (foveal, juxtafoveal or extrafoveal) of the choroidal rupture. In addition, demographic data and visual acuity on follow-up examinations had to be present. The mechanism and location of the injury, initial and final visual acuity, associated ocular findings and length of follow-up were recorded. Results: Of the 32 eyes 31 had indirect choroidal ruptures and I had a direct rupture. The mean final visual acuity values in the foveal, juxtafoveal and extrafoveal subgroups were 20/68, 20/35 and 20/60 respectively after a mean duration of follow-up of 4.5, 3.5 and 4.4 years respectively. There was no statistically significant difference in mean final visual acuity or mean length of follow-up between the three groups. The eight patients with multiple choroidal ruptures had a mean final vision of 20/31 after a mean follow-up period of 3.8 years. The 24 patients with a single choroidal rupture achieved a mean final vision of 20/47 over a mean duration of follow-up of 4.1 years. 
There was no difference in final vision or in length of follow-up between the two groups. The six patients under 15 years of age attained a mean final vision of 20/34 over a mean follow-up period of 4.5 years, whereas the adult group achieved a mean final vision of 20/44 over a mean follow-up period of 3.8 years. Again, there was no difference in final vision or in length of follow-up between the two groups. Interpretation: Traumatic choroidal rupture involving the fovea has been thought to have a poor visual prognosis. Our findings show that eyes with foveal choroidal ruptures may regain good central vision over longer follow-up. Furthermore, multiple choroidal ruptures do not necessarily predict a poor visual outcome. Children with choroidal ruptures attained good final visual outcomes. abstract_id: PUBMED:21586844 Visual recovery and predictors of visual prognosis after managing traumatic cataracts in 555 patients. Aim: The aim was to evaluate the visual recovery after managing traumatic cataracts and determine the predictors of a better visual prognosis. Materials And Methods: This was a prospective study. We enrolled patients with specific inclusion criteria, examined their eyes to review the comorbidities due to trauma, performed surgery for traumatic cataracts, and implanted lenses. The patients were reexamined 6 weeks postoperatively. We divided the cases of traumatic cataract into two groups, the "open globe" (Group 1) and "closed globe" (Group 2) groups, according to the ocular trauma based on the Birmingham Eye Trauma Terminology System (BETTS) and compared the determinants of visual acuity. Results: Our cohort of 555 eyes with traumatic cataracts included 394 eyes in Group 1 and 161 in Group 2. Six weeks postoperatively, the visual acuity in the operated eye was >20/60 in 193 (48%) and 49 (29%) eyes in Groups 1 and 2, respectively (P = 0.002, ANOVA). At follow-up, >20/60 vision was significantly higher in Group 1 than in Group 2 (OR = 1.61; 95% CI, 0.85-3.02). Overall 242 (43.5%) eyes gained a final visual acuity of >20/60. Conclusions: Open globe injury has a favorable prognosis for satisfactory (>20/60) visual recovery after the management of traumatic cataracts. abstract_id: PUBMED:35579769 The clinical features and the factors affecting visual prognosis in pediatric open-globe ınjuries. Purpose: To investigate clinical features and factors affecting visual prognosis after pediatric open-globe injuries. Methods: Retrospective study of 223 children with open-globe injury was conducted. Children with final logMAR visual acuity (LVA) > 0.70 were determined as poor-vision group (group 1, n = 108) and those with final LVA ≤ 0.70 as good-vision group (group 2, n = 115). Demographic characteristics (age, gender, and damaged eye), time between trauma and surgery, ocular trauma score (OTS), follow-up time, injury size, initial and final visual acuity levels, injury type (penetrating injury, globe rupture, perforating injury, and intraocular foreign body injury), injury localization (zone 1 = within the corneal and/or limbal area, zone 2 = within the scleral area extending 5 mm back from the limbus, and zone 3 = within the area posterior to zone 2), injury cause [metal objects (fork, knife, needle), broken glass, blunt objects (ball, punch), pen-pencil, and unidentified objects], and accompanying ocular findings of the groups were detected, and comparisons were done.
Additionally, effects of age, time between trauma and surgery, OTS, injury size, follow-up time, initial LVA, injury type, and injury zone on final LVA were analyzed in both groups. Results: Mean age was 9.1 ± 2.0 years. There were 151 males and 72 females. Compared to group 1, group 2 had better initial and final visions (1.21 ± 0.26 vs 0.60 ± 0.28, p < 0.001 for initial LVA; 1.00 ± 0.32 vs 0.30 ± 0.13, p < 0.001 for final LVA), greater OTS (1.72 ± 0.53 vs 3.73 ± 0.61, p = 0.025), and smaller injury size (10.4 ± 3.5 vs 5.8 ± 2.4 mm, p = 0.002). Globe rupture (p = 0.015) and relative afferent pupillary defect (RAPD) (p = 0.037) were higher in group 1, while penetrating injury (p = 0.044), zone 1 involvement (p = 0.038), and metal object injury (p = 0.041) were higher in group 2. Based on multivariate analysis, the presence of globe rupture (p = 0.024) and RAPD (p = 0.035), the involvement without zone 1 (p = 0.042), and the injury without metal object (fork, knife, needle) (p = 0.046) were associated with poor final vision. Final LVA (for group 1 and group 2) was negatively correlated with OTS (r = -0.398, p = 0.037; r = -0.369, p = 0.040), while positively correlated with injury size (r = 0.412, p = 0.031; r = 0.318, p = 0.046) and initial LVA (r = 0.335, p = 0.043; r = 0.402, p = 0.034). Conclusion: In our study, poor prognostic factors affecting final vision were low OTS, poor initial vision, the presence of globe rupture and RAPD, the large injury size, the involvement without zone 1, and the injury without metal object (fork, knife, needle). abstract_id: PUBMED:31784641 Etiology and Visual Prognosis in Open Globe Injuries: Results of A Tertiary Referral Center in Turkey. We aimed to analyse the clinical characteristics of OGI and evaluate the correlation between baseline ocular trauma score (OTS) and visual outcomes in cases with OGI. The charts of 257 OGI patients who had at least six months of follow-up were reviewed retrospectively. Demographics, data about the etiology, localization and size of the OGI, baseline and final best-corrected visual acuity (BCVA) were noted. At the time of approval OTS was calculated and compared with final BCVA. All analysis was performed in both the entire study population and our pediatric subgroup. A total of 261 eyes of 257 patients with a mean age of 34.9 ± 19.8 years were enrolled. Globe injury with a mean size of 6.7 ± 4.5 mm was within zone I in 46.7% of the eyes. Older age (p < 0.001, OR = 1.029, 95% CI = 1.015-1.043), higher baseline logMAR BCVA scores (p < 0.001, OR = 4.460, 95% CI = 2.815-7.065), bigger wound size (p < 0.001, OR = 1.159, 95% CI = 1.084-1.240), relative afferent pupillary defect (RAPD) positiveness (p < 0.001, OR = 0.021, 95% CI = 0.005-0.087), lower OTS (p < 0.001, OR = 27.034, 95% CI = 6.299-116.021), presence of concomitant retinal detachment (p < 0.001, OR = 0.157, 95% CI = 0.080-0.306), and endophthalmitis (p = 0.045, OR = 0.207, 95% CI = 0.044-0.962) were found to be related to poor visual prognosis. Cases with OGI caused by a sharp object (p = 0.007, OR = 0.204, 95% CI = 0.065-0.641) and those injured by a glass (p = 0.039, OR = 0.229, 95% CI = 0.056-0.931) had more favorable final vision. This study highlights that baseline BCVA, wound size, RAPD, retinal detachment, and OTS were the most significant markers for poor visual outcomes in both the entire population and pediatric subgroup. In cases with OGI, OTS was also found effective in predicting visual prognosis.
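The multivariate analysis reported in the preceding abstract is typically a logistic regression of dichotomized final vision on candidate predictors, with odds ratios and 95% confidence intervals read off the fitted coefficients. A minimal sketch follows; the data frame, variable names, and values are hypothetical stand-ins rather than the authors' dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy patient-level data: poor_vision = 1 if final logMAR > 0.70.
df = pd.DataFrame({
    "poor_vision":   [1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
    "globe_rupture": [1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0],
    "ots":           [1, 2, 2, 1, 3, 2, 4, 2, 3, 4, 5, 3, 4, 3],
})

fit = smf.logit("poor_vision ~ globe_rupture + ots", data=df).fit(disp=0)

# Odds ratios with 95% confidence intervals, the form in which the
# abstract reports its results.
summary = pd.concat(
    [np.exp(fit.params).rename("OR"),
     np.exp(fit.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})],
    axis=1)
print(summary)
```

With real data one would include every candidate predictor retained from the univariate screen and check for separation and collinearity before interpreting the odds ratios.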
abstract_id: PUBMED:6729265 Penetrating ocular injuries in young patients. Initial injuries and visual results. Penetrating injuries are a leading cause of unilateral visual loss in young patients. We reviewed the records of 197 patients aged 18 years or younger who underwent primary repair of a penetrating ocular injury at the Wilmer Ophthalmological Institute from January 1970 through December 1981. The injury was caused by sharp objects in 49% of cases, missiles in 35%, and blunt trauma in 14%. Of 159 patients with at least 6 months follow-up, 110 (69%) achieved final vision of 5/200 or better, and 77 patients (48%) achieved final visual acuity of 20/50 or better. The prognosis after a penetrating injury is strongly influenced by the nature of the injury and the extent of initial damage. Several factors were found to correlate with an unfavorable visual outcome, including: initial preoperative visual acuity of worse than 5/200, injuries due to blunt trauma, wounds involving the sclera, double penetrating injuries, dense vitreous hemorrhage, and wounds associated with an intraocular "BB" pellet. abstract_id: PUBMED:35935081 Prognostic value of Ocular Trauma Score and pediatric Penetrating Ocular Trauma Score in predicting the visual prognosis following ocular injury. Aim: To assess the role of Ocular Trauma Score (OTS) and Pediatric Penetrating Ocular Trauma Score (POTS) in predicting visual prognosis following ocular injury. Methods: 120 patients, aged 3 to 70 years, with ocular injury, presenting between August 2020 and 2021, who fulfilled the inclusion criteria, were classified using the Birmingham Eye Trauma Terminology System (BETTS). Data regarding age, sex, injury type, location, initial visual acuity, and treatment were recorded. Patients were evaluated using OTS and POTS, wherever applicable, to predict visual prognosis. Follow-up was done for 6 months post-treatment to compare the predicted and actual visual outcome. Results: 120 patients (96 males and 24 females) were included in the study. The mean age was 17.2 ± 14.7 years (range 3 to 70 years). 91.6% patients included were under the age of 40 years. Blunt trauma (n=62) was slightly more common. 80 patients (66.67%) reported to the hospital after 48 hours of injury. 34 patients (28.33%) had traumatic cataract alone and IOL implantation was done as a primary procedure. In 20 patients (16.67%), lens aspiration was done primarily, with secondary IOL implantation. Six months after the treatment, the BCVA in the open globe injury patients was 20/ 200 or better in 36 patients (75%) and 20/ 40 or better in 18 patients (37.5%). Amongst the closed globe injuries, 48 patients (66.67%), had a BCVA 20/ 200 or better, while 32 (44.44%) had a BCVA of 20/ 40 or better. Conclusion: OTS is a reliable predictor of final visual outcome, both in blunt and penetrating injuries. The POTS shows gross underestimation of final visual outcome. Abbreviations: BETTS = Birmingham Eye Trauma Terminology System, POTS = Pediatric Penetrating Ocular Trauma Score, OTS = Ocular Trauma Score, OGI = Open Globe Injury, HM = Hand Movement, NLP = No Light Perception, LP = Light Perception, CGI = Closed Globe Injury, VA = Visual Acuity, BCVA = Best Corrected Visual Acuity, IAP = Indian Academy of Pediatrics, IOL = Intraocular Lens, IGATES = International Globe and Adnexal Trauma Epidemiology Study, IOP = Intraocular Pressure, CART = Classification and Regression Tree, USA = United States of America. 
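The Ocular Trauma Score used in the abstract above is a simple arithmetic model: a raw score taken from the initial visual acuity minus fixed penalties for specific findings, which is then mapped to a category with an associated prognosis table. The sketch below shows only that arithmetic; the point values follow the commonly cited OTS table of Kuhn et al. and are reproduced as an illustration to be verified against the original publication, not as a clinical tool.

```python
# Illustrative OTS raw-score arithmetic (point values assumed from the
# commonly cited Kuhn et al. table; verify before any real use).
VISION_POINTS = {
    "NLP": 60,            # no light perception
    "LP_or_HM": 70,       # light perception or hand movements
    "1/200-19/200": 80,
    "20/200-20/50": 90,
    ">=20/40": 100,
}

DEDUCTIONS = {
    "globe_rupture": 23,
    "endophthalmitis": 17,
    "perforating_injury": 14,
    "retinal_detachment": 11,
    "rapd": 10,           # relative afferent pupillary defect
}

def ots_raw_score(initial_vision: str, findings: set) -> int:
    """Raw OTS = vision points minus the penalty for each finding present."""
    score = VISION_POINTS[initial_vision]
    for finding in findings:
        score -= DEDUCTIONS[finding]
    return score

# Example: hand-movements vision with globe rupture and RAPD: 70 - 23 - 10 = 37.
print(ots_raw_score("LP_or_HM", {"globe_rupture", "rapd"}))
```

The raw sum is then binned into OTS categories 1-5, which index the published probabilities of each final visual acuity range.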
abstract_id: PUBMED:34833416 Clinical Characteristics, Visual Outcomes, and Prognostic Factors of Open Globe Injuries. Background and Objectives: Open globe injuries (OGI) remain an important cause of visual impairment and loss, impacting all ages. A better understanding of the factors influencing visual outcomes is important in an attempt to improve the results of the treatment of OGI patients. The author aimed to contribute to this knowledge with the analysis of clinical characteristics, prognostic factors, and visual outcomes of their cohort of OGI patients. Materials and Methods: A retrospective medical record review was performed for 160 patients (161 eyes) who sustained an open globe injury between January 2015 and December 2017 and presented to the Hospital of Lithuanian University of Health Sciences. Data analyzed included age, sex, type, cause, place of OGI, initial visual acuity (VA), final best-corrected visual acuity (BCVA), and tissue involvement. Open globe injuries were classified using the Birmingham Eye Trauma Terminology (BETT) and Ocular Trauma Classification System (OTCS). Univariate analysis was conducted to evaluate the prognostic factors. Results: The mean age of the patients was 41.9 years. The male-to-female ratio was found to be 8.4:1. The home was the leading place of eye injury (59.6%), followed by an outdoor environment (14.3%) and workplace (11.8%). Penetrating injury accounted for 43.5%, followed by intraocular foreign body injury (39.1%) and globe rupture (13%). Overall, 19.5% of patients regained a good final vision of ≥0.5, but for 48.1% of them, eye trauma resulted in severe visual impairment (BCVA ≤ 0.02). In the univariate analysis, a bad visual outcome of less than 0.02 was correlated with bad initial VA, iris dialysis, hypotony, vitreous hemorrhage, and vitreous prolapse at presentation. Phthisis bulbi was correlated with eyelid laceration, iris prolapse, iris dialysis, hyphema, vitreous prolapse, vitreous hemorrhage, and choroidal rupture at initial examination. Conclusions: Open globe injury remains an important preventable cause of ocular morbidity. This study provides data indicating that open globe injuries are a significant cause of visual impairment in our research group. abstract_id: PUBMED:32518396 Characteristics of paediatric patients hospitalised for eye trauma in 2007-2015 and factors related to their visual outcomes. Objective: To analyse the epidemiological characteristics and clinical features of children under the age of 16 years with ocular trauma at the Eye and Ear, Nose and Throat Hospital of Fudan University (Shanghai, China) and to investigate the preventive measurements taken to avoid vision-threatening eye trauma. Methods: The inpatient medical records of children &lt;16 years old treated for ocular trauma between January 2007 and December 2015 were collected. The age, sex, type of trauma, cause of injury, complications and visual acuity on admission and discharge were analysed statistically. Results: A total of 2211 patients (2231 eyes) were enroled. Of these, 73.7% were male, and 61.2% were children aged 0-6 years. Mechanical ocular trauma was present in 75.3% of eyes, and penetrating injuries in 59.8%. The top three offending objects were scissors (16.3%), firecrackers (8%) and pencils (4.9%). Iris prolapse (odds ratio [OR] = 2.27), retinal detachment (OR = 2.91), endophthalmitis (OR = 2.25) or an intraocular foreign body (OR = 2.80) was associated with the same or worse visual prognosis among all the subjects. 
Traumatic cataract (OR = 0.37) was associated with final visual improvement. Conclusions: An efficient strategy for preventing eye injury should focus on male children during preschool years. Our root cause analysis showed specific environmental patterns of vision-devastating objects. Specific preventive measures are proposed to reduce the incidence of paediatric eye injuries. abstract_id: PUBMED:24833181 Determination of visual prognosis in children with open globe injuries. Aims: To determine the predictive factors of visual outcomes in children with open globe injury and to give guidance to reduce the incidence of open globe injury. Methods: One hundred and forty eyes of 137 consecutive open globe injury patients, who were treated at the Eye Center of Second Bethune Hospital affiliated with Jilin University between August 2005 and August 2012, were retrospectively analyzed. Data recorded included demographic characteristics, causes of injury, location and extent of injury, presenting visual acuity, detailed ocular anterior and posterior segment evaluations, details of primary and subsequent surgeries, and postoperative complications and outcomes. The follow-up data included the most recent best-corrected visual acuity, complications, and the duration of follow-up. Results: Of the 137 patients, there were 116 (84.7%) boys and 21 (15.3%) girls. Their ages ranged between 3 and 17 years old (mean=11.57±4.19 years old). Sixty (43.8%) children had a right eye injury, whereas 74 (54.0%) had a left eye injury. Only three (2.2%) children suffered bilateral eye injury. Living utensils, industrial tools, and fireworks contributed to the most common causes of open globe injury. Eighty-one (59.1%) had sharp force injuries, 23 (16.8%) had blunt injuries, and 33 (24.1%) had missile injuries. Conclusions: Unfavorable visual outcomes were related to a younger age at presentation, poor presenting visual acuity, injuries caused by blunt or missile objects, posterior wound location, hyphema, vitreous hemorrhage, and surgical intervention of pars plana vitrectomy. Answer: The functional prognosis in ocular trauma can be influenced by the location and severity of the injury, which in turn can affect visual deficiency. According to the study by PUBMED:15694048, the majority of injuries causing visual deficiency (VD) were due to anterior segment involvement or did not cause VD at all. However, injuries in zone III (posterior segment) were more frequent in patients with grade 5 visual deficiency, indicating a poorer functional prognosis with posterior segment involvement. The study by PUBMED:15180143 found that traumatic choroidal ruptures, even those involving the fovea, may regain good central vision over longer follow-up periods. This suggests that while certain injuries may initially cause significant visual deficiency, the long-term prognosis can be more favorable than expected. In the study by PUBMED:21586844, open globe injuries were associated with a favorable prognosis for satisfactory visual recovery after managing traumatic cataracts, indicating that the type of ocular trauma (open vs. closed globe) can influence visual prognosis. The research by PUBMED:35579769 highlighted that poor prognostic factors affecting final vision included low ocular trauma score (OTS), poor initial vision, globe rupture, relative afferent pupillary defect (RAPD), large injury size, involvement without zone 1, and injury without metal objects. 
This suggests that visual deficiency can help in localizing injuries and predicting outcomes, as certain types of injuries and their locations are associated with worse visual prognosis. PUBMED:31784641 also emphasized that factors such as baseline visual acuity, wound size, RAPD, retinal detachment, and OTS were significant markers for poor visual outcomes, further supporting the idea that visual deficiency can be indicative of the injury's location and severity. In summary, visual deficiency can indeed help in localizing the injuries that cause it and in predicting the functional prognosis in ocular trauma. The severity and location of the injury, as well as the initial visual acuity, are important factors in determining the likelihood of visual recovery (PUBMED:15694048, PUBMED:15180143, PUBMED:21586844, PUBMED:35579769, PUBMED:31784641).
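The core comparison behind the first abstract's conclusion (zone III involvement was more frequent in grade 5 eyes than in the remaining grades, chi-square p = 0.0005, OR 3.9) reduces to a 2x2 contingency table. The sketch below reproduces that kind of calculation on invented counts, not the study's actual data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows = grade 5 vs other grades,
# columns = zone III injury present vs absent.
table = np.array([[26, 24],      # grade 5:      26 zone III, 24 other zones
                  [51, 186]])    # other grades: 51 zone III, 186 other zones

chi2, p, dof, expected = chi2_contingency(table)

# Odds ratio with a 95% confidence interval on the log-odds scale.
a, b = table[0]
c, d = table[1]
odds_ratio = (a * d) / (b * c)
se_log_or = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low, ci_high = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se_log_or)

print(f"chi2 = {chi2:.2f}, p = {p:.4f}, OR = {odds_ratio:.2f} "
      f"(95% CI {ci_low:.2f}-{ci_high:.2f})")
```

The same table layout also gives the zone-specific rates quoted in the abstract (the share of grade 5 eyes with zone III involvement versus the share in all other grades).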
Instruction: Can 1% chlorhexidine diacetate and ethanol stabilize resin-dentin bonds? Abstracts: abstract_id: PUBMED:24815823 Can 1% chlorhexidine diacetate and ethanol stabilize resin-dentin bonds? Objectives: To examine the effects of the combined use of chlorhexidine and ethanol on the durability of resin-dentin bonds. Methods: Forty-eight flat dentin surfaces were etched (32% phosphoric acid), rinsed (15 s) and kept wet until bonding procedures. Dentin surfaces were blot-dried with absorbent paper and re-wetted with water (water, control), 1% chlorhexidine diacetate in water (CHD/water), 100% ethanol (ethanol), or 1% chlorhexidine diacetate in ethanol (CHD/ethanol) solutions for 30 s. They were then bonded with All Bond 3 (AB3, Bisco) or Excite (EX, Ivoclar-Vivadent) using a smooth, continuous rubbing application (10 s), followed by 15 s gentle air stream to evaporate solvents. The adhesives were light-cured (20 s) and resin composite build-ups constructed for the microtensile method. Bonded beams were obtained and tested after 24-h, 6-months and 15-months of water storage at 37°C. Storage water was changed every month. Effects of treatment and testing periods were analyzed (ANOVA, Holm-Sidak, p&lt;0.05) for each adhesive. Results: There were no interactions between factors for both etch-and-rinse adhesives. AB3 was significantly affected only by storage (p=0.003). Excite was significantly affected only by treatments (p=0.048). AB3 treated either with ethanol or CHD/ethanol resulted in reduced bond strengths after 15 months. The use of CHD/ethanol resulted in higher bond strengths values for Excite. Conclusions: Combined use of ethanol/1% chlorhexidine diacetate did not stabilize bond strengths after 15 months. abstract_id: PUBMED:19278986 Durability of resin-dentin bonds to water- vs. ethanol-saturated dentin. Higher 24-hour resin-dentin bond strengths are created when ethanol is used to replace water during wet bonding. This in vitro study examined if ethanol-wet-bonding can increase the durability of resin-dentin bonds over longer times. Five increasingly hydrophilic experimental resin blends were bonded to acid-etched dentin saturated with water or ethanol. Following composite build-ups, the teeth were reduced into beams for 24-hour microtensile bond strength evaluation, and for water-aging at 37 degrees C for 3, 6, or 12 months before additional bond strength measurements. Although most bonds made to water-saturated dentin did not change over time, those made to ethanol-saturated dentin exhibited higher bond strengths, and none of them fell over time. Decreased collagen fibrillar diameter and increased interfibrillar spacing were seen in hybrid layers created with ethanol-wet-bonding. Increases in bond strength and durability in ethanol-wet-bonding may be due to higher resin uptake and better resin sealing of the collagen matrix, thereby minimizing endogenous collagenolytic activities. abstract_id: PUBMED:24179985 Exposed collagen in resin bonds to caries-affected dentin after dentin treatment with aqueous and alcoholic chlorhexidine solutions. Purpose: To evaluate the effect of saturation of demineralized dentin with aqueous and alcoholic excipients of chlorhexidine (CHX) on the exposure of collagen fibrils in resin-dentin bonds in sound and caries-affected dentin. Materials And Methods: Flat midcoronal dentin surfaces were prepared from 24 noncarious molars, and artificial caries was induced in half of the sample. 
For each substrate, the surfaces were assigned to 4 groups (n = 3) according to the saturation solution of the dentin: water, ethanol, 1% CHX aqueous or alcoholic solution. Infected dentin was removed by abrasive papers. After acid etching, the dentin surface was saturated with each solution for 60 s followed by application of Single Bond. The specimens were processed for Goldner's trichrome staining and the thickness of the exposed collagen zone (ECZ) at the resin/dentin interfaces was measured under optical microscopy. Data were analyzed statistically by Kruskal-Wallis and Mann-Whitney tests (α = 0.05). Results: Regardless of the saturation solution, caries-affected dentin presented a thicker ECZ at the bottom of the hybrid layer than did sound dentin. For both substrates, 100% ethanol had a negative influence on collagen exposure in comparison with water, but the same was not observed for the CHX alcoholic solution. CHX solutions did not differ significantly from each other or from their respective solvents. Conclusion: The saturation of phosphoric acid-demineralized dentin with either CHX aqueous or alcoholic solutions did not affect the exposure of collagen fibrils in the resin-dentin bonds produced in sound and caries-affected dentin. A thicker zone of exposed collagen was found in hybridized caries-affected dentin compared to noncarious dentin. abstract_id: PUBMED:26957803 The effects of chlorhexidine and ethanol on push-out bond strength of fiber posts. Context: Irrigation of root canals with chlorhexidine (CHX) and ethanol is common practice to prevent root canal infection during postplacement. However, pretreatment with these solvents may interfere with the bond strength of posts. Aims: This study aimed to evaluate if root dentin pretreatment using CHX and/or ethanol influences the push-out bond strength of fiber-reinforced composite resin (FRCR) posts. Materials And Methods: Fifty space posts prepared in endodontically treated extracted human canine roots were randomly divided into five groups (n = 10) according to the dentin pretreatment: Distilled water (W); 1% CHX diacetate solution (1C); CHX diacetate + 99% ethanol (1CE); 99% ethanol (E); and 2% CHX digluconate solution (2C). After pretreatment, the adhesive system (Peak Universal Bond; Ultradent, South Jordan, UT, USA) was applied in the root dentin and the FRCR was cemented with resin cement. Then, horizontal slices of 2 mm were obtained from each root third and the push-out bond strength was assessed. Statistical analysis was done using analysis of variance (ANOVA) and Tukey's tests (P = 0.05). Results: At all thirds, 1CE and E groups presented similar push-out bond strength values (P > 0.05), which were higher than the other groups (P < 0.05). W, 1C, and 2C groups were similar (P > 0.05). Conclusion: The root dentin pretreatment with ethanol, alone or mixed with CHX diacetate increased the bond strength of FRCR luted with resin cement.
After cavity preparation and phosphoric acid etching, dentin was treated with 5 microl of either 2% chlorhexidine (experimental group) or deionized water (control group) [corrected].Thirteen pairs of teeth were restored and were collected after physiological exfoliation. The periods in oral function after restoration were divided as follows: up to 30 d; and 1-5, 10-12, and 18-20 months. Beam-shaped specimens (0.81 mm(2)) were obtained and subjected to microtensile bond-strength testing. A significant decrease of the bond strength was observed in the control group starting at the 1-5 month period (30.6%), while in the experimental group this decrease started only after 10-12 months of function (26.3%). The use of chlorhexidine as an adjuvant to the adhesion to dentin did not produce any detrimental effect to the immediate bond strength and was capable of reducing the rate of resin-dentin bond degradation within the first few months after restoration. abstract_id: PUBMED:25202747 Ethanol-wet bonding and chlorhexidine improve resin-dentin bond durability: quantitative analysis using raman spectroscopy. Purpose: To directly test the effectiveness of ethanol-wet bonding (EW) in improving monomer infiltration into demineralized dentin through quantitative measurement of bis-GMA and TEG-DMA molar concentrations within hybrid layers, and to comprehensively evaluate the effect of EW and chlorhexidine on durability of resin-dentin bonds compared to conventional water-wet bonding (WW). Materials And Methods: A three-step etch-and-rinse adhesive (70% bis-GMA/28.75%TEG-DMA) was applied to coronal dentin using a clinically relevant ethanol-wet bonding protocol (EW) or the conventional water-wet bonding (WW) technique. Bis-GMA and TEG-DMA molar concentrations at various positions across the resin/dentin interfaces formed by EW and WW were measured using micro-Raman spectroscopy. The experiment was repeated at the same positions after 7-month storage in phosphate buffer solution containing 0.1% sodium azide. The μTBS and hybrid layer morphology (TEM) of bonding groups with and without chlorhexidine application were compared immediately and after 1-year storage in terms of nanoleakage, collagen fibril diameter, collagen interfibrillar width, and hybrid layer thickness. Results: Specimens bonded with EW showed significantly higher monomer molar concentrations and μTBS throughout the hybrid layer immediately and after storage, providing direct evidence of superior infiltration of hydrophobic monomers in EW compared to WW. Microscopically, EW maintained interfibrillar width and hybrid layer thickness for resin infiltration and retention. The application of chlorhexidine further preserved collagen integrity and limited the degree of nanoleakage in EW after 1-year storage. Conclusion: EW enhances infiltration of hydrophobic monomers into demineralized dentin. The results suggest that a more durable resin-dentin bond may be achieved with combined usage of a clinically relevant EW and chlorhexidine. abstract_id: PUBMED:21594224 Exposed collagen in aged resin-dentin bonds produced on sound and caries-affected dentin in the presence of chlorhexidine. Purpose: To evaluate the effect of chlorhexidine on the presence of collagen in aged resin-dentin bonds produced on sound and caries-affected dentin. Materials And Methods: Flat dentin surfaces were obtained from 16 sound molars, from which 8 were microbiologically processed for induction of caries. 
Single Bond 2 was applied to both sound and caries-affected substrates. In half of the teeth assigned for 6-month storage in water, the phosphoric acid demineralized dentin was impregnated with 2% chlorhexidine before the application of the adhesive. Specimens (2 x 2 x 5 mm) were produced and stored in water for 24 h, or 6 months in either water or mineral oil. The specimens were subjected to histological processing and sections were stained with Goldner's Trichrome. The thickness of the zone of exposed collagen was measured by optical microscopy and the data were subjected to two-way ANOVA and Tukey's test (α = 0.05). Results: There was no statistically significant difference (p > 0.05) between sound and caries-affected dentin regardless of the storage condition. For both substrates, significantly greater collagen exposure was observed after 6 months in water. Chlorhexidine-treated groups resulted in similar collagen exposure to that of the control and 6 months in water groups (p > 0.05), while no increase of the exposed collagen zone was observed after mineral oil storage. Conclusion: Aging in water resulted in degradation of the resin-dentin bond, as demonstrated by the increase of the zone of exposed collagen. However, the degradation of the exposed collagen was decelerated in the presence of chlorhexidine. abstract_id: PUBMED:19678455 Chlorhexidine-containing acid conditioner preserves the longevity of resin-dentin bonds. The current study evaluated the effect of 2% chlorhexidine digluconate (CHX) on the immediate and six-month resin-dentin bond strength (BS) and nanoleakage pattern (NL) of etch-and-rinse adhesives when applied in aqueous or associated to the phosphoric acid conditioner. The occlusal enamel of 42 caries-free extracted molars was removed in order to expose a flat dentin surface. In groups 1 and 2 (control-C), the surfaces were acid etched with conventional phosphoric acid, and the adhesives Prime&Bond NT (PB) and Adper Single Bond 2 (SB) were applied after rinsing, drying and rewetting with water. In groups 3 and 4 (Ac/CHX), the adhesives were applied in a similar manner, however, a 2% CHX-containing acid was previously applied. In groups 5 and 6 (CHX), the adhesives were applied according to the control group; however, the rewetting procedure was performed with an aqueous solution of 2% CHX for 60 seconds. Composite buildups (Opallis, FGM) were constructed incrementally, and the specimens were longitudinally sectioned in the "x" and "y" directions to obtain bonded sticks (0.8 mm²) to be tested in tension at 0.5 mm/minute immediately or after six months of water storage. For NL, two bonded sticks from each tooth were coated with nail varnish, placed in silver nitrate and polished down with SiC paper. Resin-dentin interfaces were analyzed by EDX-SEM. The BS and NL data from each adhesive was submitted to two-way repeated measures ANOVA and Tukey's test (alpha=0.05). After six months of water storage, significant reductions in BS were observed for both adhesives in the control group (p<0.05). When Ac/CHX or CHX was used, no significant reductions in BS were observed for both systems. Nanoleakage was more evident in the control group than in the experimental groups (p<0.05), even after six months. The use of CHX in an aqueous solution or associated with the acid conditioner was effective for reducing degradation of resin-dentin bonds after six months of water storage.
abstract_id: PUBMED:22518385 Effect of chlorhexidine incorporation into dental adhesive resin on durability of resin-dentin bond. Purpose: This study evaluated the effect of chlorhexidine (CHX) incorporation into experimental dentin adhesives with different hydrophilicities on the microtensile bond strength (µTBS) to dentin. Materials and Methods: Flat, deep dentin surfaces were prepared from 60 extracted human third molars. Three ethanol-solvated (50 wt% ethanol/50 wt% comonomers) experimental adhesives with varying degrees of hydrophilicity were prepared for the CHX-free groups. For the CHX-containing groups, chlorhexidine diacetate was further added to the ethanol-solvated adhesives to form a concentration of 2.0 wt% CHX. Dentin surfaces were etched with 37% phosphoric acid for 15 s, rinsed and blot dried before bonding. The adhesives were generously applied to dentin with a microbrush for 15 s. A second application of fresh adhesive was made and light cured for 20 s (600 mW/cm²) after solvent evaporation. Composite buildups were made using Filtek Z250 (3M ESPE). The bonded teeth were sectioned into 0.9 mm x 0.9 mm beams and stressed to failure at a crosshead speed of 1 mm/min. Testing was performed 24 h after specimen preparation and 12 months after storage in artificial saliva. The µTBS data were analyzed using three-way ANOVA and Tukey's multiple comparison tests. Fractographic analysis was performed by SEM. Results: Significant differences were observed for the three factors "adhesive hydrophilicity" (p < 0.001), "CHX incorporation" (p = 0.001), and "storage time" (p < 0.001). Interaction among these three factors was also significant (p < 0.001). Incorporation of CHX had no effect on the immediate bond strength of the three experimental adhesives (p > 0.05). After storage in artificial saliva, significant reduction in bond strength was observed in all adhesive groups, except for CHX-containing adhesive I (p < 0.001). The µTBS of the CHX-containing experimental adhesive III was significantly higher than the corresponding CHX-free adhesive (p < 0.001) after aging. Conclusion: When incorporated into hydrophilic dental adhesives, chlorhexidine can partially reduce the degradation of the resin-dentin bonds. abstract_id: PUBMED:20940353 Ethanol wet-bonding challenges current anti-degradation strategy. The long-term effectiveness of chlorhexidine as a matrix metalloproteinase (MMP) inhibitor may be compromised when water is incompletely removed during dentin bonding. This study challenged this anti-bond degradation strategy by testing the null hypothesis that wet-bonding with water or ethanol has no effect on the effectiveness of chlorhexidine in preventing hybrid layer degradation over an 18-month period. Acid-etched dentin was bonded under pulpal pressure simulation with Scotchbond MP and Single Bond 2, with water wet-bonding or with a hydrophobic adhesive with ethanol wet-bonding, with or without pre-treatment with chlorhexidine diacetate (CHD). Resin-dentin beams were prepared for bond strength and TEM evaluation after 24 hrs and after aging in artificial saliva for 9 and 18 mos. Bonds made to ethanol-saturated dentin did not change over time with preservation of hybrid layer integrity. Bonds made to CHD pre-treated acid-etched dentin with commercial adhesives with water wet-bonding were preserved after 9 mos but not after 18 mos, with severe hybrid layer degradation.
The results led to rejection of the null hypothesis and highlighted the concept of biomimetic water replacement from the collagen intrafibrillar compartments as the ultimate goal in extending the longevity of resin-dentin bonds. Answer: Based on the provided abstracts, the combined use of 1% chlorhexidine diacetate and ethanol does not appear to consistently stabilize resin-dentin bonds over an extended period. Specifically, one study found that the combined use of ethanol and 1% chlorhexidine diacetate did not stabilize bond strengths after 15 months (PUBMED:24815823). However, other studies suggest that chlorhexidine and ethanol may have some positive effects on the durability of resin-dentin bonds under certain conditions. For instance, ethanol-wet bonding has been shown to increase the durability of resin-dentin bonds over longer times, with bonds made to ethanol-saturated dentin exhibiting higher bond strengths that did not decrease over time (PUBMED:19278986). Additionally, the use of chlorhexidine has been found to increase the longevity of in vivo resin-dentin bonds (PUBMED:20662916), and chlorhexidine-containing acid conditioner was effective in preserving the longevity of resin-dentin bonds (PUBMED:19678455). Moreover, ethanol-wet bonding combined with chlorhexidine was found to improve resin-dentin bond durability, as evidenced by quantitative analysis using Raman spectroscopy (PUBMED:25202747). In the context of fiber post bonding, root dentin pretreatment with ethanol, alone or mixed with chlorhexidine diacetate, increased the bond strength of fiber-reinforced composite resin posts (PUBMED:26957803). Furthermore, chlorhexidine incorporation into dental adhesive resin has been shown to partially reduce the degradation of resin-dentin bonds when incorporated into hydrophilic dental adhesives (PUBMED:22518385). However, it is important to note that the effectiveness of chlorhexidine as a matrix metalloproteinase inhibitor may be compromised when water is not completely removed during dentin bonding, as suggested by a study that found that bonds made to chlorhexidine pre-treated acid-etched dentin with commercial adhesives with water wet-bonding were not preserved after 18 months (PUBMED:20940353). In summary, while there is evidence that chlorhexidine and ethanol can have beneficial effects on the durability of resin-dentin bonds, the combined use of 1% chlorhexidine diacetate and ethanol does not appear to consistently stabilize these bonds over long periods. The effectiveness of these agents may depend on the specific bonding protocols and materials used, as well as the conditions under which the bonding occurs. For example, the saturation of dentin with chlorhexidine solutions, whether aqueous or alcoholic, did not affect the exposure of collagen fibrils in the resin-dentin bonds produced in sound and caries-affected dentin (PUBMED:24179985). This suggests that while chlorhexidine may not necessarily stabilize the bond by preventing collagen exposure, it may still contribute to bond durability through other mechanisms, such as inhibiting matrix metalloproteinases that degrade collagen over time. The variability in outcomes across different studies indicates that the stabilization of resin-dentin bonds by chlorhexidine and ethanol is complex and may be influenced by multiple factors, including the adhesive systems used, the application protocols, and the hydrophilicity of the adhesives.
Therefore, while there is potential for these agents to enhance bond durability, further research is needed to fully understand their role and to optimize bonding procedures for long-term stability of resin-dentin bonds.
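Several of the abstracts in this record analyze bond-strength or collagen-exposure measurements with a two-way ANOVA followed by Tukey's test at α = 0.05. The sketch below shows one way such an analysis could be run in Python; the data frame, factor levels, and values are hypothetical placeholders, not data from any of the cited studies.

```python
# Hypothetical sketch of a two-way ANOVA with a Tukey HSD post-hoc test (alpha = 0.05),
# mirroring the kind of analysis described in the abstracts above. All values are made up.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical bond-strength data (MPa) by pretreatment and storage condition.
df = pd.DataFrame({
    "pretreatment": ["control"] * 6 + ["chx"] * 6,
    "storage":      ["24h", "24h", "24h", "6mo", "6mo", "6mo"] * 2,
    "mpa":          [42, 40, 44, 28, 30, 27, 41, 43, 40, 38, 39, 37],
})

# Two-way ANOVA with an interaction term between the two factors.
model = ols("mpa ~ C(pretreatment) * C(storage)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Tukey HSD over the four pretreatment/storage cells.
groups = df["pretreatment"] + "/" + df["storage"]
print(pairwise_tukeyhsd(df["mpa"], groups, alpha=0.05))
```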
Instruction: Alcohol consumption and cognitive impairment among Korean older adults: does gender matter? Abstracts: abstract_id: PUBMED:24230998 Alcohol consumption and cognitive impairment among Korean older adults: does gender matter? Background: This study investigated gender differences in the relationship between alcohol consumption and cognitive impairment among older adults in South Korea. Methods: Using data from the Korean Longitudinal Study of Ageing, 2,471 females and 1,657 males were analyzed separately. Cognitive impairment was measured based on the Korean version of the Mini-Mental State Exam score. Logistic regression was conducted to examine the relationship between alcohol consumption and cognitive impairment among Korean older adults. Results: Multivariate analysis showed that compared to moderate drinkers, past drinkers were more likely to be cognitively impaired for women, while heavy drinkers were more likely to be cognitively impaired for men. Conclusions: Findings suggest that the relationship between alcohol consumption and cognition varies with gender. Clinicians and service providers should consider gender differences when developing strategies for the prevention and treatment of alcohol-related cognitive decline among older adults. abstract_id: PUBMED:36625173 Associations Between Korean Coronary Heart Disease Risk Score and Cognitive Function in Dementia-Free Korean Older Adults. Background: Cardiovascular risk is a modifiable factor that can help prevent dementia. Given the dearth of optimal treatment options, managing dementia risk factors is crucial. We examined the association between cardiovascular risk, as measured by the Korean coronary heart disease risk score (KRS), and cognitive function in dementia-free elderly individuals. Methods: We enrolled 8,600 individuals (average age: 69.74 years; 5,206 women) who underwent a medical evaluation from the National Health Insurance Service. KRS was calculated using age, sex, blood pressure, lipid profile, diabetes, and smoking status. Cognitive function was evaluated using Korean Dementia Screening Questionnaire-Cognition (KDSQ-C). Scores of ≥ 6 indicated a cognitive decline. Logistic regression analysis was used to estimate the odds ratio (OR) and 95% confidence interval (CI). Weight, height, stroke history, coronary heart disease history, alcohol consumption, and physical activity engagement were adjusted. Results: The lowest, middle, and highest groups, according to the KRS, were 5,923 (68.9%), 2,343 (27.2%), and 334 (3.9%), respectively. The highest KRS group in all participants exhibited a greater risk of cognitive decline than the lowest KRS group (OR, 1.339; 95% CI, 1.034-1.734; P = 0.027). The highest KRS female group aged 71-75 years old exhibited greater cognitive decline than the corresponding lowest KRS group (OR, 1.595; 95% CI, 1.045-2.434; P = 0.031). Conclusion: Individuals with high cardiovascular risk were associated with poorer cognitive function than those with low risk, especially older women. Cardiovascular risk factors should be carefully managed to promote healthy mental aging in dementia-free elderly individuals. abstract_id: PUBMED:25672924 Effects of vascular risk factors and APOE ε4 on white matter integrity and cognitive decline. Objective: To investigate the effects of vascular risk factors and APOE status on white matter microstructure, and subsequent cognitive decline among older people. 
Methods: This study included 241 participants (age 60 years and older) from the population-based Swedish National Study on Aging and Care in Kungsholmen in central Stockholm, Sweden, who were free of dementia and stroke at baseline (2001-2004). We collected data through interviews, clinical examinations, and laboratory tests. We measured fractional anisotropy (FA) and mean diffusivity (MD) on diffusion tensor imaging, and estimated volume of white matter hyperintensities using automatic segmentation. We assessed global cognitive function with the Mini-Mental State Examination at baseline and at 3- and/or 6-year follow-up. We analyzed the data using multivariate linear regression and linear mixed models. Results: Heavy alcohol consumption, hypertension, and diabetes were significantly associated with lower FA or higher MD (p < 0.05). When aggregating heavy alcohol consumption, hypertension, and diabetes together with current smoking, having an increasing number of these 4 factors concurrently was associated with decreasing FA and increasing MD (ptrend < 0.01), independent of white matter hyperintensities. Vascular risk factors and APOE ε4 allele interacted to negatively affect white matter microstructure; having multiple (≥2) vascular factors was particularly detrimental to white matter integrity among APOE ε4 carriers. Lower tertile of FA and upper tertile of MD were significantly associated with faster Mini-Mental State Examination decline. Conclusions: Vascular risk factors are associated with reduced white matter integrity among older adults, which subsequently predicted faster cognitive decline. The detrimental effects of vascular risk factors on white matter microstructure were exacerbated among APOE ε4 carriers. abstract_id: PUBMED:25336352 Drinking behavior among low-income older adults: a multimethod approach to estimating alcohol use. Background: Substance abuse is the fastest growing health concern for older adults. Heavy drinking among older persons is associated with an increased risk of health consequences such as diabetes, cognitive impairment, sleep issues, and depression. It is important to note, however, the prevalence estimates of alcohol use among older adults are often based on inconsistent methodology. To address these potential methodological shortcomings, this study examines drinking patterns among low-income older adults using both self-report and unobtrusive methods. Methods: The study was conducted in a low-income residential senior center in the United States. A total of 174 participants, aged 60 years or older, completed 2 self-administered cross-sectional surveys. A bogus recycling program was implemented to assess the amount of alcohol consumed by residents. Logistic regression analysis was utilized to model predictors of drinking status and to determine predictors of 3 category Alcohol Use and Disorders Identification Test scores. Bivariate associations that predicted associations with alcohol use were included in the final multivariate model. Alcohol containers collected from recycling were converted to standard drink estimates in order to calculate the capital consumption of residents. Results: About 40% of respondents reported consuming alcohol and 25% reported drinking at least once a week. On average, a total of 1,079 drinks were consumed per month. There were 3 significant predictors of drinking status: age, education, and diagnosis of diabetes.
Additionally, there appears to be an increase in recycled alcohol containers coinciding with the time residents received their social security checks. Conclusions: Overall, the combination of self-report and unobtrusive measures of alcohol consumption has potential to highlight different aspects of drinking behavior with a population living in a single dwelling such as a senior center apartment complex, residential hall, and the like. abstract_id: PUBMED:36852519 Factors affecting self-care among community-dwelling hypertensive older adults: A cross-sectional study. Aim: To examine self-care behaviours among older adults with hypertension and identify related factors, including cognitive function, religious belief and comorbidities. Design: A cross-sectional study. Methods: Self-care behaviours included diet and health. Participants completed a survey including items on demographics, disease-related characteristics and self-care behaviours and underwent cognitive function tests. Data were analysed using descriptive statistics, and multiple regression analysis was performed to analyse the factors affecting self-care. Results: Regarding diet behaviour, older religious adults and those with higher scores on the Montreal Cognitive Assessment-Korean version had higher levels of self-care scores according to the HBP-SC. Regarding health behaviour, older adults with no comorbidities had higher levels of self-care scores according to the HBP-SC. Conclusion: Factors affecting self-care diet behaviour include religion and Montreal Cognitive Assessment-Korean version scores and those affecting health behaviour include comorbidities among older adults with hypertension. Therefore, to improve their self-care behaviours, their religious practices and comorbidities should be considered, cognitive function should be assessed, and tailored education should be provided. Implications For The Profession And/or Patient Care: This study investigated factors affecting self-care behaviours of hypertensive older adults in South Korea. The self-care was divided into diet and health behaviours. The factors influencing diet behaviour were religion and Montreal Cognitive Assessment-Korean version scores, and the factor influencing health behaviour was comorbidities. We also investigated self-care behaviour patterns. Older adults with hypertension were good at controlling alcohol consumption and did not forget to fill prescriptions. However, they were poor at reading nutrition labels to check on sodium content and checking blood pressure at home. Therefore, nurses could develop interventions considering these influencing factors and behavioural patterns to improve self-care behaviours and enhance health for older adults with hypertension. Impact: Hypertension in older adults affects their health conditions and performance of self-care behaviours. Nurses could assess self-care based on diet and health behaviours. Additionally, further developing tailored programmes is recommended considering factors like religious belief, cognitive function and comorbidities. Reporting Method: This study followed the STROBE guidelines. Patient Or Public Contribution: This study used a convenience sample of 105 participants aged ≥ 65 years recruited from a Korean hospital. abstract_id: PUBMED:28320132 Early markers of kidney dysfunction and cognitive impairment among older adults. Background: Age-related decline in kidney function can be an important risk factor for cognitive impairment in older adults. 
In this study, we examined several kidney function measures for the association with cognitive function in older adults in the hope of identifying early and sensitive markers that can be used for the detection or screening of cognitive impairment. Methods: A total of 1982 older participants (aged ≥60 years) in the 1999-2002 National Health and Nutrition Examination Survey were analyzed for the association between kidney function and cognitive impairment using multivariate logistic regression and general linear models. Cognitive functioning was assessed during the household interview using a version of the Digit Symbol Substitution Test of the Wechsler Adult Intelligence Scale III. In our study, participants with a score of <31 (the 25th percentile of the distribution), or who were unable to complete the sample exercise due to cognitive limitations, were classified as having cognitive impairment. Results: Of 1982 older adults, 503 had cognitive impairment (weighted prevalence, 15.38%). Among the kidney function measures that we examined, the levels of serum cystatin C and urinary albumin were found to be significantly associated with cognitive impairment after adjusting for age, sex, race/ethnicity, poverty status, education, physical activity, BMI, cigarette smoking, and alcohol consumption. Cognitive functioning scores decreased significantly with increasing levels of kidney dysfunction markers. Conclusion: Serum cystatin C and urinary albumin, which are early markers of chronic kidney disease, might serve as early and effective markers for cognitive decline in older adults. Mechanisms underlying the observed association need to be further characterized. abstract_id: PUBMED:27719514 Lifetime alcohol use and cognitive performance in older adults. Substance use is an important clinical issue in the older adult population. As older adults are susceptible to cognitive disorders, the intersection of the fields of substance use and cognitive neuroscience is an active area of research. Prior studies of alcohol use and cognitive performance are mixed, and inconsistencies may be due to under- or over-adjustment for confounders. This article adds to this literature by conducting a secondary analysis of self-reported lifetime history of alcohol use and cognitive performance in older adults (n = 133). It was hypothesized that current alcohol users would have poorer cognitive performance compared to never/minimal and former alcohol users. Older adult participants were classified into never/minimal alcohol users, former alcohol users, and current alcohol users. A neurocognitive battery included a global cognitive measure and individual measures of attention, memory, fluency, and executive function. A directed acyclic graph-based approach was used to select variables to be included in the multiple linear regression models. Though unadjusted analyses showed some significant associations between alcohol use and cognitive performance, all associations between alcohol use and cognitive performance were eliminated after adjusting for age, education, sex, race, and smoking pack years. Alcohol drink years were not significantly associated with cognitive performance among current and former alcohol users. These results suggest that lifetime alcohol use is not significantly associated with cognitive performance in older adults after adjustment for key confounders. Inconsistencies in prior studies may be due to uncontrolled confounding and/or unnecessary adjustment of mediators and/or colliders.
abstract_id: PUBMED:27865039 Alcohol consumption during adolescence is associated with reduced grey matter volumes. Background And Aims: Cognitive impairment has been associated with excessive alcohol use, but its neural basis is poorly understood. Chronic excessive alcohol use in adolescence may lead to neuronal loss and volumetric changes in the brain. Our objective was to compare the grey matter volumes of heavy- and light-drinking adolescents. Design: This was a longitudinal study: heavy-drinking adolescents without an alcohol use disorder and their light-drinking controls were followed-up for 10 years using questionnaires at three time-points. Magnetic resonance imaging was conducted at the last time-point. Setting: The area near Kuopio University Hospital, Finland. Participants: The 62 participants were aged 22-28 years and included 35 alcohol users and 27 controls who had been followed-up for approximately 10 years. Measurements: Alcohol use was measured by the Alcohol Use Disorders Identification Test (AUDIT)-C at three time-points during 10 years. Participants were selected based on their AUDIT-C score. Magnetic resonance imaging was conducted at the last time-point. Grey matter volume was determined and compared between heavy- and light-drinking groups using voxel-based morphometry on three-dimensional T1-weighted magnetic resonance images using predefined regions of interest and a threshold of P < 0.05, with small volume correction applied on cluster level. Findings: Grey matter volumes were significantly smaller among heavy-drinking participants in the bilateral anterior cingulate cortex, right orbitofrontal and frontopolar cortex, right superior temporal gyrus and right insular cortex compared to the control group (P < 0.05, family-wise error-corrected cluster level). Conclusions: Excessive alcohol use during adolescence appears to be associated with an abnormal development of the brain grey matter. Moreover, the structural changes detected in the insula of alcohol users may reflect a reduced sensitivity to alcohol's negative subjective effects. abstract_id: PUBMED:33587891 Relationship Between Oral Health and Fractures in Community-Dwelling Older Japanese Adults. Objective: To investigate the relationship between poor oral health and the incidence of fall-related fractures in older Japanese individuals. Design: A 9-year prospective cohort study. Setting And Participants: Participants comprised 937 community-dwelling older Japanese adults aged 70 years or older. They all lived in the Tsurugaya district, a suburban area of Sendai city, and underwent comprehensive geriatric assessment, including an oral examination, in a public facility. Measurements: The exposure variables were related to oral health status (posterior occlusal support, number of remaining teeth, and occlusal force). The outcome measure was the incidence of fall-related fractures, which was determined by National Health Insurance data. Analyzed covariates included age, sex, medical history, smoking, alcohol drinking, educational level, depressive symptoms, cognitive impairment, physical function, body mass index, and history of falls. Statistical relationships were examined by calculating hazard ratios (HRs) at 95% confidence intervals (CIs) using the Cox proportional hazard model.
Results: In the multivariate analysis, the HRs of fall-related fractures were significantly higher in those with unilateral posterior occlusal support (HR, 2.72; 95% CI, 1.13-6.55) and no posterior occlusal support (HR, 2.58; 95% CI, 1.29-5.15) than in those with bilateral posterior occlusal support. The HRs (95% CIs) of fall-related fractures in individuals with 10-19 and 1-9 teeth and edentulous individuals were 1.77 (0.81-3.89), 2.67 (1.24-5.75), and 2.31 (1.01-5.28), respectively, compared to those with ≥20 teeth. Conclusions And Implications: Poor oral health status is a risk factor for the incidence of fall-related fractures in community-dwelling older Japanese individuals. The findings suggest that attention should be focused on oral health status to further understand the risk of fall-related fractures among community-dwelling older adults. abstract_id: PUBMED:33538990 The identification of established modifiable mid-life risk factors for cardiovascular disease which contribute to cognitive decline: Korean Longitudinal Study of Aging (KLoSA). Introduction: We explored how different chronic diseases, risk factors, and protective factors highly associated with cardiovascular diseases (CVD) are associated with dementia or Mild Cognitive Impairment (MCI) in Korean elders, with a focus on those that manifest in mid-life. Methods: A CVD-free cohort (n = 4289) from the Korean Longitudinal Study of Aging was selected to perform Cox mixed-effects proportional hazard regressions. Eighteen control variables with strong associations to CVD were chosen as explanatory variables, and Mini-Mental State Examination (MMSE) score cut-off for dementia and MCI were used as outcome variables. Results: The statistically significant (P < 0.05) adverse factors that contribute to developing dementia were age (aHR 1.07, 1.05-1.09), Centre for Epidemiological Studies Depression Scale (CESD-10) (aHR 1.17, 1.12-1.23), diagnosis with cerebrovascular disease (aHR 3.73, 1.81-7.66), living with diabetes (aHR 2.30, 1.22-4.35), and living with high blood pressure (HBP) (aHR 2.05, 1.09-3.87). In contrast, the statistically significant protective factors against developing dementia were current alcohol consumption (aHR 0.67, 0.46-0.99), higher educational attainment (aHR 0.36, 0.26-0.56), and regular exercise (aHR 0.37, 0.26-0.51). The factors with a statistically significant adverse association with progression to MCI were age (aHR 1.02, 1.01-1.03) and CESD-10 (aHR 1.17, 1.14-1.19). In contrast, the statistically significant protective factors against developing MCI were BMI (aHR 0.96, 0.94-0.98), higher educational attainment (aHR 0.33, 0.26-0.43), and regular exercise (aHR 0.83, 0.74-0.92). Conclusion: In light of the protective factors against MCI and dementia, implementing a regular exercise routine well before mid-life and cognitive decline is important, with adjustments made for those suffering from health conditions, so they can continue exercising despite their morbidity. Further attention to diabetes care and management is needed for patients who already show decline in cognitive ability, as it is likely that their MCI impacts their ability to manage their existing chronic conditions, which may further worsen their cognitive ability. Answer: Yes, gender does matter in the relationship between alcohol consumption and cognitive impairment among Korean older adults.
According to a study that investigated this relationship, among women, past drinkers were more likely to be cognitively impaired than moderate drinkers, whereas among men, heavy drinkers were more likely to be cognitively impaired (PUBMED:24230998). This suggests that the effects of alcohol consumption on cognition vary with gender, and this should be taken into account by clinicians and service providers when developing strategies for the prevention and treatment of alcohol-related cognitive decline among older adults.
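The gender-stratified analysis summarized above (PUBMED:24230998) is, in essence, a pair of logistic regressions of a binary cognitive-impairment indicator (an MMSE score below a cut-off) on drinking category, fitted separately for women and men with moderate drinkers as the reference group. The sketch below shows the general shape of such a model; the data frame and column names are hypothetical placeholders, not the actual KLoSA variables.

```python
# Hypothetical sketch of a gender-stratified logistic regression of cognitive impairment
# on drinking category, in the spirit of PUBMED:24230998. Column names are placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def fit_by_gender(df: pd.DataFrame, gender: str):
    sub = df[df["gender"] == gender]
    # Moderate drinkers are the reference category, as in the abstract.
    model = smf.logit(
        "impaired ~ C(drinking, Treatment(reference='moderate')) + age + education",
        data=sub,
    ).fit(disp=False)
    # Exponentiated coefficients give odds ratios with 95% confidence intervals.
    return np.exp(model.params), np.exp(model.conf_int())

# Usage, assuming a data frame with columns gender, impaired, drinking, age, education:
# or_women, ci_women = fit_by_gender(data, "female")
# or_men, ci_men = fit_by_gender(data, "male")
```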
Instruction: Is there a minimum number of lymph nodes that should be histologically assessed for a reliable nodal staging of T3N0M0 colorectal carcinomas? Abstracts: abstract_id: PUBMED:12355405 Is there a minimum number of lymph nodes that should be histologically assessed for a reliable nodal staging of T3N0M0 colorectal carcinomas? Background And Objectives: Because of the existing controversy, we searched for a cutoff value for the number of lymph nodes (LNs) to be examined in order to establish a reliable node-negative stage in colorectal carcinomas (CRCs). Methods: From the SEER database, 8,574 T3N0M0 first, single, histologically confirmed, surgically treated CRCs, with at least 1 LN examined histologically, were considered. As a first approach, the relationships between number of examined LNs and 5- and 10-year overall survival (OS) rates, computed by the Kaplan-Meier method, were assessed. Next, multivariate analysis was performed; a proportional hazards model was fitted to the data and used to obtain a smoothed plot of the martingale residuals vs. the number of negative LNs. Results: Both OS rates displayed an improvement with an increase of number of LNs examined. The smoothed plot of the martingale residuals against the number of negative LNs was reasonably linear. Conclusions: Both approaches suggest that there is no cutoff value for the number of LNs to be examined for an adequate nodal staging; for a reliable pN0 staging, as many LNs should be assessed as possible. However, qualitative features of lymph nodes (e.g., those identified by sentinel lymphadenectomy) may alter this recommendation. abstract_id: PUBMED:11986347 The influence of nodal size on the staging of colorectal carcinomas. Aims: The reliable identification of node negative colorectal carcinomas (CRCs) has often been linked to the histological examination of a minimum number of lymph nodes. The sizes of the lymph nodes, their metastatic status, and their number were investigated to establish whether these parameters are related, and whether their relation could help in determining the adequacy of staging. Methods: One thousand three hundred and thirty four negative lymph nodes, 189 metastatic lymph nodes, and 43 pericolonic/perirectal tumour deposits measuring ≥ 3 mm from 60 node positive and from 63 node negative patients with CRC were assessed for size. Results: The mean size (SD) of these structures was 4.5 (2.7) mm. The lymph nodes were significantly larger in the CRCs with metastatic nodes (4.7 v 4.3 mm). Involved nodes were significantly larger than negative nodes (6.3 v 4.2 mm), despite the fact that the largest node was ≤ 5 mm in one third of node positive CRCs. The examination of the seven largest nodes could have adequately staged 97% of node positive CRCs and 98% of all CRCs. Conclusions: The nodal staging of CRCs is dependent not only on the number of lymph nodes investigated, but also on qualitative features of the lymph nodes assessed, including their size. Lymph nodes are not equivalent and any study neglecting this fact will give grounds for error in the recommendation of a minimum number of nodes for the reliable determination of node negative CRCs. Although pathologists should aim to recover all nodes, a negative nodal status based on only seven nodes can be reliable. abstract_id: PUBMED:24045143 Nodal status, number of lymph nodes examined, and lymph node ratio: what defines prognosis after resection of colon adenocarcinoma?
Background: Lymph node ratio (LNR) has been proposed as an optimal staging variable for colorectal cancer. However, the interactive effect of total number of lymph nodes examined (TNLE) and the number of metastatic lymph nodes (NMLN) on survival has not been well characterized. Study Design: Patients operated on for colon cancer between 1998 and 2007 were identified from the Surveillance, Epidemiology, and End Results database (n = 154,208) and randomly divided into development (75%) and validation (25%) datasets. The association of the TNLE and NMLN on survival was assessed using the Cox proportional hazards model with terms for interaction and nonlinearity with restricted cubic spline functions. Findings were confirmed in the validation dataset. Results: Both TNLE and NMLN were nonlinearly associated with survival. Patients with no lymph node metastasis had a decrease in the risk of death for each lymph node examined up to approximately 25 lymph nodes, while the effect of TNLE was negligible after approximately 10 negative lymph nodes (NNLN) in those with lymph node metastasis. The hazard ratio varied considerably according to the TNLE for a given LNR when LNR ≥ 0.5, ranging from 2.88 to 7.16 in those with an LNR = 1. The independent effects of NMLN and NNLN on survival were summarized in a model-based score, the N score. When patients in the validation set were categorized according to the N stage, the LNR, and the N score, only the N score was unaffected by differences in the TNLE. Conclusions: The effect of the TNLE on survival does not have a unique, strong threshold (ie, 12 lymph nodes). The combined effect of NMLN and TNLE is complex and is not appropriately represented by the LNR. The N score may be an alternative to the N stage for prognostication of patients with colon cancer because it accounts for differences in nodal samples. abstract_id: PUBMED:17058731 Number of lymph nodes examined and its impact on colorectal cancer staging. The status of lymph nodes is the most important prognosticator in colorectal cancer patients. Patients with lymph node involvement have a lower survival rate and are candidates for adjuvant therapy. The purpose of our study was to determine the number of lymph nodes that needs to be examined to accurately detect nodal metastasis. We conducted a retrospective study of 151 patients who underwent colorectal cancer operation at Harbor-UCLA Medical Center. Data from the operative report and pathology report were collected and analyzed. Fourteen (33.3%) patients with five to nine nodes examined had positive nodes. Twenty-six (57.8%) patients with 10 to 14 nodes examined had positive nodes. Patients who had 10 to 14 nodes examined were significantly more likely to have positive lymph nodes (P = 0.03). Patients with advanced T stage had a significantly higher number of positive lymph nodes (78.1% in T4 vs 11.1% in T1, P < 0.0001). Patients with poorly differentiated cancer showed a trend toward a higher positive node rate. Tumor differentiation and T stage seem to correlate with higher nodal metastasis rate. A higher number of lymph nodes examined was associated with a higher nodal metastasis rate. Examination of at least 10 lymph nodes would increase the yield of positive lymph nodes and avoid under-staging of patients with colorectal cancer. abstract_id: PUBMED:15783001 Minimal number of lymph nodes that need to be examined for adequate staging of colorectal cancer--factors influencing lymph node harvest.
Background/aims: Accurate assessment of lymphatic node status is an essential component in staging of colorectal cancer and determining the need for adjuvant treatment. The risk of understaging nodal status decreases by increasing the number of examined lymph nodes. Several recommendations have been published for the minimum number of lymphatic nodes that need to be harvested for accurate nodal staging with minimal chance of error, ranging from 6 to 17 lymph nodes. However, the number of harvested lymph nodes is very variable, and this variability is poorly understood. The aim of this study was to determine factors associated with the number of retrieved lymph nodes in patients with colorectal cancer. Methodology: Clinical and histological characteristics of 177 patients operated for colorectal cancer were analyzed. Results: In multiple regression analysis, male gender, better tumor differentiation and greater tumor size, as well as the presence of acute inflammation were determined as significant independent predictors of the increased number of resected and examined lymph nodes. Conclusions: The determination of the minimum number of lymph nodes required for accurate nodal staging of patients with colorectal cancer needs to be individualized. abstract_id: PUBMED:38177714 Acetone compression improves lymph node yield and metastasis detection in colorectal cancer. Lymph node status is one of the most important prognostic factors in colorectal cancer, and accurate pathological nodal staging and detection of lymph node metastases is crucial for determination of post-operative management. Current guidelines, including the TNM staging system and European Society for Medical Oncology (ESMO) guidelines, recommend examination of at least 12 lymph nodes. However, identification of an adequate number of lymph nodes can be challenging, especially in the setting of neoadjuvant treatment, which may reduce nodal size. In this study, we investigated 384 colorectal cancer resections that were processed at our department of pathology between January 2012 and December 2022, in which the number of detected lymph nodes was less than 12 subsequent to conventional preparation of mesocolic fat tissue. By means of acetone compression, lymph node harvest increased significantly (p < 0.0001), and the intended number of ≥ 12 lymph nodes was achieved in 98% of resection specimens. The number of nodal positive cases increased significantly from n = 95 (24.7%) before versus n = 131 (34.1%) after acetone compression due to additionally identified lymph node metastases (p < 0.001). In 36 patients (9.4%) initially considered as nodal negative, acetone compression led to a staging adjustment to a nodal positive category and thereby drove a recommendation to offer post-operative therapy. In conclusion, acetone compression is a reliable and useful method implementable in routine surgical pathology for the retrieval of lymph nodes in colorectal cancer specimen, allowing for an adequate lymph node sampling and an increase in nodal staging reliability. abstract_id: PUBMED:10607924 Nodal staging of colorectal carcinomas from quantitative and qualitative aspects. Can lymphatic mapping help staging? Retrospective data analysis was performed to determine the minimum number of lymph nodes required for the staging of colorectal carcinomas, and a prospective feasibility study was carried out to identify sentinel nodes in order to clarify whether these may predict the nodal status.
From among 240 colorectal carcinoma specimens investigated between 1996 and 1998, 224 tumors were analyzed for their nodal status. Lymphatic mapping with vital patent blue dye injection into the peritumoral sub-serosal layer was performed in 25 patients. Blue nodes were identified by the pathologist in the unfixed specimen immediately after the resection of the bowel and were assessed separately. Of the 123 node-positive carcinomas, 40 had more than 3 nodes involved. The nodal positivity increased substantially when more than 6 nodes were assessed. The cumulative percentage analysis demonstrated that ideally 16 and 13 nodes should be obtained for the identification of any nodal involvement or the involvement of more than 3 nodes, respectively. Lymphatic mapping was successful in 24 patients (96%). Blue nodes were predictive of the nodal status in 19 cases (79%), and were the only sites of metastasis in 2 patients (15% of the node-positive cases). Lymphatic mapping with the vital blue dye technique does not seem to facilitate the staging of colorectal cancers, at least in our patient population with relatively large and deeply infiltrating tumors, and unless the technique is improved or other selective features of lymph nodes are found, all lymph nodes should be assessed. A minimum of 6 nodes, and an optimum of 16 nodes or more, are suggested from these series. abstract_id: PUBMED:18341898 Is there a minimum number of lymph nodes that should be examined after surgical resection of colorectal cancer? Metastasis to regional lymph nodes, after distant metastasis, is the most important prognostic factor of colorectal carcinomas. It is also of primary importance in decisions related to the administration of adjuvant treatments. Most scientific associations recommend the examination of at least 12 lymph nodes for the reliable determination of the absence of nodal metastases. We performed a literature review on lymph node recovery in order to determine whether 12 is the minimum and optimal number of lymph nodes to be examined after colorectal cancer surgery. The differences between authors suggest that an optimal number of lymph nodes to be examined after colorectal cancer surgery probably does not exist and depends on many factors. Thus, recovering as many lymph nodes as possible seems to be a good option. abstract_id: PUBMED:20947868 Limited lymph-node recovery based on lymph-node localisation is sufficient for accurate staging. Aims: To assess the reliability of nodal staging in colorectal carcinomas (CRCs) when only lymph nodes close to the tumour are recovered and examined histologically. Methods: Lymph nodes from CRC resection specimens were recovered into two fractions: one from around the tumour and the 3 cm sidelong bowel segment associated with it in the proximal and distal directions, and another from the remaining part of the resection specimen. Results: Of the 762 CRCs (239 right colon, 251 left colon, 257 rectum, 15 unspecified localisation) there were 393 node-negative and 369 node-positive cases. The median number of LNs examined was 18. The assessment of the LNs located in the close fraction (median 13) yielded an adequate qualitative nodal status in 756 patients (99.2%). In four cases (two rectal and two right colic), no LN metastases could be identified in the close-fraction lymph nodes, but nodes from the distant fraction contained metastases.
Of the node-positive carcinomas which had at least one positive lymph node in the close fraction, 203 belonged to the pN1 category and 162 to the pN2 category of the Tumour Node Metastasis staging system. Only 14 cases (10 rectal, two right and two left colic tumours) were misclassified as pN1 (on the basis of lymph nodes recovered from the close fraction) although they were of the pN2 category. Conclusions: In general, nodal status of CRCs may be adequately assessed by examining the lymph nodes from the close fraction around the tumour and the 3 cm sidelong bowel segment in both directions. abstract_id: PUBMED:8712176 Lymph node recovery from colorectal resection specimens removed for adenocarcinoma. Trends over time and a recommendation for a minimum number of lymph nodes to be recovered. Recovery of pericolorectal lymph nodes from colectomy specimens has long been part of colorectal cancer staging. Recently, adjuvant therapy has been added for high stage carcinomas, providing further impetus for performing careful lymph node dissections. Pericolorectal lymph nodes were examined to determine if there has been a change over time in the number of lymph nodes recovered and proportion of specimens with pericolonic lymph node metastases from colorectal carcinoma resection specimens. Also, the authors attempted to establish a recommendation for a minimum number of lymph nodes that should be recovered before a colon can be considered free of metastases. Slides and reports of the first 20 consecutive pT3 colorectal carcinoma resections in each year from 1955 to 1995 at William Beaumont Hospital that did not have known metastases at the time of surgery were reviewed (750 specimens total). The mean number of lymph nodes recovered per specimen and incidence of detected lymph node metastases increased over the 41-year period, with the greatest increase occurring during 1992-1995. The greatest proportion of patients with lymph node metastases detected occurred in the 17 to 20 lymph nodes recovered per specimen group. Specimens with more than 20 lymph nodes did not have a higher proportion of lymph node metastases detected compared to specimens with 17 to 20 lymph nodes. Approximately 20% of the specimens with metastases had more than 17 lymph nodes recovered. These results suggest that pathologists should retrieve all the lymph nodes that can be recovered, but at least 17 lymph nodes should be recovered to insure accurate documentation of nodal metastases when present. Answer: The question of a minimum number of lymph nodes to be histologically assessed for reliable nodal staging of T3N0M0 colorectal carcinomas has been addressed in several studies, but there is no consensus on an exact cutoff value. One study from the SEER database involving 8,574 T3N0M0 colorectal carcinoma cases suggested that there is no specific cutoff value for the number of lymph nodes to be examined for adequate nodal staging, and recommended that as many lymph nodes as possible should be assessed (PUBMED:12355405). Another study indicated that the examination of the seven largest nodes could have adequately staged 97% of node-positive colorectal carcinomas and 98% of all colorectal carcinomas (PUBMED:11986347). 
A study using the Surveillance, Epidemiology, and End Results database found that the effect of the total number of lymph nodes examined on survival does not have a unique, strong threshold, suggesting that the combined effect of the number of metastatic lymph nodes and the total number examined is complex and not appropriately represented by the lymph node ratio alone (PUBMED:24045143). Another study recommended examining at least 10 lymph nodes to increase the yield of positive lymph nodes and avoid under-staging (PUBMED:17058731). Factors influencing the number of lymph nodes harvested were identified, suggesting that the determination of the minimum number of lymph nodes required for accurate nodal staging needs to be individualized (PUBMED:15783001). A study on acetone compression showed that this method significantly increased lymph node yield and metastasis detection, achieving the intended number of ≥12 lymph nodes in 98% of resection specimens (PUBMED:38177714). A retrospective and prospective study suggested that ideally 16 and 13 nodes should be obtained for the identification of any nodal involvement or the involvement of more than 3 nodes, respectively, although lymphatic mapping with vital blue dye did not facilitate staging in their patient population (PUBMED:10607924). A literature review concluded that recovering as many lymph nodes as possible seems to be a good option, as an optimal number of lymph nodes to be examined probably does not exist and depends on many factors (PUBMED:18341898).
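The cutoff question in PUBMED:12355405 was examined in two ways: Kaplan-Meier overall survival compared across strata of examined-node counts, and a smoothed plot of Cox-model martingale residuals against the number of negative nodes (an approximately linear smooth arguing against any single threshold). The sketch below illustrates that general workflow with the lifelines and statsmodels libraries; the data frame and column names are hypothetical, and this is not the original authors' code.

```python
# Hypothetical sketch of the two approaches described in PUBMED:12355405.
# Column names (months, death, n_nodes, age) are placeholders, not SEER field names.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from statsmodels.nonparametric.smoothers_lowess import lowess

def five_year_os_by_node_strata(df: pd.DataFrame, bins=(0, 5, 10, 20, 100)):
    """Kaplan-Meier 5-year overall survival per stratum of examined-node count."""
    strata = pd.cut(df["n_nodes"], bins=bins)
    estimates = {}
    for label, grp in df.groupby(strata, observed=True):
        kmf = KaplanMeierFitter()
        kmf.fit(grp["months"], event_observed=grp["death"], label=str(label))
        estimates[str(label)] = kmf.predict(60)  # survival probability at 60 months
    return estimates

def smoothed_martingale(df: pd.DataFrame):
    """Martingale residuals from a Cox model (without n_nodes), smoothed over n_nodes."""
    covars = df[["months", "death", "age"]]
    cph = CoxPHFitter()
    cph.fit(covars, duration_col="months", event_col="death")
    resid = cph.compute_residuals(covars, kind="martingale").loc[df.index]
    # A roughly linear smooth suggests no natural cutoff for the node count.
    return lowess(resid.iloc[:, 0], df["n_nodes"], return_sorted=True)
```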
Instruction: Colonic conduit in children: protection of the upper urinary tract 16 years later? Abstracts: abstract_id: PUBMED:8709335 Colonic conduit in children: protection of the upper urinary tract 16 years later? Purpose: Complication rates of up to 86.6% have been reported after creation of an ileal conduit. In contrast to others, we construct an antirefluxing isoperistaltic colonic conduit for incontinent diversion in children. Materials And Methods: Between 1968 and 1989 a colonic conduit was created in 105 patients up to age 20 years for neurogenic bladder (76), bladder exstrophy (16) and other reasons (13). Long-term followup (minimum 5 years, mean 16.3 years, range 5 to 26) was possible in 84 patients (159 renal units). Results: Early and late stenosis at the ureterocolic anastomosis, and stomal stenosis occurred in 7.6 and 15.5% of the cases, respectively. Renal calculi developed in 8.2% of the renal units. Eight kidneys without function after recurrent pyelonephritis, calculi or stenosis at the ureterocolic anastomosis were removed during followup. Compared to preoperative status 3 of the remaining 151 renal units had increased dilatation of the upper urinary tract at the last followup. A total of 31 patients had complications during followup and 18 underwent conversion to continent urinary diversion. Conclusions: When incontinent urinary diversion is necessary in children, a colonic conduit with the option of continent conversion should be created before deterioration of the upper urinary tract. abstract_id: PUBMED:12741337 Ileal conduit and urinary stoma complications. The ileal conduit has been widely used for urinary diversion. It is a safe procedure with acceptable results, but it has become clear that the conduit can give rise to serious complications, notably renal deterioration in the long run. Stenosis of the ileal conduit, usually developing insidiously many years after the diversion, may be the cause of upper urinary tract damage. In most cases, a variety of possible factors is considered, including microvascular ischemia, urine-borne toxic material, infectious and allergic stimuli and an immunologic defect. Crohn's disease may affect an ileal loop urinary conduit. It presents as a diffuse loop stenosis. Recurrent transitional carcinoma arising within an ileal conduit following cystectomy for malignant disease is rare, but late malignancy in bowel segments exposed to urine without fecal stream is well known. In the literature, the distinction between conduit complications of patients with underlying benign disease and those with malignancy has not always been clear. Stoma and skin complications are frequently observed, and patients who practiced inadequate stoma care routines are more likely to show peristomal skin complications. Patients bearing an abdominal urostomy should be followed up stringently in stoma centers. abstract_id: PUBMED:16904445 Long-term effects of ileal conduit urinary diversion on upper urinary tract in bladder cancer. Objectives: To evaluate the functional and morphologic changes of the upper urinary tract after radical cystectomy and ileal conduit urinary diversion for bladder cancer. Methods: Radical cystectomy and ileal conduit urinary diversion were performed in a total of 249 patients with bladder cancer at our hospital from 1980 to 1999. Of the 249 patients, 67 were excluded because of the presence of preoperative lesions in the upper urinary tract or elevated serum creatinine (greater than 1.4 mg/dL).
Of the remaining 182 patients, 82 were also excluded because of incomplete follow-up or death less than 5 years after surgery. Results: Of the 249 patients, a total of 100 (40.2%) who had no preoperative lesions in upper urinary tract and normal renal function and survived 5 years or longer were included in this analysis. The median follow-up period was 91 months (range 60 to 193). The serum creatinine was greater than the normal range in 10 patients (10.0%), and radiologic changes in the upper urinary tract were observed in 14 patients (14.0%). They had diabetes mellitus nephropathy (4.0%) or specific comorbidities that could contribute to the deterioration of the upper urinary tract, such as ureteroileal anastomotic stricture (4.0%), chronic pyelonephritis (4.0%), urinary stones (1.0%), and upper urinary tract tumor (1.0%). Conclusions: The results of our study have shown that the functional and morphologic changes in the upper urinary tract after radical cystectomy and ileal conduit urinary diversion occur in patients with specific comorbidities that could contribute to those changes. Different from previous reports, none of the upper urinary tract changes developed without an obvious cause. abstract_id: PUBMED:6652429 The colonic conduit: a better method of urinary diversion? The colonic conduit urinary diversion has been assessed in a retrospective study of 47 children in order to compare the results of this operation with that of other forms of urinary diversion. The overall complication rate was 81%. The rate of stomal stenosis was 34% and that of upper tract dilatation was 36%. These results are very similar to the complication rate for the ileal conduit, which is now the most popular form of urinary diversion. It is suggested that all forms of urinary diversion have a high long-term complication rate and that other means of management should be considered more seriously. abstract_id: PUBMED:15864655 Urinary diversion in children and adolescents with neurogenic bladder: the Mainz experience. Part III: Colonic conduit. After the failure of conservative treatment of neurogenic bladders, urinary diversion has to be considered. For patients with chronic renal failure, severe dilated upper urinary tracts with deterioration of the renal function and those who are not able to perform a self-catheterization, the colonic conduit diversion is our therapy of choice. In this part of the study, we investigate the long-term safety of our concept for these patients in regard to protecting the upper urinary tracts. Between 1968 and 2002, colonic conduit urinary diversion was performed in 88 patients, in most of them in the era before continent diversion. Overall, 11 patients with a colonic conduit were converted to bladder substitution or continent cutaneous diversion during the follow-up period. These patients are no longer included in the long-term follow-up of colonic conduit patients. Of the 77 patients with colonic conduit diversion, 21 patients were not available for follow-up: 11 were deceased and 10 were lost to follow-up. Three of the deaths were related to nephrological complications in patients who already had impaired renal function before conduit diversion. An average follow-up of 21.8 years (2-32.7; median 23.8 years.) is available in 56 patients with 99 RUs (6 solitary kidneys, 7 nephrectomies). Five non-functioning kidneys were removed after recurrent pyelonephritis and two kidneys with pyonephrosis. Ureter stenoses were corrected in 6% of the RUs. 
As compared to preoperatively, the upper urinary tracts remained stable or improved in 97/99 RUs at the latest follow-up. A revision of the stoma was necessary in 16% (conduit elongation n =2, stenosis n =7) and calculi formations were treated in 8% of the RUs. For patients with chronic renal failure or who are unable to perform a catheterization of a continent stoma, the colonic conduit is a safe alternative in the long run. abstract_id: PUBMED:2733114 Urodynamic investigation of the ileal conduit: upper tract dilatation and the effects of revision of the conduit. Urodynamic studies of the ileal conduit were performed in 36 patients with normal upper urinary tracts (controls) and in 32 who had progressive upper tract dilatation within 6 years preceding the study. In controls the conduit emptied mainly by means of low pressure, to-and-fro activity. In contrast, high frequency, high amplitude peristaltic activity was found commonly in patients with upper tract dilatation, such activity being consistent with obstruction of the distal conduit. Of the 32 patients with upper tract dilatation 17 underwent reoperation on the conduit as well as postoperative urodynamic studies. Among these 17 patients a decrease was observed in basal pressure in the conduit postoperatively compared to preoperatively (5 +/- 4 and 12 +/- 6 cm. water, p less than 0.001). A decrease also was found postoperatively in the frequency (4 +/- 6 and 36 +/- 28 per hour, p less than 0.001) and amplitude (34 +/- 47 and 61 +/- 28 cm. water, p less than 0.001) of peristaltic activity. Radiological improvement in the appearance of the upper urinary tracts was found in 9 patients (53 per cent) postoperatively. These findings support the hypothesis that upper tract dilatation in certain patients with an ileal conduit may be caused by high pressure activity in the presence of free ureteral reflux. Such abnormalities, which are probably the result of obstruction of the distal conduit, may be detected by means of urodynamic evaluation and may be corrected by revision of the conduit. abstract_id: PUBMED:8647150 Upper urinary tract involvement after cystectomy and ileal conduit diversion for primary bladder carcinoma. Objectives: To design a proper follow-up for cystectomy and ileal conduit urinary diversion for primary bladder transitional carcinoma, we compared the radiographic characteristics of recurrent malignant upper tract lesions with those of benign ureteroileal anastomosis strictures. Methods: Over a 20-year period, we followed 61 patients who underwent cystectomy and ileal conduit diversion at our hospitals for a minimum period of 3 years. Excretory urography was performed routinely at 1, 3, 6 and 12 months after cystectomy and once yearly thereafter. Results: Five patients (8.2%) developed malignant ureteral obstruction (4 had metachronous upper tract tumors, and 1 patient had retroperitoneal lymph node metastasis which compressed the ureter). Eleven patients (18.0%) developed benign ureteroileal anastomotic stricture. The interval between cystectomy and initial detection of the malignant and benign upper tract lesion ranged between 34 and 118 months (mean 69 months) and between 1 and 20 months (mean 5.1 months), respectively. In all patients with malignant upper tract obstruction, a complete loss of renal function occurred within 10 months after the detection. Conversely, a progressive renal dysfunction was observed in patients with benign ureteroileal anastomotic stricture. 
All patients were asymptomatic before the detection of lesions on excretory pyelography. Conclusions: Our results suggest that cancer recurrence can occur even 10 years after cystectomy, typically progressing very rapidly within 1 year. A benign ureteroileal anastomotic stricture, on the other hand, tends to occur within 2 years but advances slowly. Consequently, a proper follow-up necessitates annual excretory urography and/or renal ultrasonography in all patients with ileal conduit urinary diversion after cystectomy. abstract_id: PUBMED:24772749 Role of blood C-reactive protein levels in upper urinary tract infection and lower urinary tract infection in adult patients (>16 years). Aim: Role of blood C-reactive protein levels in upper urinary tract infection and lower urinary tract infection in adult patients (>16 years). Material And Methods: The study included 2 groups of patients - a test group (n=58) and a control group (n=28). The test group was further classified into 2 groups: a) patients having upper urinary tract infection, and b) patients having lower urinary tract infection. Patients were subjected to the following tests - complete hemogram, blood C-reactive protein levels, urinalysis. Results: The mean value of C-reactive protein in the cases of upper urinary tract infection was 127.33 mg/L, which was significantly raised when compared to controls (t-value 12.370 and p-value <0.01). C-reactive protein was significantly raised in upper urinary tract infection in comparison to the control group (p<0.01), while in lower urinary tract infection this difference was insignificant (p<0.05). Conclusion: C-reactive protein has a good diagnostic role in differentiating upper and lower urinary tract infection. The sample size of our study is too small to state this definitively; hence, further studies with a larger number of cases are required. abstract_id: PUBMED:32775477 Antibiotic prophylaxis is not associated with reduced urinary tract infection-related complications after cystectomy and ileal conduit. Objectives: The majority of complications after ileal conduit urinary diversion with cystectomy are related to urinary tract infections (UTIs). Controversy exists regarding use of prophylactic antibiotics after surgery. We determined if prophylactic antibiotic use during ureteral stent placement after ileal conduit urinary diversion decreased incidence of UTI-related complications. Methods: We retrospectively identified 75 consecutive patients who underwent ileal conduit urinary diversion with cystectomy at our institution from 2010 to 2016. Patients were stratified based on presence or absence of a UTI-related complication in the 90-day postoperative period. Means were compared with independent t-test and proportions with chi-square analysis. Multivariate logistic regression was performed to determine independent predictors of UTI-related complications. Results: Forty-five patients (60%) were prescribed prophylactic antibiotics after surgery. Mean duration of antibiotic use was 15 d, and mean duration of ureteral stenting was 25 d. Most common antibiotics used included fluoroquinolones (n = 23, 30.7%) followed by sulfamethoxazole-trimethoprim (n = 14, 18.7%). Rate of 90-day UTI-related complications was 36% (n = 27), and 90-day UTI-related readmission rate was 14.7% (n = 11). On bivariate and multivariate analysis, prophylactic antibiotic use was not associated with reduced 90-day UTI-related complications (P > 0.05).
Patients prescribed prophylactic antibiotics had increased incidence of Clostridium difficile infections in the 90-day postoperative period compared to controls (20% vs. 3.3%; P = 0.038). Conclusions: Prophylactic antibiotic use after ileal conduit urinary diversion with cystectomy was not associated with reduced UTI-related complications, and rate of Clostridium difficile infections was higher in this patient cohort. The effect of early removal of ureteral stents on UTI risk still has to be elucidated. abstract_id: PUBMED:3765885 Comparative results of ileal conduit and colonic conduit. Analysis of 50 children with bladder exstrophy During the last 20 years 50 children with exstrophy of the bladder were treated in the Department of Paediatric Surgery of the Children's Hospital of Cologne employing besides other surgical methods such as primary closure, ureterosigmoideostomy, ureterocutaneostomy etc., in 15 cases an ileal conduit and in 12 cases a colonic conduit. These children could be followed up on the average 8.5 or 3 years after the operation, clinically, roentgenologically and in some cases via scintigraphy. Late complications requiring surgical correction, such as stomatostenoses, conduit elongation, stenoses of the ureterointestinal anastomosis, calculus formation in the conduit, or complications like ureteral reflux, recurring infections of the urinary passages with pyelonephritis, occurred only with ileal conduits, whereas no late complications requiring surgery were seen with the colonic conduits. Similar results in respect of late complications were found among the patients in Munich from 1955 to 1983 with 35 exstrophies of the bladder (13 ileal conduits, 1 colonic conduit) in which additionally an adeno-carcinoma was seen after ureterosigmoideostomy with fatal outcome. Hence, we are of the opinion that the method of choice is the preparation of a colonic conduit in patients with exstrophy of the bladder where primary closure is not possible because the bladder lamina is too small or already epithelialised. This approach offers the safest possible long-term protection of the primary normally positioned upper urinary tract. Answer: The use of colonic conduits in children for urinary diversion has been evaluated in terms of its long-term effects on protecting the upper urinary tract. A study with a mean follow-up of 16.3 years involving 105 patients who underwent colonic conduit creation for various reasons, including neurogenic bladder and bladder exstrophy, showed that early and late stenosis at the ureterocolic anastomosis occurred in 7.6% and 15.5% of cases, respectively. Renal calculi developed in 8.2% of the renal units, and eight kidneys were removed due to complications such as recurrent pyelonephritis, calculi, or stenosis at the ureterocolic anastomosis. However, compared to preoperative status, only 3 of the remaining 151 renal units had increased dilatation of the upper urinary tract at the last follow-up. The study concluded that a colonic conduit with the option of continent conversion should be created before deterioration of the upper urinary tract when incontinent urinary diversion is necessary in children (PUBMED:8709335). Another study on the Mainz experience with colonic conduit urinary diversion in children and adolescents with neurogenic bladder reported that the upper urinary tracts remained stable or improved in 97 out of 99 renal units at the latest follow-up, with an average follow-up of 21.8 years. 
The study suggested that for patients with chronic renal failure or who are unable to perform catheterization of a continent stoma, the colonic conduit is a safe alternative in the long run (PUBMED:15864655). In contrast, a retrospective study of 47 children who underwent colonic conduit urinary diversion reported an overall complication rate of 81%, with rates of stomal stenosis at 34% and upper tract dilatation at 36%. These results were similar to the complication rate for the ileal conduit, leading to the suggestion that all forms of urinary diversion have a high long-term complication rate (PUBMED:6652429). In summary, the colonic conduit has been shown to be a viable option for protecting the upper urinary tract in children requiring urinary diversion, with long-term follow-up indicating stable or improved conditions in the majority of cases. However, complications such as stenosis and renal calculi can occur, and the overall complication rates are comparable to those of ileal conduits.
Instruction: Is there a relationship between light touch-pressure sensation and functional hand ability? Abstracts: abstract_id: PUBMED:17453977 Is there a relationship between light touch-pressure sensation and functional hand ability? Purpose: To investigate and compare the level of light touch-pressure sensation as tested via the Semmes Weinstein monofilament (SWM) test with the level of functional hand ability. Methods: Twenty-seven persons with isolated sensory deficit due to leprosy and 31 healthy controls were tested in the Occupational Therapy department of a hospital for patients with Hansen's disease. Palmar light touch thresholds were determined by SMW testing. Functional hand ability was tested via the Jebsen-Taylor Hand Function Test (JTHFT) and the Functional Dexterity Test (FDT). All participants were measured by manual muscle testing (MMT) to exclude any motor impairment. Data analysis compared sensory thresholds and level of functional hand ability between the two groups and examined the relationship between the variables. Results: In the group with sensory deficit, the sensory thresholds were significantly higher than in the control group. Significant correlations were found between the sensory thresholds measured by the SWM test and the FDT and JTHFT scores, with higher correlations found for tasks entailing manipulation of small objects. Conclusions: The findings support the existence of a relationship between sensory light touch thresholds tested by the Semmes Weinstein monofilaments (SWMs) and hand function. However, the SWM test alone is not sufficient as an indicator of hand function and must therefore be supplemented with other hand function tests. abstract_id: PUBMED:25150666 Relationship between touch sensation of the affected hand and performance of valued activities in individuals with chronic stroke. Objective: To investigate the association between touch sensation of the affected hand and performance and satisfaction with performance of valued activities in individuals with chronic stroke. Methods: Using a cross-sectional study design, this study correlated factors related to hand sensation and activity performance in individuals with chronic stroke. The Touch Test Evaluators and Canadian Occupational Performance Measure (COPM) were used. Correlations were used to determine the relationships between touch sensation of the affected hand and individuals' performance and satisfaction with performance of valued activities. Results: There was a good to excellent relationship between sensation and performance and satisfaction with performance of valued activities for individuals with intact touch sensation of the affected hand who scored higher on the COPM. There was little to no relationship between touch sensation of the affected hand and performance of valued activities for individuals with impaired sensation. Conclusion: This is the first study to relate touch sensation of the affected hand and performance and satisfaction with performance of valued activities in individuals with stroke. The findings suggest that rehabilitation therapists need to continue to address sensory function in evaluation and intervention as it relates to performance in valued activities. This study serves as a foundation for future research in sensation and performance of valued activities in individuals with chronic stroke. abstract_id: PUBMED:22672953 Upper extremity function and its relation with hand sensation and upper extremity strength in patients with multiple sclerosis. 
The purpose of this study was to investigate the relationships between the upper extremity functions, upper extremity strength and hand sensation in patients with Multiple Sclerosis (MS). Twenty-two patients with MS (mean age: 38.5 ± 8.31 years, median Expanded Disability Status Scale (EDSS): 2) and 10 healthy subjects were included. Upper extremity function was measured with the Nine-hole peg test, upper extremity strength (shoulder flexion-abduction, elbow flexion, pinch and grip) with hand-held dynamometer, hand grip dynamometer and manual pinch meter, threshold of light touch-pressure with Semmes-Weinstein monofilament, duration of vibration with 128-Hz frequency tuning fork, and distance of two-point discrimination with an aesthesiometer. Strength and functional level of the upper extremity, light touch-pressure, two-point discrimination, vibration sensations of the hand were lower in patients with MS compared with healthy controls (p &lt; 0.05). Light touch-pressure sensation of thumb and index fingers, two-point discrimination of index finger and elbow flexion strength were found to be related with upper extremity function in patients with MS (p&lt; 0.05). These results indicate that the hand sensation, upper extremity strength and function were affected in MS patients. Additionally upper extremity functions seem to be related with light touch-pressure and two-point discrimination sensations of the hand and elbow flexion strength. Upper extremity strengthening and sensorial training of the hand may contribute to the upper extremity function in patients with MS. abstract_id: PUBMED:9493795 The effect of wrist position on testing light touch sensation using the Semmes-Weinstein pressure aesthesiometer: a preliminary study. The objective of the study was to determine the normative values for light touch sensation measured by the Semmes-Weinstein pressure aesthesiometer when the wrist is held in the maximal flexed position and when the wrist is held in the maximal extended position. Currently, normative values exist only for measurements taken when the wrist is held in a neutral position. Thirty-one subjects with no previous hand injury or pathology participated in this preliminary study. Each subject served as his or her own control; i.e., each finger with the wrist in a neutral position served as the control for the same finger when the wrist was positioned in maximal flexion and then in maximal extension. The neutral position was used as the control, because normative values already exist for that position. The results revealed no consistent differences between measurements taken when the wrist was held in a neutral position and those taken with the wrist held in the maximal, or complete, flexed or extended position. This suggests that any difference in sensation found when hands are placed in a provocative position--i.e., when wrists are flexed or extended rather than placed in a neutral resting position--may be due to pathology and not to position. abstract_id: PUBMED:18190502 When touch elicits cold: functional imaging of an abnormal sensation. The authors report a 58-year-old man who presented with touch-induced cold sensations and loss of pain and normal temperature sensation in the left leg and trunk. He was found to have syringomyelia at the level of the third thoracic vertebrae. A functional magnetic resonance imaging (fMRI) study was performed to evaluate the underlying cortical processing of the touch-induced cold sensations. 
Blood oxygen level-dependent(BOLD) responses were measured in response to light tonic pressure stimuli and to cold stimuli applied to both the symptomatic area on the left leg and to the unaffected contralateral side. Tonic pressure applied on the unaffected right thigh and the left hand resulted in activations of contralateral primary (S1) and bilateral secondary somatosensory cortices (S2). In contrast, application of tonic pressure to the affected left thigh, ie, where touch stimuli elicited cold sensations, induced robust activations not only in contralateral S1 and bilateral S2, but also in bilateral insular cortices, which are cortical areas known to be pivotally involved in the central processing of cold. The fMRI data argue in favor of supraspinal cross-submodal plastic changes within the somatosensory system. abstract_id: PUBMED:2350211 Evaluating sustained touch-pressure in severe sensory deficits: meeting an unanswered need. Sensation is critical to the functional abilities of the hand. Although evaluation techniques are available to measure a patient's ability to appreciate moving touch (MT), no test is available to assess appreciation of sustained touch-pressure (STP). A new device, designed to apply constant stimuli of known intensity, was developed and used to evaluate STP appreciation in an experimental and a control group. The experimental group consisted of six patients with severe sensory deficits due to cortical dysfunction. Each subject was evaluated for STP appreciation on the tip of the index finger of the affected side of the body. The results showed that five of the patients had some STP appreciation; the sixth had none. Furthermore, in the patient group, STP appreciation faded with time; appreciation was much greater four and six seconds after the stimulus onset than it was after 18 to 20 seconds. In the control group the sensation did not fade away. The lowest weight tested (23 g) was seldom detected by three of the five patients although it was readily sensed by all healthy individuals. An unexpected finding was that the 350-g weight was more difficult to appreciate for three of the five subjects than were the 150-g and 250-g weights. This experiment demonstrated that STP, an important component of sensation, can be evaluated quickly and accurately in the clinical environment and that the magnitude of the deficit in STP is correlated with the severity of the disability in the patient seeking physiotherapy. abstract_id: PUBMED:25298320 A neural interface provides long-term stable natural touch perception. Touch perception on the fingers and hand is essential for fine motor control, contributes to our sense of self, allows for effective communication, and aids in our fundamental perception of the world. Despite increasingly sophisticated mechatronics, prosthetic devices still do not directly convey sensation back to their wearers. We show that implanted peripheral nerve interfaces in two human subjects with upper limb amputation provided stable, natural touch sensation in their hands for more than 1 year. Electrical stimulation using implanted peripheral nerve cuff electrodes that did not penetrate the nerve produced touch perceptions at many locations on the phantom hand with repeatable, stable responses in the two subjects for 16 and 24 months. Patterned stimulation intensity produced a sensation that the subjects described as natural and without "tingling," or paresthesia. 
Different patterns produced different types of sensory perception at the same location on the phantom hand. The two subjects reported tactile perceptions they described as natural tapping, constant pressure, light moving touch, and vibration. Changing average stimulation intensity controlled the size of the percept area; changing stimulation frequency controlled sensation strength. Artificial touch sensation improved the subjects' ability to control grasping strength of the prosthesis and enabled them to better manipulate delicate objects. Thus, electrical stimulation through peripheral nerve electrodes produced long-term sensory restoration after limb loss. abstract_id: PUBMED:2135498 Experimental studies on the transitional characteristics from touch sensation to pressure sensation in the mechanoreceptive sensation of periodontium Many studies have made on the physiological significance of the periodontium in relation to oral functions. But, although its importance to oral functions has been pointed out, few investigations have been carried out on the periodontal pressoreceptive information system. To help rectify this situation, as a link in investigations of soft-food textures, the author attempted to determine how a sense of pressure was perceived when loads gradually increasing from very small were applied to the tooth surface. In addition, the author attempted to clarify the way the periodontium discerns differences between initial and secondary loadings in a range of small loads between the touch sensation and the pressure sensation. Subjects were 10 males with sound, natural dentition and no disorders in mandibular functions. In order to eliminate intervention from pressoreceptors in the masticatory muscles and the mandibular joints, the upper first premolar was chosen as the test tooth. Small loads were applied in 2 directions: parallel to the first premolar vertical axis (vertical) and from the lingual to the buccal sides parallel to the occlusal plane (horizontal). For vertical loading, the mesial pit on the occlusal surface was chosen as the loading point. The middle point on the lingual surface was chosen as loading point for horizontal loading. First, loading on the tooth surface was gradually increased to obtain (1) threshold value for touch sensation; that is, minimal load required to incite a touch sensation, and (2) critical value for pressure sensation; that is, minimal load required to incite the pressure sensation subsequent to the touch sensation. Each subject was requested to press a hand switch in the moment he experienced the sensation of being touched and then again when, during increases of load, the touch sensation changed to the pressure sensation. The loading device was built into an electric strain gauge. Loads were increased gradually from 0 g to 60 g, with an increase ratio of about 10 g/sec. Loading wave and switching signal were simultaneously recorded on an oscillograph. Second, in order to determine the ability to discriminate small loads, with the same loading device, initial load was applied to the tooth. This load served as control. As soon as the load was sensed, it was removed. Then the secondary (testing) load was applied. Subjects were requested to say whether the secondary load was larger, smaller than or same as the initial one.(ABSTRACT TRUNCATED AT 400 WORDS) abstract_id: PUBMED:12392338 Development and validation of two new sensory tests of the hand for patients with stroke. 
Objective: To establish validity and reliability of two new sensory tests evaluating moving (MTP) and sustained (STP) touch-pressure and their relationship to hand function for patients with stroke. The STP had four components in which a light or heavy ball was applied passively or held actively. Participants: Twenty-eight participants with hemiparesis (1-109 months) from rehabilitation hospitals or outpatient programmes. Design: Content validity was established. Then reliability and concurrent and construct validity were determined. Sensation was compared with hand functional ability using the Modified Moberg recognition task, the Box and Block test and one task of the TEMPA test. Results: Reliability: Both types of reliability were significant for MTP (ICC = 0.92) and all components of STP (ICC = 0.62 to 0.92). Concurrent validity: MTP and STP correlated significantly (r = -0.39 to -0.83) with the Semmes-Weinstein Monofilament test used as a 'gold standard' comparison. Construct validity: MTP was significantly related to Moberg (r = 0.49). Three of four components of STP correlated with TEMPA (r = 0.49-0.53) and Moberg (0.45-0.71). Only STP (heavy ball) correlated with Box and Block (0.42-0.48). Of the four components of STP, only the passive STP (light ball) was not related to hand function. Conclusion: The new sensory tests of moving, and three components of sustained touch-pressure were reliable. The passive STP (light ball) was discarded. All but this test were relevant to the two functional roles of sensation: exploration for MTP and dexterity during holding for STP. abstract_id: PUBMED:32162568 Sensory retraining improves light touch threshold of the paretic hand in chronic stroke survivors: a single-subject A-B design. Background: Light touch, one of the primary and basic sensations, is often neglected in sensory retraining programmes for stroke survivors.Objective: This study aimed to investigate the effects of sensory retraining on the light touch threshold of the hand, dexterity and upper limb motor function of chronic stroke survivors.Methods: Five chronic stroke survivors with sensory impairment participated in this single-subject A-B design study. In baseline (A) phase, they only received standard rehabilitation. In the treatment (B) phase, they received a 6-week sensory retraining intervention in addition to standard rehabilitation. In both phases, they were evaluated every 3 days. Light touch threshold, manual dexterity and upper limb motor function were assessed using Semmes-Weinstein Monofilaments, Box-Block Test and Fugl-Meyer Assessment, respectively. Visual analysis, nonparametric Mann-Whitney U test and, c-statistic were used for assessing the changes between phases.Results: All participants indicated changes in trend or slope of the total score of light touch or both between the two phases. The results of the c-statistic also showed the statistical difference in the total score of light touch between baseline and treatment in all participants (p &lt; 0.001). Also, the results of the c-statistic and Mann-Whitney U test supported the difference of manual dexterity and motor function of the upper limb between baseline and treatment in all participants (p &lt; 0.001).Conclusion: Current findings showed that sensory retraining may be an effective adjunctive intervention for improving the light touch threshold of the hand, dexterity and upper limb motor function in chronic stroke survivors. 
Answer: Yes, there is a relationship between light touch-pressure sensation and functional hand ability. Studies have found significant correlations between sensory thresholds measured by the Semmes Weinstein monofilament (SWM) test and hand function tests such as the Functional Dexterity Test (FDT) and the Jebsen-Taylor Hand Function Test (JTHFT), particularly for tasks that involve the manipulation of small objects (PUBMED:17453977). Additionally, in individuals with chronic stroke, there is a good to excellent relationship between intact touch sensation of the affected hand and performance and satisfaction with performance of valued activities (PUBMED:25150666). In patients with Multiple Sclerosis (MS), hand sensation, upper extremity strength, and function were affected, and upper extremity functions were found to be related to light touch-pressure and two-point discrimination sensations of the hand and elbow flexion strength (PUBMED:22672953). Furthermore, sensory retraining has been shown to improve the light touch threshold of the paretic hand in chronic stroke survivors, which also improved manual dexterity and upper limb motor function (PUBMED:32162568). These findings suggest that sensory function is an important factor to consider in the evaluation and intervention of hand function, particularly in rehabilitation settings.
Instruction: Are there pulmonary vascular or respiratory effects of prostaglandin vehicles (ethanol, glycine buffer) after intravenous infusion or inhalation? Abstracts: abstract_id: PUBMED:17729206 Are there pulmonary vascular or respiratory effects of prostaglandin vehicles (ethanol, glycine buffer) after intravenous infusion or inhalation? Background: Infusion or inhalation of prostaglandins PGE1 and PGI2 is used to reduce vascular resistance. PGE1 and PGI2 are dissolved in ethanol and glycine buffer, respectively. Each vehicle may cause dose-dependent haemodynamic and respiratory side effects. This study was performed to examine the role of low-dose ethanol and glycine buffer as used for the infusion and inhalation of PGE1 and PGI2. Methods: Two groups of pigs (ethanol and glycine group, n = 9 each) were anaesthetised and ventilated mechanically. Ethanol was given at 0.14 mg/kg/min by infusion and 0.12 mg/kg/min as aerosol, glycine buffer was infused at 3.8 microg/kg/min and inhaled at 3.1 microg/kg/min, respectively. Haemodynamic and respiratory data were recorded before and after application. Results: Neither infusion nor inhalation of ethanol or glycine buffer caused significant changes in systemic and pulmonary haemodynamics, right heart function, oxygenation or ventilation. Conclusions: The effect of inhaled or intravenously infused PGE1 and PGI2 is not influenced by their preparations containing ethanol or glycine buffer. abstract_id: PUBMED:8844230 Combined effects of NO inhalation and intravenous PGF2 alpha on pulmonary circulation and gas exchange in an ovine ARDS model. Objectives: Inhalation of nitric oxide (NO) selectively dilates pulmonary vessels in well-ventilated regions. Prostaglandin F2 alpha (PGF2 alpha) is a vasoconstrictor and is reported to enhance hypoxic pulmonary vasoconstriction. The objective of this study was to examine whether the combination of intravenous PGF2 alpha and inhaled NO in ARDS lungs has a beneficial effect on oxygenation. Design: We investigated the effect of intravenous PGF2 alpha infusion (0.05-10.0 micrograms/kg per min) with and without NO inhalation (60 ppm) on the hemodynamics and gas exchange in an ovine ARDS model, examining the pulmonary artery pressure versus the flow plot by varying cardiac output. Measurements And Results: After lung lavage, NO inhalation reduced the mean pulmonary arterial pressure (MPAP) by decreasing the zero-flow pressure intercept from 10.6 +/- 3.8 (mean +/- SD) to 8.5 +/- 3.8 mmHg (p &lt; 0.05) with no significant change in slope. NO inhalation improved PaO2 from 56 +/- 12 to 84 +/- 38 mmHg (p &lt; 0.005) and reduced pulmonary shunt from 65 +/- 5 to 53 +/- 8% (Qs/Qt) (p &lt; 0.001). The dose-dependent effects of PGF2 alpha infusion were: (1) increased MPAP attributed to an increased slope in pulmonary artery pressure-flow plot; (2) decreased cardiac index; (3) decreased Qs/Qt with unchanged PaO2. The dose-dependent decrease in Qs/Qt after PGF2 alpha infusion was attributed to the decreased cardiac output. Conclusions: It is suggested that inhalation of NO reduced the critical vascular pressure near alveoli without affecting upstream vessels, while infused PGF2 alpha constricted the larger upstream pulmonary artery vessels without appreciably affecting the critical pressure. Inhalation of NO into well-ventilated lung areas shifted perfusion to well-oxygenated areas, and there was no supplemental shift in blood flow by adding an infusion of PGF2 alpha. 
abstract_id: PUBMED:2507795 Intravenous infusion of irrigating fluids containing glycine or mannitol with and without ethanol. Irrigating fluids consisting of 5% mannitol, 3% mannitol + 1% ethanol, 2.2% glycine and 1.5% glycine + 1% ethanol in water were given by intravenous infusion to seven healthy volunteers at a constant rate of 50 ml./min. over 20 min. The irrigants containing glycine produced hyperkalemia and usually prickling and burning skin sensations and slight nausea. The severity of the symptoms apparently correlated to the blood ammonia level, which increased significantly after the glycine but not after the mannitol infusions. With 5% mannitol, there was a pronounced transient increase of the blood volume and a prolonged hyponatremia. There were no differences between the irrigants in respect to their effects on blood pressure and urine excretion. Ethanol caused no adverse effects and allowed the fluid supplementation to be followed by expired breath tests. abstract_id: PUBMED:7654728 Vasodilator therapy after heart transplantation: effects of inhaled nitric oxide and intravenous prostacyclin, prostaglandin E1, and sodium nitroprusside. Background: Vasodilator therapy is frequently needed to treat pulmonary hypertension after heart transplantation. In the present study, the effects of intravenous sodium nitroprusside, prostacyclin, prostaglandin E1, and inhaled nitric oxide (5, 10, and 20 parts per million) on central hemodynamics, right ventricular function, and pulmonary selectivity were evaluated shortly after heart transplantation. Methods: Hemodynamic measurements were made after surgery in the intensive care unit. The intravenous vasodilators were compared at equipotent infusion rates. Effects of inhaled nitric oxide were measured after 10 minutes inhalation at each dose level. Results: Cardiac output, stroke volume, right ventricular end-diastolic volume, and central filling pressures were highest with prostacyclin (16 +/- 2 ng/kg/min) compared with both prostaglandin E1 (202 +/- 27 ng/kg/min) and sodium nitroprusside (1.0 +/- 0.2 microgram/kg/min). Systemic and pulmonary vascular resistance were lowest with prostacyclin. None of the intravenous vasodilators induced a selective pulmonary vasodilation. In contrast, nitric oxide inhalation induced a selective decrease in pulmonary vascular resistance, with no change in systemic vascular resistance. Cardiac output increased with nitric oxide, whereas mean pulmonary arterial pressure, transpulmonary pressure gradient, and central venous pressure decreased, with the most pronounced effect at an inhaled concentration of 20 parts per million. Conclusions: Prostacyclin is the best choice for intravenous vasodilator therapy after heart transplantation. However, inhaled nitric oxide is the only selective pulmonary vasodilator, which should be used in cases of pulmonary hypertension and severe right ventricular failure associated with systemic hypotension. abstract_id: PUBMED:10515083 Blood ammonia levels after intravenous infusion of glycine solution with and without ethanol. Objective: Absorption of glycine 1.5% during transurethral resection of the prostate may increase blood ammonia levels, but hyperammonaemia has not been described when the fluid also contained ethanol 1%. The aim of this experimental study was to evaluate whether ethanol 1% reduces glycine-induced hyperammonaemia. 
Material And Methods: Two intravenous infusions of glycine solution with and without ethanol 1% added were given on different occasions to 20 male volunteers (mean age 30 years). Half of them received 22 g of glycine over 50 min and the others approximately 18 g over 30 min. Blood ammonia was measured before and 30 min after the infusion. The serum levels of free amino acids were measured on 7 occasions during 10 of the experiments. Results: The glycine infusions increased blood ammonia levels from 37 micromol/l (median, 10th and 90th percentile limits 34-53) to 57 micromol/l (27-110; p &lt; 0.001). The change was greater after the larger glycine dose, regardless of whether the fluid contained ethanol (p &lt; 0.05). The only amino acid concentration correlating with blood ammonia was glycine, which showed higher levels in those who had a rise in blood ammonia of 50% or more. Conclusions: Ethanol 1% did not reduce the increase in blood ammonia concentration after the administration of glycine solution. abstract_id: PUBMED:3196723 The metabolism of prostaglandin D2 after inhalation or intravenous infusion in normal men. Tritium-labelled prostaglandin D2 (PGD2) was administered to normal volunteers by either intravenous infusion or inhalation in order to establish which metabolites of PGD2 are initially found in human plasma. Inhaled PGD2 was rapidly absorbed from the airways, as indicated by the rapid appearance of tritium in the plasma. Metabolites chromatographically similar to 9 alpha,11 beta-PGF2 and 13,14-dihydro-15-keto-9 alpha,11 beta-PGF2 were found after both routes of administration. At later time points, other unidentified compounds were present. Only after intravenous infusion was there evidence of metabolites with 9 alpha,11 alpha stereochemistry of the ring hydroxyl functions. In human lung, 9 alpha,11 beta-PGF2 was metabolized in the presence of NAD+ to compounds tentatively identified by gas chromatography/mass spectrometry (GC/MS) as 15-keto-9 alpha,11 beta-PGF2 and 13,14-dihydro-15-keto-9 alpha,11 beta-PGF2. Thus, after 11-ketoreductase-dependent metabolism of PGD2 to the biologically active compound 9 alpha,11 beta-PGF2, further metabolism probably proceeds by the combined action of 15-hydroxyprostaglandin dehydrogenase/15-ketoprostaglandin-delta 13-reductase (15-PGDH/delta 13R). Both 9 alpha,11 beta-PGF2 and its 13,14-dihydro-15-keto metabolite may be useful analytes for the measurement of PGD2 turnover, and may therefore prove to be important in understanding the pathophysiological significance of this putative mediator. abstract_id: PUBMED:9523370 Effects of inhalation of nitroglycerin on hypoxic pulmonary vasoconstriction. Recent studies suggest that nitric oxide (NO) may play an important role in the pathophysiology of pulmonary hypertension. Nitroglycerin is metabolized to NO, which is a potent vascular smooth muscle relaxant. The aim of the present study was to compare the effects of inhaled and infused nitroglycerin on pulmonary hemodynamics and gas exchange in anesthetized, artificially ventilated dogs. Nitroglycerin was administrated either by inhalation or by infusion. Systemic blood pressure (SBP), pulmonary arterial pressure (PAP), and pulmonary capillary wedge pressure (PCWP) were measured, and cardiac output was estimated by an electromagnetic flowmeter. Blood gas measurements were performed during hypoxic gas exposure (FiO2; 0.1) with a continuous inhalation or infusion of nitroglycerin (1, 2.5 micrograms/kg/min). 
Inhaled (n = 4) and infused (n = 4) nitroglycerin (1 microgram/kg/min) did not produce any detectable effects on the hemodynamics. Inhaled nitroglycerin (2.5 micrograms/kg/min) reduced SBP, PAP and calculated pulmonary vascular resistance (PVR) in all dogs. Cardiac output did not change. In addition, inhaled nitroglycerin increased PaO2. In contrast, the continuous infusion of nitroglycerin (2.5 micrograms/kg/min) did not change PAP, whereas infused nitroglycerin decreased the mean SBP. Infused nitroglycerin did not alter cardiac output and calculated PVR. A decreased PaO2 was noted in 2 dogs on nitroglycerin infusion. These findings indicate that inhaled nitroglycerin affects the pulmonary circulation relatively more than infused nitroglycerin, which tends to have more of a general effect on the systemic circulation. The effects of inhaled nitroglycerin may thus be comparable to the effects of NO inhalation. abstract_id: PUBMED:11353932 Atrial natriuretic peptide infusion and nitric oxide inhalation in patients with acute respiratory distress syndrome. Aim: To study the effects of infusion of atrial natriuretic peptide (ANP) versus the inhalation of nitric oxide (NO) in patients with an early acute respiratory distress syndrome (ARDS). Methods: Ten patients with severe ARDS were studied in a crossover study design, within 72 hours after starting mechanical ventilation. We studied the effects of ANP infusion (10 ng/kg/min for 1 hour) and of inhalation of NO (20 ppm for 1 hour) on hemodynamic and respiratory patient parameters, as well as the effects on plasma levels of ANP, guanosine 3',5'-cyclic monophosphate, nitrate and endothelin-1. Results: Despite an approximate 50% increase in mixed venous ANP plasma concentration (from 86 +/- 21 to 123 +/- 33 ng/l, P < 0.05) during ANP infusion, there were no changes in mean pulmonary artery pressure, pulmonary vascular resistance index, extravascular lung water index, or in pulmonary gas exchange. NO inhalation, in contrast, lowered mean pulmonary artery pressure (from 26 +/- 1.9 to 23.9 +/- 1.7 mmHg, P < 0.01), pulmonary vascular resistance index (from 314 +/- 37 to 273 +/- 32 dynes/cm5/m2, P < 0.05) and central venous pressure (from 8.2 +/- 1.2 to 7.3 +/- 1.1 mmHg, P < 0.02). Furthermore, NO inhalation improved pulmonary gas exchange, reflected by a decrease in alveolar-arterial oxygen gradient (from 41.9 +/- 3.9 to 40.4 +/- 3.6 kPa, P < 0.05), a small increase in oxygenation (PaO2/FiO2 from 17.7 +/- 1.4 to 19.7 +/- 1.1 kPa, P = 0.07) and a small decrease in venous admixture (Qs/Qt from 35.7 +/- 2.0 to 32.8 +/- 2.7%, P = 0.11). Conclusion: This study shows that, in contrast to NO inhalation, infusion of ANP neither improves oxygenation nor attenuates pulmonary hypertension or pulmonary edema in patients with severe ARDS. abstract_id: PUBMED:9620900 Cardiopulmonary effects of aerosolized prostaglandin E1 and nitric oxide inhalation in patients with acute respiratory distress syndrome. Ten patients with acute respiratory distress syndrome (ARDS) received in random order nitric oxide (NO) inhalation, aerosolized prostaglandin E1 (PGE1), infusion of PGE1, or no intervention. Inhalation of either aerosolized PGE1 (10 +/- 1 ng/kg/min) or NO (7 +/- 1 ppm) reduced pulmonary vascular resistance (PVR) from 158 +/- 14 to 95 +/- 11 dyn . s/cm5/m2 (NO) and 100 +/- 12 dyn .
s/cm5/m2 (aerosolized PGE1), and improved PaO2 from 78 +/- 3 to 96 +/- 5 mm Hg (NO) and 95 +/- 4 mm Hg (aerosolized PGE1) (p < 0.05), venous admixture (Q VA/Q T) from 45 +/- 2 to 36 +/- 2% (NO), and 36 +/- 2% (aerosolized PGE1) (p < 0.05), oxygen delivery (DO2) from 711 +/- 34 to 762 +/- 45 ml/min/m2 (NO) and 780 +/- 46 ml/min/m2 (aerosolized PGE1) (p < 0.05), and right ventricular ejection fraction (RVEF) from 32 +/- 6 to 37 +/- 5% (NO), and 36 +/- 4% (aerosolized PGE1) (p < 0.05) at a constant cardiac index (CI). Although infusion of PGE1 (12 +/- 1 ng/kg/min) caused a similar reduction in PVR as aerosolized PGE1 and NO inhalation, it improved RVEF and increased CI but decreased Q VA/Q T and PaO2. These results suggest that in ARDS patients inhalation of aerosolized PGE1 or NO in low concentrations equally improves PVR and gas exchange by selective vasodilation in ventilated areas. abstract_id: PUBMED:6616307 Effect of intravenous infusion on development of pulmonary oedema after inhalation injury. Last year, after a systematic search of pathological and patho-physiological changes of inhalation injury in dogs, we recorded severe pulmonary oedema as early as 2 h postburn (the earliest monitoring time designed in experiment) on dogs inflicted with both body surface and severe respiratory burns without any fluid replacement during the whole course of experiment. In an analysis of cases admitted into our burn centre in the past ten years, we also found that there was no definite relationship between the incidence and severity of pulmonary oedema occurring in severe burn patients and the amount of fluid infused in the early burn phase. These facts led us to the following experiment designed to study the effect of intravenous infusion on development of pulmonary oedema after severe burns complicated with inhalation injuries. Answer: According to the study with PUBMED:17729206, neither infusion nor inhalation of the prostaglandin vehicles ethanol or glycine buffer caused significant changes in systemic and pulmonary hemodynamics, right heart function, oxygenation, or ventilation. This suggests that the pulmonary vascular or respiratory effects of these vehicles, when used at the doses studied for the infusion and inhalation of prostaglandins PGE1 and PGI2, are minimal or non-existent. Therefore, the effect of inhaled or intravenously infused PGE1 and PGI2 is not influenced by their preparations containing ethanol or glycine buffer.
Instruction: Is glaucoma comprehension associated with psychological disturbance and vision-related quality of life for patients with glaucoma? Abstracts: abstract_id: PUBMED:24861547 Is glaucoma comprehension associated with psychological disturbance and vision-related quality of life for patients with glaucoma? A cross-sectional study. Objective: To investigate whether understanding of glaucoma is associated with psychological disturbance and vision-related quality of life (QoL) in patients with glaucoma. Design: A prospective, hospital-based, cross-sectional study. Setting: Shanghai, China. Participants: Five hundred patients with glaucoma were included in the study. Participants were asked to complete the glaucoma comprehension questionnaire, the Hospital Anxiety and Depression Scale (HADS) questionnaire, and the 25-item National Eye Institute Visual Functioning Questionnaire (NEI VFQ-25). Outcome Measures: Sociodemographic data and clinical data were collected. Multivariate adjusted linear regression analyses were performed to examine factors related to psychological disorder and QoL. The relationship between understanding of glaucoma and psychological status or QoL was analysed. Results: The mean age of patients was 56.5±17.2 years, and the mean duration of glaucoma was 4.6±6.3 years. About half (54.2%) of the patients had primary open angle glaucoma, 37.2% had primary angle closure glaucoma, and 8.6% had secondary glaucoma. The total score and the subscores of knowledge of glaucoma, compliance and habits were all negatively correlated with the HADS-Depression and HADS scores. For NEI VFQ-25, the total comprehension score was positively related to general vision, near activity, distance activity, social function, colour vision and the composite score. Habits had a significant influence on both HADS-Depression and the composite score of NEI VFQ-25. Conclusions: The level of understanding about glaucoma is an independent factor negatively associated with psychological disturbance and positively associated with QoL in patients with glaucoma. However, it is a subjective factor which can be regulated, so imparting knowledge about glaucoma and establishing appropriate habits might help patients alleviate psychological disturbance and enhance their QoL. abstract_id: PUBMED:36404962 Vision-related quality-of-life in Jamaican glaucoma patients at Kingston Public Hospital. Aim: To determine the vision-related quality-of-life of glaucoma patients and the association between clinical and socioeconomic factors, and vision-related quality-of-life. Methods: This was a cross-sectional study. Consecutive patients with glaucoma were interviewed using a modified 25-item National Eye Institute Visual Function Questionnaire (VFQ-25) by a single interviewer. Statistical analysis was done to find associations between patient variables and vision-related quality-of-life scores. Confidentiality and anonymity were maintained. Results: Ninety-six participants were recruited in the study. There were 44 males and the mean age for males and females was 65.7 and 69.5y, respectively. The mean composite score was 71.2 (with a maximum possible score of 100), with the highest mean score in the colour vision subscale (89.8) and the lowest mean score in the driving subscale (34.0). Worse visual acuity (P&lt;0.001), longer duration of glaucoma (P&lt;0.001) and higher number of glaucoma medications (P&lt;0.001) were associated with a worse composite score. 
Female participants and those who lived in urban areas had significantly better scores than male participants (P=0.002) and those who lived in rural areas (P=0.017), respectively. Conclusion: The vision-related quality-of-life in Jamaican glaucoma patients is comparable to that of glaucoma patients in the Barbados Eye Study and other international studies using the VFQ-25 questionnaire. Worse quality-of-life scores are associated with poorer visual acuity, longer duration of glaucoma, more glaucoma medications, and sociodemographic factors such as male gender and rural residence. abstract_id: PUBMED:34764722 Effect of red ginseng on visual function and vision-related quality of life in patients with glaucoma. Background: Red ginseng has been found to improve ocular perfusion and dry eye syndrome in glaucomatous eyes; however, its effects on visual function and vision-related quality of life have not been investigated. This study sought to evaluate the effects of red ginseng on visual function and vision-related quality of life in glaucoma patients using contrast sensitivity and a questionnaire. Methods: Participants were randomly assigned to two groups in this prospective, randomized, double-blind study: in one group, red ginseng was taken first, followed by a placebo, and in the other, placebo was taken first, followed by red ginseng. We measured and compared changes in contrast sensitivity and vision-related quality of life between the two groups. Contrast sensitivity was measured using OPTEC® 6500P, and vision-related quality of life was evaluated using the 25-item National Eye Institute Visual Function Questionnaire. One-way and two-way repeated measure analyses of variance were used for the comparison. Relationships between respective changes in dry eye syndrome and contrast sensitivity were also analyzed. Results: Daytime contrast sensitivity and ocular pain improved after the administration of red ginseng. Nighttime contrast sensitivity was improved in early or moderate glaucoma. Improved contrast sensitivity was not associated with improvement in dry eye syndrome. Conclusion: Red ginseng could improve contrast sensitivity and ocular pain in patients with glaucoma. The mechanism underlying improvement in contrast sensitivity appears to be associated with enhanced retinal perfusion or retinal ganglion cell function, but not dry eye syndrome. abstract_id: PUBMED:35737419 Vision-Related Quality of Life and Seasonal Affective Disorder in Patients with Glaucoma and Macular Degeneration. Seasonal affective disorder (SAD) is characterized by depressive episodes related to changes in the seasons. Patients with severe vision loss are at an increased risk of SAD. This study seeks to determine the extent to which patients with moderate vision loss report symptoms of SAD. In this cross-sectional, comparative case series, the Seasonal Pattern Assessment Questionnaire (SPAQ) and the National Eye Institute Visual Function Questionnaire (VFQ-39) were used to screen 111 patients with age-related macular degeneration (AMD) and/or primary open-angle glaucoma (POAG). A multiple regression analysis was performed to create a predictive model for SAD based on the Global Seasonality Score (GSS) using the VFQ-39. Subjects who reported symptoms of SAD (GSS &gt; 8) had lower vision-related quality of life (composite score: 57.2 versus 73.2, p &lt; 0.001). 
Exploratory factor analysis revealed that the items on the VFQ-39 split into two distinct dimensions that together accounted for 63.2% of the total variance in the GSS. One group of questions addressed vision-related problems; the other group comprised questions related to the quality of life. Whereas this model successfully identified patients with vision loss at risk of SAD, a model restricted to the questions available on the shorter, widely used VFQ-25 instrument did not reliably identify patients at risk of SAD. abstract_id: PUBMED:36175783 Vision-related quality of life amongst patients with low vision and blindness in a resource-limited country. Purpose: To evaluate the significance of sociodemographic and clinical variables on the quality of life of low-vision patients in a tertiary hospital in Enugu, Nigeria, and also determine the causative ocular pathologies and their vision-related quality of life. Methods: A cross-sectional study of adults 18 years and above with low vision/blindness. They were consecutively enrolled and clinically evaluated. The WHO-NEI-VFQ-25 questionnaire for quality of life was administered. Descriptive statistics were used to summarize variables. Kruskal-Wallis test, analysis of variance, and post hoc tests were done where appropriate. Results: Of the 400 participants, 81% were above 40 years. Mean age was 57.27 ± 16.34 years. Males (52.0%) and females (48.0%). 225 (56.25%) had Moderately Severe Visual Impairment. Low vision/blindness negatively affected the different domains of quality of life significantly, except ocular pain and dependency. Glaucoma and Cataract were the major disease conditions that affected most of the domains significantly. These conditions when poorly managed, due to poor and limited health resources, progress to low vision/blindness. Of all the socio-demographic factors and clinical variables studied, only age and severity of low vision were significant predictors of quality of life. Conclusion: There is the need for prompt and proper treatment of low-vision patients and advocacy for the establishment of adequate and functional rehabilitation services. Quality of life which measures the overall impact of a disease on an individual was low in people with low vision and blindness. Low-vision services, which include rehabilitation, will provide comprehensive eye-care services and thus improve their quality of life. abstract_id: PUBMED:36698850 Vision-Related Quality of Life and Associated Factors Among an Adult Population with Glaucoma Attending a Comprehensive Specialized Hospital, Northwest Ethiopia. Background: Glaucoma is a group of ocular conditions that leads to irreversible blindness. It can affect the vision-related quality of life in many ways. In Ethiopia, limited information is available on how and to what extent the vision-related quality of life of the glaucoma population has been affected. Objective: This study attempted to assess the vision-related quality of life and associated factors among an adult population with glaucoma attending Felege Hiwot Comprehensive Specialized Hospital, Northwest Ethiopia. Methods: The cross-sectional study was conducted on the adult population with glaucoma attending Felege Hiwot Comprehensive Specialized Hospital. Systematic random sampling was employed to select 319 study participants. Descriptive and summary statistics were calculated. Simple and multiple linear regressions were performed to determine factors associated with vision-related quality of life. 
A p-value of <0.05 in multivariable regression was considered statistically significant at a 95% confidence level. Results: The mean glaucoma quality of life-15 score in this study was 47.85 ± 15.41. Below primary education, primary education, visual acuity of the better eye, visual acuity of the worse eye, and severe glaucoma were significantly associated with the glaucoma quality of life-15 score. Conclusion And Recommendation: The study observed poorer vision-related quality of life in the glaucoma population than previously reported in Ethiopia and around the globe. The vision-related quality of life was significantly associated with educational status, visual acuity of the better and worse eye, and stage of glaucoma in the better eye. Educating the glaucoma population on the nature of the disease, advice on early presentation, and better-coping strategies for the condition are warranted. abstract_id: PUBMED:32172234 Optical Quality Assessed by Optical Quality Analysis System in Chinese Primary Open-Angle Glaucoma Patients and Its Correlations with Psychological Disturbances and Vision-Related Quality of Life. Purpose: To investigate if optical quality assessed by Optical Quality Analysis System (OQAS) is a new factor for psychological disturbance and vision-related quality of life (QoL) in Chinese patients with primary open-angle glaucoma (POAG). Methods: This is a cross-sectional study. OQAS, automated static perimetry, the Hospital Anxiety and Depression Scale (HADS) Questionnaire, and the 25-item National Eye Institute Visual Function Questionnaire (NEI VFQ-25) were used to assess optical quality, visual field, psychological health, and QoL in 64 POAG patients. Correlation between the OQAS parameters and QoL or psychological health was analyzed by a linear regression model. Results: OQAS parameters, the modulation transfer function cutoff frequency (MTF cutoff) and the mean OQAS values (mean OV) were significantly related to mean deviation (MD). Statistically significant correlation was also found between MTF cutoff, the mean OV, and QoL in more than half of the NEI VFQ-25 subscale scores and the composite. The correlation coefficients between OQAS parameters and QoL were similar to those between MD and QoL in most of the subscale scores. HADS-Depression and HADS-Anxiety scores significantly correlated with MD in the worse eye, the composite and 3 subscales of NEI VFQ-25, general health, social function, and mental health, but were not correlated with optical quality parameters measured by OQAS. Conclusions: OQAS-assessed optical quality is negatively related to MD and is a valid indicator for vision-related QoL, but not for psychological health in our study group. OQAS might be an alternative or supplement to perimeter in POAG patients. abstract_id: PUBMED:37451656 Evaluation of effect of COVID-19 pandemic on anatomical and functional changes and vision-related quality of life in patients with glaucoma. Background: To evaluate anatomical and functional changes and vision-related quality of life in patients whose glaucoma follow-up was disrupted by the COVID-19 restrictions. Methods: This retrospective observational study included 100 patients who were followed up at the glaucoma unit.
For the patients whose follow-up evaluations were postponed due to COVID-19 restrictions, visual acuity (VA), intraocular pressure (IOP), retinal nerve fiber layer (RNFL) thickness, visual field parameters [mean deviation (MD), pattern standard deviation (PSD), and visual field index (VFI)], and the National Institute of Ophthalmology Visual Function Scale-25 (NEI-VFQ-25) score were evaluated based on the measurements performed at the last visit before COVID-19 (V1) and at the first visit after the removal of COVID-19 restrictions (V2). Results: The mean age of the patients was 61.1 ± 13.4 years, the mean follow-up time was 11.4 ± 4.2 months, and the mean interval between the last two visits was 7.2 ± 2.7 months. In the evaluation of the last two visits, VA was lower and IOP was higher at V2, and there was progression in the MD, PSD, and VFI values (p<0.05, for all). RNFL thickness progression was seen in 13-23% of the patients. According to the NEI-VFQ-25 evaluation, except for peripheral vision and near vision, all the remaining subscale scores and the total score were lower at V2 (p<0.05, for all). Conclusions: This study demonstrates the devastating impact of the COVID-19 pandemic on the anatomical and functional changes and vision-related quality of life together in patients with glaucoma. abstract_id: PUBMED:30624385 Vision-related Quality of Life in Glaucoma Patients and its Correlations With Psychological Disturbances and Visual Function Indices. Purpose: The purpose of this study was to assess the vision-related quality of life (VR-QoL) in glaucoma patients and its correlations with psychological disturbances and visual function components. Materials And Methods: The 25-item National Eye Institute Visual Functioning Questionnaire (NEI VFQ-25) and Hospital Anxiety and Depression Scale (HADS) questionnaires were administered to 428 Chinese glaucoma patients to evaluate their VR-QoL and anxiety and depression disorders, respectively. Sociodemographical and clinical factors were collected at the same time. Univariate analyses were used to investigate the associations between the variables and the VR-QoL. Multivariate linear regression analyses were used to identify the independent psychological and visual functional predictors of the VR-QoL. Standardized partial regression analyses were used to reveal the variables that were most relevant to the VR-QoL. Results: The composite score (mean±SD) was 71.88±14.44 for NEI VFQ-25 and 13.17±6.56 for HADS. Visual function indices, including best-corrected visual acuity and mean deviation of both eyes in addition to psychological symptoms including anxiety and depression were both correlated with VR-QoL significantly, even after adjusting for sociodemographical and clinical factors. Standardized partial regression analyses further suggested that psychological disorders, especially anxiety rather than visual function components, were most relevant to VR-QoL. Conclusions: Deterioration of vision impairment and visual field defects in addition to increased recognition of psychological disturbances reduce the VR-QoL of glaucoma patients significantly. Alleviating psychological symptoms, especially anxiety, may have a greater influence on the improvement of VR-QoL. abstract_id: PUBMED:37429533 Charles Bonnet Syndrome Adversely Affects Vision-Related Quality of Life in Patients with Glaucoma. Purpose: To investigate the impact of Charles Bonnet syndrome (CBS) on vision-related quality of life (VRQoL) in patients with glaucoma.
Design: Cross-sectional cohort study. Participants: Twenty-four patients with CBS and 42 matched controls without CBS out of 337 patients with open-angle glaucoma (OAG) with visual field (VF) loss. Methods: A matching technique was used to identify control patients with similar disease stage, best-corrected visual acuity (BCVA) and age to patients with CBS. Patients' VRQoL was determined using the National Eye Institute Visual Function Questionnaire 25 (NEI VFQ-25). Rasch-calibrated NEI VFQ-25 scores of the CBS group and the control group were compared. Uni- and multivariable regression analysis was used to evaluate the impact of different factors on VRQoL. Main Outcome Measures: Vision-related quality of life in patients with glaucoma with CBS and without CBS. Results: Vision-related quality of life scores were significantly lower in the CBS group than in the control group on both the visual functioning scale with 39 points (95% confidence interval (CI): 30-48) vs. 52 points (95% CI: 46-58) (P = 0.013) and on the socioemotional scale with 45 points (95% CI: 37-53) vs. 58 points (95% CI: 51-65) (P = 0.015). Univariable regression analysis showed that integrated visual field mean deviation (IVF-MD) (r2 = 0.334, P < 0.001), BCVA in the better eye (r2 = 0.117, P = 0.003), and the presence of CBS (r2 = 0.078, P = 0.013) were significantly correlated to VRQoL scores on the visual functioning scale. Integrated visual field mean deviation (r2 = 0.281, P < 0.001), age (r2 = 0.048, P = 0.042), and the presence of CBS (r2 = 0.076, P = 0.015) were significantly correlated to VRQoL scores on the socioemotional scale. Multivariable regression analysis showed that IVF-MD and the presence of CBS accounted for nearly 40% of the VRQoL score on the visual functioning scale (R2 = 0.393, P < 0.001) and for 34% of the VRQoL score on the socioemotional scale (R2 = 0.339, P < 0.001). Conclusions: Charles Bonnet syndrome had a significant negative association to VRQoL in patients with glaucoma. Presence of CBS should be considered when evaluating VRQoL in patients with glaucoma. Financial Disclosure(s): The author(s) have no proprietary or commercial interest in any materials discussed in this article. Answer: Yes, glaucoma comprehension is associated with psychological disturbance and vision-related quality of life (QoL) for patients with glaucoma. A study conducted in Shanghai, China, found that the level of understanding about glaucoma is an independent factor negatively associated with psychological disturbance and positively associated with QoL in patients with glaucoma. The total score and the subscores of knowledge of glaucoma, compliance, and habits were all negatively correlated with the Hospital Anxiety and Depression Scale (HADS) scores for depression and anxiety. Additionally, for the 25-item National Eye Institute Visual Functioning Questionnaire (NEI VFQ-25), the total comprehension score was positively related to general vision, near activity, distance activity, social function, color vision, and the composite score. This suggests that imparting knowledge about glaucoma and establishing appropriate habits might help patients alleviate psychological disturbance and enhance their QoL (PUBMED:24861547).
Moreover, other studies have shown that vision-related quality of life in glaucoma patients is affected by various factors, including worse visual acuity, longer duration of glaucoma, more glaucoma medications, and sociodemographic factors such as gender and residence area (PUBMED:36404962). Additionally, interventions like the administration of red ginseng have been found to improve contrast sensitivity and reduce ocular pain, which could potentially enhance vision-related QoL in glaucoma patients (PUBMED:34764722). Furthermore, the presence of Charles Bonnet Syndrome (CBS) in patients with glaucoma has been shown to significantly lower vision-related quality of life scores, indicating that psychological factors can have a substantial impact on the well-being of glaucoma patients (PUBMED:37429533). Overall, these findings underscore the importance of comprehensive patient education and psychological support as part of glaucoma management to improve both the psychological well-being and the vision-related quality of life of patients with this condition.
Instruction: Can incorporating inpatient overnight work hours into a pediatric clerkship improve the clerkship experience for students? Abstracts: abstract_id: PUBMED:23348086 Can incorporating inpatient overnight work hours into a pediatric clerkship improve the clerkship experience for students? Purpose: After incorporating medical students into pediatric resident night-float teams, the authors studied the effects of the new schedule on (1) cognitive performance, (2) number of new admissions, (3) clerkship satisfaction, and (4) amount and quality of resident teaching. Method: Part 1 was a retrospective historical controls study. The intervention was a schedule change that eliminated inpatient call. The historical control group had a four-week inpatient schedule of daytime hours plus five calls (DT+C). The comparison group had a schedule of three weeks of daytime hours plus five consecutive overnight shifts (DT+OS). National Board of Medical Examiners (NBME) Pediatrics Subject Exam scores, number of admission history and physicals (HPEs), and clerkship satisfaction data from both groups were compared. Part 2 was a two-item survey with open-ended comments that measured perceptions of resident teaching time and quality of resident teaching (QRT) from students on the DT+OS schedule. Results: DT+OS students had a significantly increased number of HPEs (t=2.17; P=.03) compared with the DT+C group (mean=7.49, standard deviation [SD]=3.34 in DT+OS versus mean=6.11, SD=2.95 in DT+C). The paired samples t test showed that students rated QRT significantly higher when on overnights than when they were on daytime hours (t=2.47; P=.02). There were no differences in satisfaction or NBME scores. Conclusion: Overnight work hours for medical students increased clerkship capacity while maintaining student satisfaction and cognitive performance. Added benefits included increased clinical experience and improved QRT. abstract_id: PUBMED:30739737 All work and no play: Addressing medical students' concerns about duty hours on the surgical clerkship. Background: Given the option of preferencing rotations for a 3rd year core surgery clerkship, we observed students often requested services perceived as less time-intensive. We compare self-reported duty hours with academic outcomes. Methods: We examined duty hours from 165 third-year medical students on a surgery clerkship at a single institution for academic year 2016-2017. Partial correlations and logistic regression modeling were used to assess the number of hours medical students worked on academic outcomes. Results: Medical student duty hours did not significantly correlate with the NBME Surgery Subject examination score (r = 0.08; p = 0.34), CPE score (r = 0.14; p = 0.09) or a clerkship grade of Honors (OR 0.993; CI 0.925-1.065). Prior completion of an internal medicine clerkship was correlated with a higher NBME Surgery Subject examination score (r = 0.27; p < 0.001). Conclusion: This analysis demonstrates duty hours on a surgical clerkship do not correlate with academic performance. These data can be used to counsel students on career planning and choosing surgical rotations based on interest and not perceived workload. abstract_id: PUBMED:36978085 Needs assessment for enhancing pediatric clerkship readiness. Background: Many students report feeling inadequately prepared for their clinical experiences in pediatrics. There is striking variability on how pediatric clinical skills are taught in pre-clerkship curricula.
Methods: We asked students who completed their clerkships in pediatrics, family medicine, surgery, obstetrics-gynecology and internal medicine to rate their pre-clinical training in preparing them for each clerkship, specifically asking about medical knowledge, communication, and physical exam skills. Based on these results, we surveyed pediatric clerkship and clinical skills course directors at North American medical schools to describe the competence students should have in the pediatric physical exam prior to their pediatric clerkship. Results: Close to 1/3 of students reported not feeling adequately prepared for their pediatrics, obstetrics-gynecology, or surgery clerkship. Students felt less prepared to perform pediatric physical exam skills compared to physical exam skills in all other clerkships. Pediatric clerkship directors and clinical skills course directors felt students should have knowledge of and some ability to perform a wide spectrum of physical exam skills on children. There were no differences between the two groups except that clinical skills educators identified a slightly higher expected competence for development assessment skills compared to pediatric clerkship directors. Conclusions: As medical schools undergo cycles of curricular reform, it may be beneficial to integrate more pre-clerkship exposure to pediatric topics and skills. Further exploration and collaboration establishing how and when to incorporate this learning could serve as a starting point for curricular improvements, with evaluation of effects on student experience and performance. A challenge is identifying infants and children for physical exam skills practice. abstract_id: PUBMED:36167525 Students' perception and learning experience in the first medical clerkship. Background: The German clerkship ("Famulatur") is the first phase in medical education, in which students learn from a physician's perspective. According to the German Licensing Regulations for Physicians, students shall "familiarise" with providing care. However, specific learning objectives for the clerkship are not defined, although the acquisition of different competencies is implicitly demanded. Therefore, an additional understanding of the clerkship students' learning experience is needed. The goal of this study is to explore the student's learning perspective and experiences in the clerkship. Methods: Twelve guideline-based interviews were conducted with third year medical students. All participants completed their first clerkship. A qualitative content analysis was performed. The inductively identified categories were transferred into a quantitative questionnaire using a 5-point Likert-scale to explore their relevance in a validation cohort. The questionnaire was completed by 222 clinical students of the Otto-von-Guericke-Universität Magdeburg. Results: The qualitative analysis led to 26 individual items assigned to 4 main categories that describe the clerkship experience: 1) "coping with insecurities", 2) "the clerkship as a social arrangement", 3) "the clerkship as a learning opportunity" and 4) "the clerkship as a teaching opportunity". In the quantitative validation cohort, category one yielded a well-balanced result (median 3 = "neither agree nor disagree"; IQR 2-4), items addressed in categories 2-4 were generally supported by the students, predominantly selecting "strongly agree" or "agree" (Median 2; IQR 1-2 for each category). 
Students rated the role of the clinical team as especially important for their learning success and feared exclusion or negative reactions. Conclusions: The medical clerkship provides an institutional, professional, and social framework, in which students are learning. Insecurities arose from curricular inconsistencies, a high dependency on the clinical team as well as the absence of specific learning objectives. Therefore, a better curricular integration regarding the semester structure and the learning objectives of the German clerkship is needed. abstract_id: PUBMED:36625076 Academic Performance of Clerkship Students Working on a Longitudinal Inpatient Pediatrics Team. Objective: To determine if the academic performance of students who worked on a longitudinal inpatient team in the pediatric clerkship differed from students on traditional teams. We hypothesized that working on the longitudinal team would be associated with improved performance. Methods: We retrospectively identified students who rotated in the pediatric clerkship at a single institution from 2017 through 2021. We used multiple linear and multiple ordered logistic regression to examine whether working on a longitudinal inpatient team in which the majority of students work with the same senior resident and attending for the entire inpatient block and function without interns was associated with improved academic performance. Results: We included data from 463 students, 316 in the longitudinal team group and 147 in the traditional team group. Working on the longitudinal team was associated with a higher inpatient preceptor rating (adjusted mean rating 3, 95% confidence interval [CI] 2.97 to 3.03 vs 2.85, 95% CI 2.81 to 2.90; P = .02; on a scale of 0 to 4) and an increased probability of achieving a higher final grade in the pediatric clerkship (adjusted probability of achieving honors 22%, 95% CI 17% to 28% vs 11%, 95% CI 6% to 16%; P = .003). These differences did not persist in the clerkship immediately after pediatrics. Conclusions: Compared with a traditional inpatient team, working on a longitudinal team was associated with achieving a higher preceptor rating and final pediatric clerkship grade. Implementing similar models within clinical clerkships may help foster optimal student performance. abstract_id: PUBMED:32490386 Wellness Curriculum in the Pediatric Clerkship. Psychological distress leading to burnout is an important issue during medical school. While studies have researched interventions in the pre-clerkship years, very few have targeted the clerkship years. To improve the wellness of third year medical students, the following interventions were implemented: (1) prompted students to identify two wellness goals in the areas of personal and physical well-being and (2) encouraged students to participate in meditation and chair yoga sessions during their pediatrics clerkship. Students completed pre- and post-clerkship wellness surveys. The interventions led to a small but significant improvement in the wellness of students, particularly in mental, physical, emotional, social, and spiritual well-being. Further expansion of a wellness curriculum to all clerkships during the entire third and fourth years may result in reduced burnout and sustained improvements in wellness during post graduate training and practice. abstract_id: PUBMED:26173530 Overnight Hospital Experiences for Medical Students: Results of the 2014 Clerkship Directors in Internal Medicine National Survey.
Background: Since the 2011 Accreditation Council of Graduate Medical Education (ACGME) work hour rules for residents were implemented, 24-30 h call for interns has been replaced by shift work, including night-float. The impact of these changes on undergraduate medical education experiences in internal medicine has not been described. Objective: We aimed to determine the current status of medical students' overnight experiences in Internal Medicine clerkships and sub-internships, and to assess internal medicine educators' perceptions of the importance of overnight work during internal medicine rotations. Design And Participants: In May 2014, the Clerkship Directors in Internal Medicine (CDIM) conducted its annual survey. Twenty-eight questions about student participation in overnight work and perceptions of the importance of overnight work (rated on 1-5 Likert scale, 1 = very unimportant and 5 = very important) were included. Descriptive statistics were used to summarize responses. Free text results were analyzed qualitatively. Key Results: The response rate was 78 %. A minority of respondents reported students having any overnight experience during the clerkship (38.7 %) or the sub-internship (40.7 %). Only 5 % of respondents reported having students assigned to night-float rotations outside of clerkships or sub-internships. Respondents agreed that overnight experiences were more important during the sub-internship than the clerkship, 4.0 ± 1.1 vs. 3.2 ± 1.2, p < 0.001. Admitting new patients, following their course and responding to emergencies were rated as important overnight tasks for both clerkship and sub-internship students. Conclusions: Overnight experiences offer students additional educational opportunities. Clerkship directors felt that the overnight experience for the sub-intern in particular was an important chance to practice providing emergency cross coverage and other intern roles. In the era of ACGME duty hours, there is a need to further examine whether there is a role for increased overnight hospital experiences for medical students. abstract_id: PUBMED:27274332 Physician and medical student perceptions and expectations of the pediatric clerkship: a Qatar experience. Background: The average number of clerkship weeks required for the pediatric core rotation by the US medical schools is significantly lower than those required for internal medicine or general surgery. Objective: The objective behind conducting this survey study was to explore the perceptions and expectations of medical students and pediatric physicians about the third-year pediatric clerkship. Methods: An anonymous survey questionnaire was distributed to all general pediatric physicians at Hamad Medical Corporation and to students from Weill Cornell Medical College-Qatar. Results: Feedback was obtained from seven attending pediatricians (100% response rate), eight academic pediatric fellow physicians (100% response rate), 36 pediatric resident physicians (60% response rate), and 36 medical students (60% response rate). Qualitative and quantitative data values were expressed as frequencies along with percentages and mean ± standard deviation and median and range. A P-value <0.05 from a 2-tailed t-test was considered to be statistically significant.
Participants from both sides agreed that medical students receive <4 hours per week of teaching, clinical rounds is the best environment for teaching, adequate bedside teaching is provided, and that there is no adequate time for both groups to get acquainted to each other. On the other hand, respondents disagreed on the following topics: almost two-thirds of medical students perceive postgraduate year 1 and 2 pediatric residents as the best teachers, compared to 29.4% of physicians; 3 weeks of inpatient pediatric clerkship is enough for learning; the inpatient pediatric environment is safe and friendly; adequate feedback is provided by physicians to students; medical students have accessibility to physicians; students are encouraged to practice evidence-based medicine; and students get adequate exposure to multi-professional teams. Conclusion: Assigning devoted physicians for education, providing proper job description or definition of the roles of medical student and physician in the pediatric team, providing more consistent feedback, and extending the duration of the pediatric clerkship can diminish the gap of perceptions and expectations between pediatric physicians and medical students. abstract_id: PUBMED:16260233 Medical students self-reported work hours: perception versus reality. Objective: The objective of this study was to compare the students' actual work hours with their self-reported work hours during the obstetrics and gynecology clerkship, and to determine whether the number of hours worked correlates with the amount of "scut" reported or students' rating of the quality of the clerkship. Study Design: Students self-reported work hours were compared against their actual scheduled hours over 2 different academic years. Pearson's correlation was performed to correlate the actual hours with the amount of reported "scut" work and the overall rating of the quality of the clerkship. Results: The actual hours per week worked by students averaged 59 hours in 2003 and 48 hours in 2004. Students overestimated their work hours both years. Students who worked more hours rated the clerkship lower and the quality of the clerkship significantly improved from 2003 to 2004 (4.2 vs 3.8, P < .03). Conclusion: The majority of third-year students overestimate their work hours in obstetrics and gynecology. The rating of the overall quality of the clerkship increased significantly with fewer hours worked, and it is not affected by the amount of "scut" work. abstract_id: PUBMED:37857141 Surgical Residents as Clerkship Scholars May Improve Student Perception of the Surgery Clerkship. Introduction: The surgical clerkship is a formative experience in the medical school curriculum and can leave a lasting impression on students' perception of surgery. Given the historical negative stereotypes of surgeons, the clerkship represents an opportunity to impact students in a meaningful way. Methods: Our institution developed a program in which research residents can serve as junior clerkship coordinators and educators, working closely with medical students on their surgery clerkship. At the end of their clerkship, students were administered a survey with Likert-scale and free text responses regarding satisfaction with the rotation, lectures, feedback, and value of the clerkship. Student survey results were compared before (2015-2016) and after (2017-2019) the implementation of the scholar program with nonparametric statistical analysis and qualitative text analysis.
Results: A total of 413 students responded to the survey with no significant difference in response rate by term (P = 0.88). We found no statistical difference with respect to overall course perception (92.3% versus 91.2%, P = 0.84), but a statistically significant difference was noted for the clarity of the provided written clerkship materials (80.3% versus 91.3%, P = 0.02) and usefulness of the feedback (57.5% versus 78.7%, P = 0.01). Qualitative analysis demonstrated an overall positive shift in perception of the clerkship, improvement in the course materials, and organization. Conclusions: The scholar program was overall well received by the students with improvements in certain aspects of the clerkship: organization, feedback, and course materials. This program represents a potential strategy to improve certain portions of the medical school clerkship experience. Answer: Incorporating inpatient overnight work hours into a pediatric clerkship can improve the clerkship experience for students. A study that integrated medical students into pediatric resident night-float teams found that the new schedule increased the number of admission history and physicals (HPEs) that students performed, and students rated the quality of resident teaching (QRT) significantly higher during overnight shifts compared to daytime hours. There were no differences in satisfaction or National Board of Medical Examiners (NBME) Pediatrics Subject Exam scores, suggesting that cognitive performance was maintained. The study concluded that overnight work hours increased clerkship capacity while maintaining student satisfaction and cognitive performance, with the added benefits of increased clinical experience and improved QRT (PUBMED:23348086). This finding is consistent with the notion that overnight hospital experiences offer additional educational opportunities for medical students, as they can practice admitting new patients, following their course, and responding to emergencies, which are important tasks for both clerkship and sub-internship students (PUBMED:26173530). However, it is important to note that the impact of duty hours on academic performance may vary across different clerkships. For instance, in a surgical clerkship, medical student duty hours did not significantly correlate with academic outcomes, suggesting that the relationship between work hours and academic performance is not straightforward and may depend on the context of the clerkship (PUBMED:30739737). Overall, the evidence suggests that incorporating overnight work into a pediatric clerkship can enhance the educational experience without negatively impacting academic performance or student satisfaction.
Instruction: Do fresh osteochondral allografts successfully treat femoral condyle lesions? Abstracts: abstract_id: PUBMED:33100019 Vitrification of Intact Porcine Femoral Condyle Allografts Using an Optimized Approach. Objective: Successful preservation of articular cartilage will increase the availability of osteochondral allografts to treat articular cartilage defects. We compared the effects of 2 methods for storing cartilage tissues using 10-mm diameter osteochondral dowels or femoral condyles at -196°C: (a) storage with a surrounding vitrification solution versus (b) storage without a surrounding vitrification solution. We investigated the effects of 2 additives (chondroitin sulfate and ascorbic acid) for vitrification of articular cartilage. Design: Healthy porcine stifle joints (n = 11) from sexually mature pigs were collected from a slaughterhouse within 6 hours after slaughtering. Dimethyl sulfoxide, ethylene glycol, and propylene glycol were permeated into porcine articular cartilage using an optimized 7-hour 3-step cryoprotectant permeation protocol. Chondrocyte viability was assessed by a cell membrane integrity stain and chondrocyte metabolic function was assessed by alamarBlue assay. Femoral condyles after vitrification were assessed by gross morphology for cartilage fractures. Results: There were no differences in the chondrocyte viability (~70%) of 10-mm osteochondral dowels after vitrification with or without the surrounding vitrification solution. Chondrocyte viability in porcine femoral condyles was significantly higher after vitrification without the surrounding vitrification solution (~70%) compared to those with the surrounding vitrification solution (8% to 36%). Moreover, articular cartilage fractures were not seen in femoral condyles vitrified without surrounding vitrification solution compared to fractures seen in condyles with surrounding vitrification solution. Conclusions: Vitrification of femoral condyle allografts can be achieved by our optimized approach. Removing the surrounding vitrification solution is advantageous for vitrification outcomes of large size osteochondral allografts. abstract_id: PUBMED:22961315 Do fresh osteochondral allografts successfully treat femoral condyle lesions? Background: Fresh osteochondral allograft transplantation is an increasingly common treatment option for chondral and osteochondral lesions in the knee, but the long-term outcome is unknown. Questions/purposes: We determined (1) pain and function, (2) frequency and types of reoperations, (3) survivorship at a median of 13.5 years, and (4) predictors of osteochondral allograft failure in the distal femur. Methods: We evaluated 122 patients (129 knees) who underwent osteochondral allograft transplantation of the femoral condyle. Mean age was 33 years and 53% were male. Clinical evaluation included the modified Merle d'Aubigné-Postel (18-point), IKDC, and Knee Society function (KS-F) scores. We defined graft failure as revision osteochondral allografting or conversion to arthroplasty. We determined whether patient characteristics or attributes of the graft influenced failure. Minimum followup was 2.4 years (median, 13.5 years); 91% had more than 10 years of followup. Results: Mean modified Merle d'Aubigné-Postel score improved from 12.1 to 16, mean IKDC pain score from 7.0 to 3.8, mean IKDC function score from 3.4 to 7.2, and mean KS-F score from 65.6 to 82.5. Sixty-one knees (47%) underwent reoperations. Thirty-one knees (24%) failed at a mean of 7.2 years. 
Survivorship was 82% at 10 years, 74% at 15 years, and 66% at 20 years. Age of more than 30 years at time of surgery and having two or more previous surgeries for the operated knee were associated with allograft failure. Conclusions: Followup of femoral condyle osteochondral allografting demonstrated durable improvement in pain and function, with graft survivorship of 82% at 10 years. abstract_id: PUBMED:26919759 Large osteochondral lesions of the femoral condyles: Treatment with fresh frozen and irradiated allograft using the Mega OATS technique. Background: The purpose of this study was to review the clinical results of irradiated fresh frozen osteochondral allografts for large osteochondral defects of the knee using the Mega-OATS technique. Methods: Nine patients with a mean age of 32.1±6.6 (18 to 44) underwent Mega-OATS transplantation with irradiated (2.5Mrad), fresh frozen distal femur allograft. Three patients also underwent ACL-reconstruction; one patient a high tibial osteotomy. The underlying indication was osteochondrosis dissecans in four and trauma in five patients. The defect size was 25×25mm in three patients and 30×30mm in six patients and the depth ranged from eight to 14mm. All OCD lesions were located on the medial femoral condyle; two of the traumatic lesions were located on the lateral femoral condyle. Clinical outcome was assessed using the Lysholm and IKDC scores. Radiographic incorporation was evaluated using serial radiographs and MR imaging at one year post surgery. All patients were reviewed at three, six, 12, and 24 months following surgery. Results: The Lysholm (IKDC) score improved significantly (p=0.02 and p=0.007, respectively) within and between patients during the follow-up period from 40.9 (37) to 90.9 (87.1) at 2 years. Radiographic union was observed in all patients at three months; on MR imaging at one year osseous integration was observed in eight patients. Graft subsidence with loss of the overlying cartilage was observed in one and subchondral cystic changes at the implantation site were seen in another patient. Conclusion: The results of this case series suggest that irradiated osteochondral allograft provides significant medium-term clinical improvement in patients treated for large osteochondral lesions of the femoral condyles. Level Of Evidence: IV, case series. abstract_id: PUBMED:27793804 Osteochondral Allograft Donor-Host Matching by the Femoral Condyle Radius of Curvature. Background: Conventional osteochondral allograft (OCA) matching, requiring orthotopic, size-matched condyles, and narrow surgical time windows often prohibit timely transplantation. Hypothesis: The femoral condyle radius of curvature (RoC) is an appropriate, isolated criterion for donor-host matching in fresh OCAs, potentially enhancing matching efficiency when compared with conventional matching techniques. Study Design: Descriptive laboratory study. Methods: In part 1 of this study, 3-dimensional digital reconstructions of 14 randomly selected, cadaveric distal femoral hemicondyles were performed. Each condyle was divided into anterior, middle, and posterior zones. A virtual best-fit grid was applied to each, and each zone's sagittal- and coronal-plane RoCs were determined. Seven nonorthotopic OCA transplantations were performed based on RoC matching with 1-mm tolerance, and the preoperative and postoperative surface geometry were quantified to assess the accuracy of articular surface restoration. Of note, each donor-host pair did not match by the conventional method.
In part 2 of this study, 12 cadaveric distal femora were categorized by size and digitized in the aforementioned manner. Simulated circular defects measuring 20, 25, and 30 mm in diameter were introduced into each zone. OCA matches were determined based on donor and host RoCs, and the total number of potential matches (of 71 total comparisons) was recorded as a percentage for each simulated defect. Finally, the results of RoC matching were compared with the conventional method for simulated defects in all zones of both the medial and lateral femoral condyles. Results: Part 1: The mean surface deviation after OCA transplantation was -0.09 mm, with a mean maximum protrusion at any point of 0.59 mm. Part 2: Using the RoC, 20-mm defects had a 100% chance of being matched. Defects of 25 and 30 mm had a 91% and 64% chance of being matched, respectively. Compared with the conventional method, the RoC method yielded a 3.2-fold greater match rate for lesions of the medial and lateral femoral condyles (P = .02). Conclusion: This investigation shows that femoral condyle RoCs in the sagittal and coronal planes may be useful, alternative matching criteria, expanding on current standards. Clinical Relevance: These matching criteria may increase the number of available matches, reduce wait times for patients, and reduce the number of wasted grafts. abstract_id: PUBMED:32064295 Contralateral Lateral Femoral Condyle Allografts Provide an Acceptable Surface Match for Simulated Classic Osteochondritis Dissecans Lesions of the Medial Femoral Condyle. Background: Osteochondral allograft transplantation is an effective technique for repairing large lesions of the medial femoral condyle (MFC), but its use is limited by graft availability. Purpose/hypothesis: The present study aimed to determine whether contralateral lateral femoral condyle (LFC) allografts can provide an acceptable surface match for posterolateral MFC lesions characteristic of classic osteochondritis dissecans (OCD). The hypothesis was that LFC and MFC allografts will provide similar surface contour matches in all 4 quadrants of the graft for posterolateral MFC lesions characteristic of OCD. Study Design: Controlled laboratory study. Methods: Ten fresh-frozen recipient human MFCs were each size-matched to 1 ipsilateral medial and 1 contralateral LFC donor (N = 30 condyles). After a nano-computed tomography (nano-CT) scan of the native recipient condyle, a 20-mm circular osteochondral "defect" was created 1 cm posterior and 1 cm medial to the roof of the intercondylar notch (n = 10). A size-matched, random-order donor MFC or LFC plug was then harvested, transplanted, and scanned with nano-CT. Nano-CT scans were then reconstructed, registered to the initial scan of the recipient MFC, and processed in MATLAB to determine the height deviation (dRMS) between the native and donor surfaces and percentage area unacceptably (>1 mm) proud (%Aproud) and sunken (%Asunk). Circumferential step-off height (hRMS) and percentage circumference unacceptably (>1 mm) proud (%Cproud) and sunken (%Csunk) were measured using DragonFly software. The process was then repeated for the other allograft plug. Results: Both MFC and LFC plugs showed acceptable step-off heights in all 4 quadrants (range, 0.53-0.94 mm). Neither allograft type nor location within the defect had a significant effect on step-off height (hRMS), surface deviation (dRMS), %Aproud, or %Asunk.
In general, plugs were more unacceptably sunken than proud (MFC, 13.4% vs 2.4%; LFC, 13.2% vs 8.1%), although no significant differences in %Csunk were seen between allograft types or locations within the defect. In LFC plugs, %Cproud in the lateral quadrant (28.0% ± 26.1%) was significantly greater compared with all other quadrants (P = .0002). Conclusion: The present study demonstrates that 20-mm contralateral LFC allografts provide an acceptable surface match for posterolateral MFC lesions characteristic of OCD. Clinical Relevance: With comparable surface matching, MFC and LFC allografts can be expected to present similar stresses on the knee joint and achieve predictably positive clinical outcomes, thus improving donor availability and reducing surgical wait times for matches. abstract_id: PUBMED:34530131 Good medium-term functional results in reconstruction of lateral femoral condyle osteochondral lesions. Introduction: Results of autologous osteochondral graft (mosaicplasty) are well-reported for the medial femoral condyle but much less for the lateral femoral condyle. The main aim of the present study was to assess results specifically for lateral condyle reconstruction. The secondary objective was to compare results according to the two main etiologies: osteochondritis dissecans (OCD) and osteochondral fracture (OCF). Hypothesis: The study hypothesis was that medium-term functional and radiological results of lateral femoral condyle autologous osteochondral graft are good. Material And Methods: A single-center retrospective continuous study included 24 patients with symptomatic focal lateral femoral condyle osteochondral lesion treated by osteochondral autograft, at a minimum 24 months' follow-up. Mosaicplasty was performed for lesions <200 mm2, with associated fixation for lesions >200 mm2 with viable osteochondral fragments. IKDC, Lysholm and KOOS functional scores and pain on visual analog scale (VAS) were collected at last follow-up. Progression toward osteoarthritis was assessed on comparative X-ray. Results: Twenty patients were analyzed (11 OCD, 9 OCF) at a mean of 66.4±44 months. Mean pain on VAS was 1.8±2 out of 10. Mean subjective IKDC, Lysholm and global KOOS scores were respectively 68.1±26, 76.5±22 and 73.9±21. Two patients showed Ahlbäck grade 2 lateral femorotibial osteoarthritis. Functional results were comparable between OCD and OCF. Conclusion: Reconstruction of lateral femoral condyle osteochondral lesion by osteochondral autograft gave good medium-term functional results, whatever the etiology. Longer-term studies are needed to assess progression toward osteoarthritis and functional deterioration over time. Level Of Evidence: IV; retrospective study. abstract_id: PUBMED:29138892 Partial and complete joint transplantation with fresh osteochondral allografts-the FLOCSAT concept Cartilage defects in adult patients do not heal well. Fresh osteochondral allograft (OCA) transplantation is based on mature, living, mechanically sound hyaline cartilage attached to a bone interface, which is brought into an osteochondral defect, where it becomes osseointegrated. According to current knowledge, intact hyaline cartilage tissue is immune privileged and does not, in contrast to bone, meniscus or ligaments, cause an immune reaction. The technique has the unique advantage of transplanting viable, mature and mechanically stable hyaline cartilage into the affected area.
An OCA is the only biological surgical technique for chondral and osteochondral lesions after failed cell-based techniques or autologous osteochondral transplantation. Fresh osteochondral allografts with mainly small cylindrical transplants show survival of 20 years and more. Based on this experience the FLOCSAT (fresh large [> 10 cm2] osteochondral shell allograft transplantation) concept with the thinnest possible (< 6-8 mm) bone thickness has been developed. Cells survive in special media at 4 °C for 4 weeks or more and are transplanted with a minimum of 70% living cells in a live/dead assay and cell density > 200 cells/mm2. FLOCSATs can replace parts or entire joints as uniplanar or multiplanar (n-planar)-FLOCSAT, as unipolar, bipolar or tripolar (n-polar)-FLOCSAT, and in combination with meniscus and/or ligaments (nMnL)-FLOCSAT (n number of structures). The FLOCSAT concept was applied successfully to knee, ankle and elbow joints. All transplants showed sound osseointegration. Cartilage failure was the reason for unsuccessful outcome. Challenges remain regarding graft availability, precise size matching, complex logistics, demanding surgical technique in complex geometries, and open questions in immunology and chimerism. abstract_id: PUBMED:25478378 Transient patellar dislocation resulting in simultaneous osteochondral fractures of patella and lateral femoral condyle - a case report. Transient patellar dislocations are commonly associated with bony contusions or osteochondral fractures involving the medial facet of patella or lateral femoral condyle. Simultaneous osteochondral fractures are rare and have not been reported in the adult. The authors report a case of combined osteochondral fracture of patella and lateral femoral condyle following acute patellar dislocation in an adult, which was misdiagnosed as meniscal injury. Both the osteochondral fracture fragments were rigidly fixed with headless compression screws and repair of the medial retinaculum was done. At latest follow up after two years, the patient had regained full range of motion with no further episodes of patellar dislocation. The rarity of this combination of injury along with difficulty in interpreting radiographs makes this case interesting. abstract_id: PUBMED:29202991 Osteochondral Lesions of the Talus: Treatment With Fresh Frozen Allograft. Introduction: Symptomatic talar osteochondral lesions are about 50% refractory to conservative treatment requiring a surgical solution. In the case of large chronic lesions, the use of bone graft taken from a tissue bank is an alternative that enables the defect to be filled without causing donor site morbidity. Material And Methods: Eight patients treated with talar osteochondral allograft in lesions greater than 20mm in diameter were analyzed - 4 males and 4 females aged 39.5 years on average. Evaluation was performed according to AOFAS scale and VAS as well as incorporation and continuation evaluations according to CT and MRI studies. Mean follow-up was 46.8 months. Results: A 34.6-point improvement on average according to AOFAS. A 6.7-point pain improvement on average according to VAS. Incorporation in 100% of the cases. Two cases showed partial resorption and one case showed peri-graft lysis less than 30%. There was no collapse. Conclusions: Fresh frozen osteochondral allografts are a viable alternative when treating large osteochondral lesions, thereby avoiding morbidity of autologous donor areas or arthrodesis procedures.
abstract_id: PUBMED:12152982 Fresh osteochondral allografts. Fresh osteochondral allografts have a long clinical history and have demonstrated use in a wide spectrum of knee joint pathology. The allografting procedure takes advantage of the unique characteristics of osseous and chondral tissue components. Transplanted bone is readily incorporated by the host while the articular cartilage survives transplantation. Allografts have demonstrated >75% clinical success in the treatment of focal femoral condyle lesions due to trauma, chondral injury, osteochondral trauma, osteochondritis dissecans, avascular necrosis, and post-traumatic reconstruction. Fresh allografts also are finding an increasing role in the salvage of difficult cases that have failed other cartilage procedures, and particularly in individuals who are believed to be too young and active for joint arthroplasty. Further refinements in the technical aspects of the allografting procedure, as well as further understanding of the biology of osteochondral allografts, should lead to improved clinical outcomes. Answer: Yes, fresh osteochondral allografts (OCAs) have been shown to successfully treat femoral condyle lesions. A study with a median follow-up of 13.5 years found that femoral condyle osteochondral allografting demonstrated durable improvement in pain and function, with graft survivorship of 82% at 10 years (PUBMED:22961315). Another study reported good medium-term functional results in the reconstruction of lateral femoral condyle osteochondral lesions, indicating that osteochondral autografts can provide good functional outcomes regardless of the etiology of the lesion (PUBMED:34530131). Additionally, fresh osteochondral allografts have been used for a wide spectrum of knee joint pathology with more than 75% clinical success in the treatment of focal femoral condyle lesions due to various causes (PUBMED:12152982). Furthermore, the use of fresh frozen and irradiated allografts for large osteochondral defects of the knee using the Mega-OATS technique has shown significant medium-term clinical improvement in patients (PUBMED:26919759). The FLOCSAT (fresh large osteochondral shell allograft transplantation) concept has also been applied successfully to knee, ankle, and elbow joints, with all transplants showing sound osseointegration (PUBMED:29138892). In summary, the evidence suggests that fresh osteochondral allografts are an effective treatment option for femoral condyle lesions, providing significant improvements in pain, function, and graft survivorship over medium to long-term follow-up periods.
Instruction: Is it necessary to retain the tensor tympani tendon in tympanoplasty? Abstracts: abstract_id: PUBMED:25778997 Is it necessary to retain the tensor tympani tendon in tympanoplasty? Objectives/hypothesis: To evaluate the acoustic effect after canal wall-down tympanoplasty with sectioning of the tensor tympani muscle (TTM) tendon in patients with chronic otitis media. Study Design: Prospective, controlled clinical trial. Methods: From January 2010 to April 2014, 212 patients underwent canal wall-down tympanoplasty with ossicular chain reconstruction in one-stage surgery. For this study, the patients were assigned to two groups. In group 1 (107 patients), the TTM tendon would be removed during the surgery, whereas it would be retained in group 2 (105 patients). Pre- and postoperative air conduction and bone conduction thresholds at 0.5, 1, 2, and 4 kHz were reported, and the air-bone gaps (ABGs) were measured. Results: Mean postoperative ABG was 16.0 dB in group 1 and 17.0 dB in group 2 and the ABG closure was 11.9 ± 8.5 in group 1 and 11.8 ± 9.7 in group 2. Neither difference was statistically significant (P = .2381 and P = .9625). Overall, 84.1% of patients in group 1 and 75.2% of patients in group 2 attained a postoperative ABG of 20 dB or less. The success rate in group 1 was slightly higher than in group 2, but the difference was not significant (P = .1255). Conclusions: The section of the tensor tympani muscle tendon in canal wall-down tympanoplasty with ossiculoplasty had no statistically significant influence on sound transmission and can be a safe maneuver in middle ear surgery. Level Of Evidence: 4 abstract_id: PUBMED:34886600 Postoperative hearing in tympanoplasty preserving the manubrium of malleus and tensor tympani muscle tendon Objective: To study the effect of retaining the manubrium of malleus and tensor tympani muscle tendon (TT) on postoperative hearing reconstruction in tympanoplasty. Methods: Ninety-seven patients underwent tympanoplasty and ossiculoplasty in Peking University Third Hospital from January 2012 to December 2017, and their postoperative results of audiometry were analyzed and compared with the preoperative results. The patients were divided into two groups according to whether the manubrium of malleus and TT were retained during the operation. The retention group included the cases with the manubrium of malleus and TT retained; the resection group included the cases with TT resected, with the manubrium retained or resected. The t test was used to analyze and compare the differences in air conduction threshold, air-bone gap (ABG) and postoperative improvement between the two groups. Results: One year after operation, the air conduction thresholds and ABG were lower in the retention group (n= 44) than those in the resection group (n= 53) at each frequency, and the differences were statistically significant at 0.25, 0.5 and 1.0 kHz (P<0.05); the postoperative improvement of hearing thresholds and ABG at the above frequencies in the retention group was better than that in the resection group. In cases with canal-wall-up operations or partial ossicular prostheses implanted, the above differences still existed between the two groups with statistical significance (P<0.05); while in cases with canal-wall-down operations or total ossicular prostheses implanted, there were no significant differences between the two groups (P>0.05).
Conclusion: The preservation of the manubrium of malleus and TT is significant for postoperative hearing improvement in tympanoplasty, especially in the canal-wall-up operation with partial ossicular prostheses. abstract_id: PUBMED:16372987 Reconstruction of the tensor tympani tendon. We describe a case in which reconstruction of the tendon of the tensor tympani muscle was necessary for the successful restoration of sound conduction. The right ear of a nine-year-old boy was treated for cholesteatoma with staged surgery. During the first operation, the tendon was cut to ensure good visibility in the tympanic cavity. Post-operatively, maintenance of aeration of the middle ear required ventilation tubes at first and Valsalva manoeuvres later on. The position of the reconstructed tympanic membrane varied a great deal, moving between the medial wall of the tympanic cavity and extreme bulging. This made exact measurement of a columella for ossicular reconstruction impossible. The preserved handle of the malleus was bound to the cochleariform process with ionomer cement, using a piece of surgical suture material as a substitute for the tendon. This arrangement prevented the tympanic membrane from undergoing excessive lateral movement after inflation and the ossicular chain was replaced with a successful ossiculoplasty with an autogenous bone 'drum to footplate' columella. The pre-operative 55.0 dB air-bone gap decreased immediately to 3.3 dB, widening after three years to 15.0 dB. abstract_id: PUBMED:27693602 Fetal development of the pulley for muscle insertion tendons: A review and new findings related to the tensor tympani tendon. The existence of hard tissue pulleys that act to change the direction of a muscle insertion tendon is well known in the human body. These include (1) the trochlea for the extraocular obliquus superior muscle, (2) the pterygoid hamulus for the tensor veli palatini muscle, (3) the deep sulcus on the plantar aspect of the cuboid bone for the peroneus longus tendon, (4) the lesser sciatic notch for the obturator internus muscle, and (5) the bony trochleariformis process for the tensor tympani muscle tendon. In addition, (6) the stapedius muscle tendon shows a lesser or greater angulation at the pyramidal eminence of the temporal bone. Our recent studies have shown that the development of pulleys Nos. 1 and 2 can be explained by a change in the topographical relationship between the pulley and the tendon, that of pulley No. 3 by the rapidly growing calcaneus pushing the tendon, and that of pulley No. 4 by migration of the insertion along the sciatic nerve and gluteus medius tendon. Therefore, in Nos. 1-4, an initially direct tendon curves secondarily and obtains an attachment to the pulley. In case No. 6, the terminal part of the stapedius tendon originates secondarily from the interzone mesenchymal tissue of the incudostapedial joint. In the case of pulley No. 5, we newly demonstrated that its initial phase of development was similar to No. 6, but the tensor tympani tendon achieved a right-angled turn under guidance by a specific fibrous tissue and it migrated along the growing malleus manubrium. abstract_id: PUBMED:34106408 The Tensor Tympani Tendon: A Hypothetical Site of Origin of Congenital Cholesteatoma. Multiple theories have been discussed about the etiopathogenesis of congenital middle ear cholesteatoma (CMEC) and its specific site of origin.
The intraoperative identification of the precise location of the keratinous mass is important to guarantee its complete removal, in order to reduce the risk of recurrence. This study proposes the tensor tympani tendon (TTT) as a possible site of origin of CMEC. All CMECs treated between 2013 and 2019 were reviewed. Only Potsic stage I lesions were included. Preoperative radiologic images were compared to intraoperative findings. Three removed TTTs were sent for histologic evaluation. Seven patients were included (M:F = 3:4). Preoperative CT images were classified as type A in 2 cases (28.6%) and type B in 5 cases (71.4%). At intraoperative evaluation all CMEC sacs were found pedunculated on the TTT. The histologic examinations confirmed the connection between the cholesteatomatous sac and the TTT. According to the correlation of imaging, intraoperative findings and histology, we proposed that the TTT could be the primary site from which CMEC originates. abstract_id: PUBMED:26697069 Effects of Sacrificing Tensor Tympani Muscle Tendon When Manubrium of Malleus Is Foreshortened in Type I Tympanoplasty. The current study aims at observing effects of sacrificing the tensor tympani tendon when the manubrium of malleus is foreshortened or retracted on graft uptake, hearing improvement, and occurrence of complications if any during type I tympanoplasty surgery for central perforations. 42 patients were included in group A where the tensor tendon was sectioned and 42 patients were included in group B where the tensor tympani tendon was retained and kept intact. Graft uptake rates were very good in both groups but hearing improvement was found significantly better in group A than in group B. No unusual or undesired complications were seen in any of the cases. Sectioning of the tensor tympani tendon is a safe and effective procedure in cases where the manubrium is foreshortened. abstract_id: PUBMED:17553724 Experimental measurement and modeling analysis on mechanical properties of tensor tympani tendon. In this paper, we report mechanical properties of the tensor tympani tendon of human ear measured from uniaxial tensile, stress relaxation and failure tests. The hyperelastic Ogden model and digital image correlation method were employed to analyze experimental data. The constitutive equation of the tendon was derived through data iteration processes, and Young's modulus was presented as a function of stress. The viscoelastic property of the tendon was described by stress relaxation function and hysteresis. Furthermore, three-dimensional finite element analysis was carried out on five tendon models to investigate the relationship between the structure and properties. The dimensions of the tendon were also measured by image processing techniques and presented with statistical significance. The structure and properties of the tensor tympani tendon reported in this study add new data into the study of ear tissue biomechanics. abstract_id: PUBMED:20017207 Topography of the chorda tympani nerve and the tensor tympani muscle in carnivores provides a new synapomorphy for Herpestidae (Carnivora, Mammalia). The topographical relationship of the chorda tympani nerve (chorda tympani) to the tensor tympani muscle in the middle ear of carnivores provides new phylogenetic information.
The examination of histological serial sections of 16 carnivore species representing most families revealed two distinct character states concerning the course of the chorda tympani: a hypotensoric state with the nerve running below the insertion tendon of the tensor tympani muscle, and an epitensoric state with the nerve running above the tendon. The shift from the plesiomorphic hypotensoric chorda tympani to the apomorphic epitensoric condition occurred once in carnivore phylogeny: Only in the herpestid species under study does the chorda tympani cross above the tensor tympani muscle. Therefore, we introduce the epitensoric pattern as a new synapomorphy for herpestids. Within the herpestids we find the following structural distinctions: Herpestes javanicus and Galerella sanguinea have a chorda tympani running in a sulcus directly above the insertion of the tensor tympani muscle, whereas in the eusocial herpestid species Suricata suricatta and Mungos mungo the chorda tympani lies far above the insertion of the muscle. abstract_id: PUBMED:28057076 Audiometric findings with voluntary tensor tympani contraction. Background: Tensor tympani contraction may have a "signature" audiogram. This study demonstrates audiometric findings during voluntary tensor tympani contraction. Methods: Five volunteers possessing the ability to voluntarily contract their tensor tympani muscles were identified and enrolled. Tensor tympani contraction was confirmed with characteristic tympanometry findings. Study subjects underwent conventional audiometry. Air conduction and bone conduction threshold testing was performed with and without voluntary tensor tympani contraction. Main Outcome Measure: Changes in air conduction and bone conduction thresholds during voluntary tensor tympani contraction. Results: Audiometric results demonstrate a low frequency mixed hearing loss resulting from tensor tympani contraction. Specifically, at 250 Hz, air conduction thresholds increased by 22 dB and bone conduction thresholds increased by 10 dB. Conclusions: Previous research has demonstrated a low frequency conductive hearing loss in the setting of tensor tympani contraction. This is the first study to demonstrate a low frequency mixed hearing loss associated with tensor tympani contraction. This finding may aid in the diagnosis of disorders resulting from abnormal tensor tympani function. Tensor tympani contraction should be included on the differential for low frequency mixed hearing loss. abstract_id: PUBMED:2350809 The tensor tympani muscle reflex in nasopharyngeal carcinoma The initial stages of carcinoma of the nasopharynx may be manifested by different types of impaired hearing. The Eustachian tube may be obstructed by a tumour or its orifice compressed, sometimes the tumour penetrates into the middle ear. The authors discuss the position which may develop in infiltration of the semicanalis m. tensoris tympani and how functional disorders of this muscle may be manifested. They describe the provoked reflex of this muscle by trimolar acetic acid and investigate tympanometric reactions of this artificially provoked contraction of the m. tensor tympani in patients with carcinoma of the nasopharynx. Answer: The necessity of retaining the tensor tympani tendon (TTT) in tympanoplasty is a subject of debate, with studies showing varying results.
One study found that sectioning the TTT during canal wall-down tympanoplasty with ossiculoplasty had no statistically significant influence on sound transmission, suggesting that it can be a safe maneuver in middle ear surgery (PUBMED:25778997). However, another study reported that preserving the manubrium of malleus and TTT is significant for postoperative hearing improvement in tympanoplasty, especially in canal-wall-up operations with partial ossicular prostheses (PUBMED:34886600). In a case where reconstruction of the TTT was necessary for the successful restoration of sound conduction, the postoperative air-bone gap decreased significantly, indicating the importance of the TTT in certain situations (PUBMED:16372987). Additionally, the TTT has been proposed as a possible site of origin of congenital cholesteatoma, highlighting its relevance in middle ear pathology (PUBMED:34106408). Another study observed that sacrificing the TTT when the manubrium of malleus is foreshortened in type I tympanoplasty resulted in better hearing improvement and no unusual complications, suggesting that sectioning of the TTT can be safe and effective in specific cases (PUBMED:26697069). The mechanical properties of the TTT have been studied, adding data to the understanding of ear tissue biomechanics, which may influence surgical decisions (PUBMED:17553724). Furthermore, the relationship of the TTT with other anatomical structures, such as the chorda tympani nerve, has been used to provide phylogenetic information in the study of carnivores (PUBMED:20017207). In summary, the decision to retain or section the TTT in tympanoplasty may depend on the specific circumstances of the surgery, such as the type of tympanoplasty being performed, the condition of the manubrium of malleus, and the individual patient's anatomy and pathology. Some studies suggest that retaining the TTT can be beneficial for postoperative hearing, while others indicate that sectioning it may be safe and even advantageous in certain cases.
Instruction: Are encapsulated anterior glass-ionomer restoratives better than their hand-mixed equivalents? Abstracts: abstract_id: PUBMED:19059689 Are encapsulated anterior glass-ionomer restoratives better than their hand-mixed equivalents? Objectives: The performance of encapsulated anterior GI restoratives was compared with their hand-mixed equivalents for the range of powder to liquid mixing ratios routinely encountered clinically. The clinically induced variability of powder to liquid mixing variations of an anhydrous GI restorative formulation was also compared with conventional GI restorative formulations that contained a polyalkenoic acidic liquid. Methods: Mean compressive fracture strengths, mean elastic moduli and mean total volumetric wear were determined for the encapsulated anterior GI restoratives mechanically mixed in a Capmix or Rotomix machine and the hand-mixed GI restoratives prepared with powder contents reduced from that recommended by the manufacturer (100%) in 10% increments to 50% for a constant weight of liquid. Multiple comparisons of the group means were made using a one-way analysis of variance (ANOVA) and Tukey's multiple range tests employed at P<0.05. Results: For the encapsulated GI restoratives, the mean compressive fracture strength, mean elastic modulus and in-vitro wear resistance were significantly increased compared with their hand-mixed equivalents prepared with powder contents below that recommended by the manufacturers. The conventional GI restoratives resulted in a linear deterioration (R2>0.95) of the mean compressive fracture strength and mean elastic modulus with powder content compared with the bi-modal deterioration for the anhydrous GI restorative. Conclusions: Encapsulated anterior GI restoratives outperform their hand-mixed equivalents for the range of powder to liquid mixing ratios routinely encountered clinically such that they are advocated for use in clinical practice. Anhydrous GI restorative formulations are more susceptible to clinically induced variability on mixing compared with conventional GI restorative formulations that contained a polyalkenoic acidic liquid. abstract_id: PUBMED:12670073 An assessment of encapsulated versus hand-mixed glass ionomer restoratives. Capsulation should enable uniform proportioning and mixing of dental restoratives so that functional properties of the cementitious mass will not be susceptible to clinically induced variability. Mechanical mixing induces a definite pore distribution determined by the viscosity of the system. This study evaluated the mixing process on the performance of a range of glass ionomer dental restoratives. Mean compressive fracture strengths and standard deviations and the associated Weibull Moduli (m) were determined for six glass ionomer restoratives that were either encapsulated or mixed by hand. Working characteristics were assessed using an oscillating rheometer. Scanning electron microscopy and image analysis was used to investigate the influence of the mixing method on pore distribution. The fracture strength data for some encapsulated restoratives resulted in significant differences compared with hand-mixing. Rotomix (compared with the Capmix mechanical agitator) resulted in increased Weibull moduli and 10% failure stress for the two restoratives that were investigated.
Encapsulated restoratives that were prepared utilizing Rotomix or Capmix resulted in no significant differences for working characteristics; however, the setting time for the ChemFlex in Caps was extended compared with the hand-mixed ChemFlex. Not all restoratives had reduced porosity and improved performance following mixing with a Rotomix. This suggested that optimization of the initial viscosity of the system by manipulating the individual proportions of the constituents may not have been appropriate for all the restoratives investigated. The increased viscosity for hand-mixed ChemFlex prepared to a consistency of 3.8 g/ml compared with encapsulated ChemFlex in Caps prepared to a consistency of 3.5 g/ml was responsible for the reduced setting time. abstract_id: PUBMED:18177702 Is encapsulation of posterior glass-ionomer restoratives the solution to clinically induced variability introduced on mixing? Objectives: Three GI restorative systems were evaluated to determine if encapsulated GI restoratives performed more favorably than the hand-mixed equivalents prepared with powder contents progressively decreased from that recommended by manufacturers in 10% increments for a constant weight of liquid which are routinely employed in clinical practice. Methods: Mean compressive fracture strengths, associated Weibull moduli and mean elastic moduli were determined for series of 30 nominally identical cylindrical specimens for three hand-mixed GI restoratives prepared with 100-50% of the recommended powder content for a constant weight of liquid and the equivalent encapsulated GI restoratives mechanically mixed in the Capmix or Rotomix machines. Results: Decreasing the powder content of the three hand-mixed GI restoratives for a constant weight of liquid from that recommended by the manufacturers resulted in a progressive deterioration of the mean compressive fracture strengths and the mean elastic modulus. The elastic modulus was more sensitive than the compressive fracture strength to highlighting the impact of mixing ratio variations in the GI restoratives. There were no significant differences between mean compressive fracture strengths of mechanically mixed encapsulated GI restoratives, however, variations in mean elastic modulus were dependent upon the mixing time employed. Conclusions: The employment of an intrinsic material property was more sensitive than a non-intrinsic material property to highlight the impact of mixing ratio variations in hand-mixed restoratives and the effect that the method of mechanical mixing has on encapsulated GI restoratives. Encapsulated GI restoratives are a potential solution to the operator induced variability associated with hand-mixed GI restoratives. abstract_id: PUBMED:32821430 An in vitro assessment of the physical properties of manually-mixed and encapsulated glass-ionomer cements. Objectives: The last decade has seen a variety of modifications of glass-ionomer cements (GICs), such as inclusion of bioactive glass particles and dispensing systems. Hence, the aim was to systematically evaluate effect of mixing modes and presence of reactive glass additives on the physical properties of several GICs. Materials And Methods: The physical properties of eight commercial restorative GICs; Fuji IX GP Extra (C&H), Ketac™ Fill Plus Applicap (C&H), Fuji II LC (C&H), Glass Carbomer Cement and Equia® Forte Fil, capsulated versus manually mixed were assessed.
256 cylindrical specimens were prepared for compressive strength and microhardness, whilst 128 disc-shaped specimens were prepared for biaxial flexural strength tests. Fluid uptake and fluoride release were assessed. Data were analysed using one-way ANOVA and Games-Howell post-hoc tests (alpha = 0.05). Results: Both encapsulated GIC/RMGICs exhibited significantly improved mechanical properties in comparison to manually mixed equivalents, which in turn showed higher fluid uptake and early fluoride release (p < 0.05). The glass carbomer cement exhibited improved mechanical properties post ageing and evidence of mineral deposits were apparent in the microstructure. Conclusions: The mixing mode and inclusion of reactive glass additives in cements had a statistically significant effect on physical properties of the selected GICs-RMGICs. abstract_id: PUBMED:25593050 A discriminatory mechanical testing performance indicator protocol for hand-mixed glass-ionomer restoratives. Objectives: To identify a reproducible and discriminatory mechanical testing methodology to act as a performance indicator for hand-mixed glass-ionomer (GI) restoratives. Methods: Groups of 20 (five batches of four) cylinders (6.0±0.1 mm height, 4.0±0.1 mm diameter) for compressive fracture strength (CFS) and compressive modulus (CM) testing, bars (25.0±0.1 mm length, 2.0±0.1 mm width, 2.0±0.1 mm thickness) for three-point flexure strength (TFS) and tensile flexural modulus (TFM) testing, discs (13.0±0.1 mm diameter, 1.0±0.1 mm thickness and 10.0±0.1 mm diameter, 3.10±0.03 mm thickness) for biaxial flexure strength (BFS) and Hertzian indentation (HI) testing, respectively, were prepared using a hand-mixed GI restorative manipulated with 100-20% (in 10% increments) of the manufacturer's recommended powder content for a constant weight of liquid. Data were statistically analyzed at p<0.05, the coefficient of variation (CoV) was assessed for the four tests at each powder:liquid mixing ratio investigated (n=9) and a Weibull analysis performed on the CFS, TFS and BFS data to assess the reliability of the data sets. The failure mode and fracture origin of the HI specimens was assessed by fractography. Results: For the hand-mixed GI restorative, a progressive reduction in the powder content (by 10% for a constant weight of liquid) resulted in a progressive linear deterioration (p<0.001) in the CFS (R²=0.957), CM (R²=0.961) and TFM (R²=0.982) data. However, no linear deterioration (p>0.05) was identified for the TFS (R²=0.572), BFS (R²=0.81) and HI (R²=0.234). The CoV and Weibull data identified distinct regions--three for the CFS and TFS data and two for the BFS data sets, within the range of powder:liquid mixing ratios investigated. Fractographic analysis of HI specimens revealed a transition in failure mode from bottom-initiated radial cracking to top-initiated cone cracking on reducing the powder content for a constant weight of liquid. Significance: The CFS test is the only discriminatory performance indicator for hand-mixed GIs from amongst the four mechanical testing approaches (CFS, TFS, BFS and HI) investigated. The CM and TFM represent an intrinsic material property independent of specimen dimensions and may be used as an adjunct to a mechanical testing approach when investigating hand-mixed GIs.
Purpose: To determine the amount of fluoride released from several glass ionomer products [Fuji Cap II, Fuji Ionomer Type II, Ketac-Silver, Chelon-Silver, and Miracle Mix (hand-mixed and encapsulated)], and to compare them between triturated (5, 10, 15 seconds) encapsulated and hand-mixed glass ionomers. Materials And Methods: The amount of fluoride released at 1, 2, 3, 7, 14, 21 and 28 days after the start of immersion in deionized-distilled water at 37°C was determined with a fluoride ion selective electrode. Rates of fluoride release and cumulative amounts of fluoride release were calculated. Results: Each glass ionomer released the greatest amount of fluoride during the first 24-hour period. Although there was some variation between products, generally the hand-mixed specimens released significantly (P < 0.05) less fluoride than those prepared by mechanical trituration (10 and 15 seconds). The cumulative amounts of fluoride released from the mechanically triturated glass ionomers were significantly (P < 0.05) greater than the amounts released from the hand-mixed glass ionomers. No significant differences in the cumulative amounts of fluoride released from triturated materials were found among the different trituration times. abstract_id: PUBMED:26003230 Fracture toughness testing: A discriminatory mechanical testing performance indicator for glass-ionomer restoratives? Objectives: To investigate the single-edge notched (SEN) bend fracture toughness (KIC) testing methodology as a reproducible and discriminatory mechanical testing protocol for encapsulated and hand-mixed glass-ionomers (GI). Methods: SEN bend test-pieces (35.0±0.1mm length, 6.0±0.1mm width, 3.0±0.1mm thickness with a sharp notch formed at mid-length by embedding a scalpel blade) were prepared for KIC testing using three encapsulated GI products (Chemfil Rock, Fuji IXGP Fast Capsule and Ionofil Molar AC). In addition, test-pieces were prepared from a hand-mixed GI product (Ionofil Molar) which contained between 100% and 20% of the manufacturer's recommended powder content (in 10% decrements) for a constant weight of liquid. Groups of 20 test-pieces were prepared for each encapsulated GI product (n=3) and hand-mixed GI powder:liquid mixing ratio (n=9). Data were statistically analyzed and the coefficients of variation (CoV) determined for each encapsulated GI product and hand-mixed GI powder:liquid mixing ratio. Results: The KIC testing methodology failed to discriminate between the encapsulated GI products that were investigated (p=0.225). For the hand-mixed GI, the KIC testing methodology also failed to discriminate between the powder:liquid mixing ratios investigated (R²=0.576). The pooled CoV (10%) for the encapsulated GI products and for the powder:liquid mixing ratio groups (12%) identified the reproducibility of the test for this experiment. For the hand-mixed GI mixing ratio groups with between 100% to 50% of the recommended powder content, no trend could be discerned. Significance: The KIC testing methodology failed to discriminate between different encapsulated GI products and hand-mixed GI powder:liquid mixing ratio groups investigated, despite KIC being an intrinsic material property and the coefficient of variation being acceptable.
This study investigated the radiopacity values of glass ionomer- and resin-based bulk-fill restoratives of different thicknesses using digital radiography. Two glass ionomer-based and three resin-based bulk-fill restoratives, and a conventional composite were studied. Five disc-shaped specimens were prepared from each of these materials at three different thicknesses; specimens of enamel and dentin with the same thicknesses were also prepared. Materials were placed over a complementary metal oxide-semiconductor sensor together with the tooth specimen and an aluminum step-wedge, and then exposed using a dental X-ray unit. The images were analyzed using a software program to measure the mean gray values (MGVs), which were converted to equivalent aluminum thicknesses. Two-way ANOVA was used to investigate the significance of differences among the groups. The GCP Glass Fill specimens showed the lowest radiopacity values, and the Quixfil specimens had the highest values. All materials had higher radiopacity values than enamel and dentin, except for GCP Glass Fill, which had a radiopacity similar to that of enamel. The resin-based bulk-fill restoratives had significantly higher radiopacity values than glass ionomer-based restoratives. All of the tested materials showed radiopacity values higher than that of dentin, as recommended by the ISO. abstract_id: PUBMED:29364343 Randomized clinical trial of encapsulated and hand-mixed glass-ionomer ART restorations: one-year follow-up. This prospective, randomized, split-mouth clinical trial evaluated the clinical performance of conventional glass ionomer cement (GIC; Riva Self-Cure, SDI), supplied in capsules or in powder/liquid kits and placed in Class I cavities in permanent molars by the Atraumatic Restorative Treatment (ART) approach. A total of 80 restorations were randomly placed in 40 patients aged 11-15 years. Each patient received one restoration with each type of GIC. The restorations were evaluated after periods of 15 days (baseline), 6 months, and 1 year, according to ART criteria. Wilcoxon matched pairs, multivariate logistic regression, and Gehan-Wilcoxon tests were used for statistical analysis. Patients were evaluated after 15 days (n=40), 6 months (n=34), and 1 year (n=29). Encapsulated GICs showed significantly superior clinical performance compared with hand-mixed GICs at baseline (p=0.017), 6 months (p=0.001), and 1 year (p=0.026). For hand-mixed GIC, a statistically significant difference was only observed over the period of baseline to 1 year (p=0.001). Encapsulated GIC presented statistically significant differences for the following periods: 6 months to 1 year (p=0.028) and baseline to 1 year (p=0.002). Encapsulated GIC presented superior cumulative survival rate than hand-mixed GIC over one year. Importantly, both GICs exhibited decreased survival over time. Encapsulated GIC promoted better ART performance, with an annual failure rate of 24%; in contrast, hand-mixed GIC demonstrated a failure rate of 42%. abstract_id: PUBMED:10029460 Glass-ionomer restoratives: a systematic review of a secondary caries treatment effect. It is generally accepted that glass ionomers inhibit secondary caries in vivo, and data from in vitro studies support this effect. The aim of this review was a systematic assessment, from the literature, of clinical evidence for the ability of glass-ionomer restoratives to inhibit secondary caries at the restoration margin. 
Inclusion and exclusion criteria for selection of the review papers were established prior to commencement of the literature search. Papers which conformed to these criteria, and reported on secondary caries as an outcome, were selected (N = 52). Primary and secondary lists of systematic criteria for use in the assessment of the papers were drawn up. The primary list of 14 criteria was applied to each paper. No paper fulfilled all these criteria, necessitating the use of the secondary measures: (i) a prospective study and (ii) use of an appropriate control. This yielded 28 papers. Tabulation of these papers by occurrence of secondary caries in the glass-ionomer or control groups demonstrated an even distribution between positive and negative outcomes. Valid evidence is considered to be best obtained from randomized, controlled studies of sufficient sample size. No conclusive evidence for or against a treatment effect of inhibition of secondary caries by the glass-ionomer restoratives was obtained from the systematic review. There is a need for appraisal of the methods currently adopted for the clinical evaluation of glass-ionomer restorative materials, and for further development of the methodology to support future systematic reviews. Answer: Yes, encapsulated anterior glass-ionomer (GI) restoratives are generally considered to be better than their hand-mixed equivalents. Studies have shown that encapsulated GI restoratives have significantly increased mean compressive fracture strength, mean elastic modulus, and in-vitro wear resistance compared to hand-mixed GI restoratives prepared with powder contents below the manufacturer's recommendations (PUBMED:19059689). Additionally, anhydrous GI formulations are more susceptible to clinically induced mixing variability than conventional formulations containing a polyalkenoic acid liquid, which further favors pre-proportioned, encapsulated presentations (PUBMED:19059689). Encapsulated restoratives prepared with mechanical mixers such as the Capmix or Rotomix have shown favorable performance, and Rotomix mixing in particular was associated with increased Weibull moduli and 10% failure stress for the restoratives investigated (PUBMED:12670073). Moreover, encapsulated GI restoratives are a potential solution to the operator-induced variability associated with hand-mixed GI restoratives (PUBMED:18177702). Furthermore, encapsulated GICs/RMGICs exhibit significantly improved mechanical properties in comparison to manually mixed equivalents, which in turn show higher fluid uptake and early fluoride release (PUBMED:32821430). Encapsulated GICs have also demonstrated superior clinical performance compared with hand-mixed GICs in a randomized clinical trial evaluating ART restorations (PUBMED:29364343). In summary, encapsulated anterior GI restoratives are advocated for use in clinical practice due to their superior mechanical properties and consistent performance, which are less affected by the variability that can be introduced during manual mixing (PUBMED:19059689).
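Several of the abstracts above summarise fracture-strength data with a Weibull modulus as a reliability indicator (PUBMED:12670073, PUBMED:18177702, PUBMED:25593050). As a hedged illustration of how such a figure is typically obtained, and not a reconstruction of any cited study's analysis, the sketch below estimates the Weibull modulus from a set of strength values by a least-squares fit of ln(-ln(1 - F)) against ln(strength), using median-rank failure probabilities; the strength values are invented for demonstration only.

import math

def weibull_modulus(strengths):
    # Estimate the Weibull modulus m and characteristic strength sigma_0 from
    # fracture-strength data via a least-squares fit of ln(-ln(1 - F)) on ln(sigma),
    # with F estimated by the median-rank formula (i - 0.3) / (n + 0.4).
    s = sorted(strengths)
    n = len(s)
    xs = [math.log(v) for v in s]
    ys = [math.log(-math.log(1.0 - (i - 0.3) / (n + 0.4))) for i in range(1, n + 1)]
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    m = sxy / sxx                            # slope of the fit = Weibull modulus
    sigma_0 = math.exp(mean_x - mean_y / m)  # strength at 63.2% failure probability
    return m, sigma_0

# Hypothetical compressive fracture strengths (MPa) for ten nominally identical cylinders
strengths = [118, 124, 131, 135, 140, 142, 147, 151, 158, 163]
m, sigma_0 = weibull_modulus(strengths)
print(f"Weibull modulus m = {m:.1f}, characteristic strength = {sigma_0:.0f} MPa")

A larger modulus indicates less scatter in strength, which is why the cited studies report higher Weibull moduli for mechanically mixed capsules as evidence of more reproducible proportioning and mixing.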
Instruction: High prevalence of coeliac disease in a population-based study from Western Australia: a case for screening? Abstracts: abstract_id: PUBMED:11587254 High prevalence of coeliac disease in a population-based study from Western Australia: a case for screening? Objectives: To determine the prevalence of coeliac disease in an Australian rural community. Design: Retrospective analysis of stored serum samples from 3,011 random subjects from the Busselton Health Study. IgA antiendomysial antibodies (AEA) were detected by indirect immunofluorescence, and subjects testing positive were contacted and offered small-bowel biopsy. Main Outcome Measures: Prevalence of AEA positivity and biopsy-proven coeliac disease in the community with reference to the proportion of symptomatic to asymptomatic patients. Results: 10 of 3,011 subjects were AEA positive. One subject had died, one subject could not be traced and one refused small-bowel biopsy. All subjects with detectable AEA who consented to biopsy had pathological changes consistent with coeliac disease. The prevalence of newly diagnosed biopsy-proven coeliac disease is 7 in 3,011 (1 in 430). Two further subjects had a diagnosis of coeliac disease before this study. When all AEA-positive patients and those previously diagnosed are included, the prevalence is 12/3,011 (1 in 251). There was a significant clustering of cases in the 30-50-years age range, with 10/12 (83%; 95% CI, 52%-98%) aged between 30 and 50 years, compared with 1,092/3,011 (36%; 95% CI, 35%-38%) of the total population (P<0.03). Of the eight AEA-positive subjects who could be contacted, four had symptoms consistent with coeliac disease and four were asymptomatic. Three subjects were iron-deficient, four subjects had first-degree relatives with coeliac disease and one subject had type 1 diabetes mellitus. Conclusions: The prevalence of coeliac disease is high in a rural Australian community. Most patients are undiagnosed, and asymptomatic. abstract_id: PUBMED:11569698 Prevalence of celiac disease in Argentina: screening of an adult population in the La Plata area. Objectives: Up to now, the epidemiological characteristic of celiac disease among adults in South America remains unknown. The present prospective screening was designed to determine the prevalence of celiac disease in adults from the general population in an urban area of Argentina. Methods: Between January 1998 and May 2000, all couples attending a centralized laboratory for an obligatory prenuptial examination in the La Plata area were offered participation in a screening program for celiac disease. The study included 2000 subjects (996 women; median age 29 yr, range 16-79 yr). All individuals completed a clinical questionnaire at the time that serum samples were obtained. A three-step screening protocol was used, as follows: 1) all samples were tested for antigliadin antibodies (AGAs) (type IgA and IgG); 2) samples that were IgA AGA positive were tested for antiendomysial antibody (EmA type IgA); samples that were positive for AGA-G but negative for IgA AGAs were tested for total IgA serum levels and EmA type IgG; and 3) subjects who were EmA-positive were referred for intestinal biopsy. Results: At the end of the screening we detected 10 subjects who were EmA-A positive and two others who were IgA-deficient (both were EmA-G positive). Up to now, 11 of the 12 subjects (including nine EmA-positive and two IgA-deficient subjects) had endoscopic intestinal biopsies showing the characteristic celiac histology.
The remaining EmA-positive individual was considered to be affected by celiac disease. The overall prevalence assessed was 1:167 (6.0 per 1000 subjects; 95% CI = 3.1-10.5). Eight of the 12 (67%) subjects were female (1:124; 8.0 per 1000; 95% CI = 3.5-15.8) and four (33%) were male (1:251; 4.0 per 1000; 95% CI = 1.1-10.2). Although eight new patients were considered to be asymptomatic, three presented with a subclinical course and one was classically symptomatic. Only one patient had been previously diagnosed with celiac disease. Conclusions: Our screening protocol showed a very high prevalence of celiac disease for an urban area of Argentina that is ethnically similar to 90% of the general population of the country. The prevalence among women was double that for men, and the heterogeneous clinical picture of new patients showed predominance of asymptomatic cases. abstract_id: PUBMED:10095818 High prevalence of undiagnosed coeliac disease in adults: a Swedish population-based study. Objective: To determine the prevalence of coeliac disease in a population-based sample of Swedish adults. Design: Population-based cross-sectional study. Setting: Northern Sweden. Subjects: A total of 1894 adults (76%) out of 2500 invited, randomly selected from the population register after stratification for age and sex. Main Outcome Measures: Prevalence of biopsy verified coeliac disease, symptoms of undiagnosed cases, and results of antiendomysium and antigliadin serum antibody tests. Results: Coeliac disease was confirmed by intestinal biopsy showing enteropathy in 10 individuals (seven women and three men), corresponding to a prevalence of 5.3 per 1000 (95% CI = 2.5-9.7). The majority of cases (eight out of 10) had not been diagnosed prior to the screening, although many had symptoms compatible with coeliac disease. All individuals with antiendomysium antibody positivity who were subjected to a small intestinal biopsy had enteropathy. Furthermore, all of them also had elevated levels of antigliadin antibodies type IgA and/or IgG. Conclusions: Coeliac disease is common, albeit mostly undiagnosed, in Swedish adults. It is likely that the situation is no better in other countries. This highlights the importance of keeping coeliac disease in mind, and of promptly investigating individuals with unexplained, even mild, symptoms compatible with the disease. Serological markers, e.g. antigliadin and antiendomysium antibodies, are useful tools within this active case-finding strategy, although the final diagnosis should be based on an intestinal biopsy demonstrating enteropathy. abstract_id: PUBMED:19558729 Increasing prevalence and high incidence of celiac disease in elderly people: a population-based study. Background: Celiac disease may emerge at any age, but little is known of its appearance in elderly people. We evaluated the prevalence of the condition in individuals over 55 years of age, and determined the incidence of biopsy-proven celiac disease (CDb) and celiac disease including seropositive subjects for anti-tissue transglutaminase antibodies (CDb+s). Methods: The study was based on prevalence figures in 2815 randomly selected subjects who had undergone a clinical examination and serologic screening for celiac disease in 2002. A second screening in the same population was carried out in 2005, comprising now 2216 individuals. Positive tissue transglutaminase antibodies were confirmed with small bowel biopsy.
Results: Within three years the prevalence of CDb increased from 2.13 to 2.34%, and that of CDb+s from 2.45 to 2.70%. Five new cases were found among patients previously seronegative; two had minor abdominal symptoms and three were asymptomatic. The incidence of celiac disease in 2002-2005 was 0.23%, giving an annual incidence of 0.08% in this population. Conclusion: The prevalence of celiac disease was high in elderly people, but the symptoms were subtle. Repeated screening detected five biopsy-proven cases in three years, indicating that the disorder may develop even in the elderly. Increased alertness to the disorder is therefore warranted. abstract_id: PUBMED:23600795 Screening for coeliac disease in preschool Greek children: the feasibility study of a community-based project. Aim: Evaluation of the prevalence of coeliac disease (CD) in Greek paediatric population. Methods: The project consists of two parts: (i) a pilot study of preschool children aged 2-6 years to test the feasibility and diagnostic accuracy of community-based screening and (ii) a CD prevalence study, by random clustered sampling and proportionate stratification of various geographical areas in Greece. Trained nonmedical staff performed a rapid immunochromatographic test to detect IgA antibodies to tTG-IgA and IgA deficiency. Toddlers with positive results were referred to a paediatric gastroenterologist for further assessment with serum anti-tTG IgA and EMA-IgA. Children with positive serum anti-tTG and anti-EMA underwent upper gastrointestinal tract endoscopy and small bowel biopsy and were subsequently placed on a gluten-free diet. Results: A total of 1136 toddlers participated in this project and were tested at school. The prevalence of positive rapid anti-tTG screening was 1:154, of IgA deficiency 1:120 and of biopsy-proven CD 1:154. The prevalence of CD from this pilot study served as expected prevalence value for sample size calculation for the main prevalence study. Conclusion: This protocol, using a rapid immunochromatographic test for the detection of both IgA deficiency and CD, is easy for nonmedical staff to perform in a community setting, enabling the accurate identification of new CD cases among asymptomatic population. abstract_id: PUBMED:33504878 Prevalence of celiac disease in low and high risk population in Asia-Pacific region: a systematic review and meta-analysis. This systematic review and meta-analysis study was conducted to estimate the pooled prevalence of CD in low and high risk groups in this region. The following keywords were searched in the Medline, PubMed, Scopus, Web of Science and Cochrane database according to the MeSH terms; celiac disease, prevalence, high risk population and Asian-Pacific region. Prevalence studies published from January 1991 to March 2018 were selected. Prevalence of CD with 95% confidence interval (CI) was calculated using STATA software, version 14. The pooled sero-prevalence of CD among low risk group in Asia-Pacific region was 1.2% (95% CI 0.8-1.7%) in 96,099 individuals based on positive anti-tissue transglutaminase (anti-t-TG Ab) and/or anti-endomysial antibodies (EMA). The pooled prevalence of biopsy proven CD in Asia-Pacific among high and low risk groups was 4.3% (95% CI 3.3-5.5%) and 0.61% (95% CI 0.4-0.8%) in 10,719 and 70,344 subjects, respectively. In addition, the pooled sero-prevalence and prevalence of CD in the general population was significantly higher in children compared with adults and it was significantly greater in female vs. male (P < 0.05).
Our results suggest that high-risk individuals for CD are a key group that should be specifically targeted for prevention and control measures, and screening of this group may prove to have an optimal cost-benefit ratio. abstract_id: PUBMED:25922671 Screening for celiac disease in the general population and in high-risk groups. Background: Celiac disease (CD) occurs in approximately 1% of the Western population. It is a lifelong disorder that is associated with impaired quality of life (QOL) and an excessive risk of comorbidity and death. Objectives: To review the literature on screening for CD in relation to the current World Health Organization (WHO) criteria for mass screening. Methods: We performed a PubMed search to identify indexed papers on CD screening with a publication date from 1900 until 1 June 2014. When we deemed an abstract relevant, we read the corresponding paper in detail. Results: CD fulfills several WHO criteria for mass screening (high prevalence, available treatment and difficult clinical detection), but it has not yet been established that treatment of asymptomatic CD may reduce the excessive risk of severe complications, leading to higher QOL nor that it is cost-effective. Conclusions: Current evidence is not sufficient to support mass screening for CD, but active case-finding may be appropriate, as we recognize that most patients with CD will still be missed by this strategy. Although proof of benefit is still lacking, screening for CD may be appropriate in high-risk groups. abstract_id: PUBMED:34310439 High Prevalence of Celiac Disease Among Danish Adolescents: A Population-based Study. Objectives: The objective of this study was to establish an unselected cohort of Danish adolescents and estimate the prevalence of undiagnosed celiac disease (CeD). Methods: The Glutenfunen cohort participants were recruited from an unselected subsample of the Danish National Birth Cohort, defined as participants living in the Island of Funen, Denmark. We invited all 7431 eligible participants in the age range of 15 to 21 years to a clinical visit. CeD diagnosis was based on screening with IgA transglutaminase antibodies (TG2-IgA) and if positive, was followed by duodenal biopsies compatible with CeD (Marsh 2-3). We calculated the prevalence of CeD in the Glutenfunen cohort as the number of CeD cases diagnosed before and during the study divided by the number of participants in the Glutenfunen cohort. Results: We included 1266 participants in the Glutenfunen cohort (17%, 1266/7431). 1.1% (14 of 1266 participants) had CeD diagnosed before entering the cohort and based on the Danish National Patient Register, 0.2% of the nonparticipants (14 of 6165) had a diagnosis of CeD. In total, 2.6% (33 participants) had TG2 IgA above the upper limit of normal. Nineteen participants had duodenal biopsies compatible with CeD. The prevalence of CeD in the Glutenfunen cohort was 2.6% [(14 + 19)/1266]. Conclusions: Our study suggests that CeD is much more common than expected among Danish adolescents, comparable to other European countries, and that the majority were asymptomatic or oligosymptomatic and were only found because of the screening procedure. abstract_id: PUBMED:17625439 Prevalence of celiac disease in Tunisia: mass-screening study in schoolchildren. Background: Celiac disease is reported to be common among North Africans, particularly Tunisians. Nevertheless, the prevalence of coeliac disease in the general population has not been previously investigated.
Objective: This study aimed to determine the prevalence of celiac disease among children in Tunisia and to describe the clinical profile of the screened patients. Methods: A mass screening study based on drawing lots was carried out on schoolchildren in Ariana, a Tunisian district. A participation agreement was obtained from 6286 children (3175 boys, age: 9.7 ± 3 years). Two children with known celiac disease were present in this population. All participants were tested for IgA antitissue transglutaminase antibodies (IgA-tTG) by a commercial enzyme-linked immunosorbent assay (ELISA) and total IgA levels. Sera, found positive by the initial screening, were assessed by immunofluorescence for the presence of IgA antiendomysium antibodies (IgA-AE). Positive participants were also called in for serological control, intestinal biopsy, biological exploration (hemoglobin rate, calcemia and albuminemia) and bone mineral densitometry. Results: Among the 6284 participants, 139 (1/45) were positive for IgA-tTG. Forty-two of these had low-level IgA-tTG and no one had IgA deficiency. IgA-AE was detected in 40 participants. One hundred and seven children were called in, 28 had both positive tests (IgA-tTG +/IgA-AE+) and 79 were only positive for IgA-tTG (IgA-tTG +/IgA-AE-). Intestinal biopsy was performed in the 28 participants of the first group (IgA-tTG +/IgA-AE+) and confirmed celiac disease in 26 cases. In the second group (IgA-tTG +/IgA-AE-), intestinal biopsy was performed in 26 children and histological examination was normal in all cases. Among the 26 biopsy-proven celiac disease children, six (23%) had typical clinical symptoms of celiac disease, whereas the others had atypical forms with 11 (42%) asymptomatic. In 23 biopsy-proven celiac disease children, bone mineral density was significantly lower than that of a group of 109 normal children (0.850 ± 0.06 g/cm² versus 0.912 ± 0.06 g/cm², P<0.05). Seven participants (30.4%) among the celiac disease children and six (7.5%) among the controls had a total-body Z score for bone mineral density of <-2 (P<0.001). Conclusion: The prevalence of celiac disease in Tunisian schoolchildren, estimated to be about 1/157, is close to the European prevalence. Most of the screened children showed an atypical and asymptomatic form, but even the typical forms were underdiagnosed. Osteopenia was frequently observed in celiac disease patients. abstract_id: PUBMED:15187786 Prospective population screening for celiac disease: high prevalence in the first 3 years of life. Background: Celiac disease (CD) is an autoimmune enteropathy that develops in genetically susceptible individuals exposed to gliadin. Early diagnosis of CD may reduce the risk of complications, and several studies have related the duration of gluten exposure to the risk of other autoimmune diseases. It has been proposed that silent CD be diagnosed as soon as possible to avoid potential complications. Objectives: The purpose of this study was to determine the prevalence of CD among children less than 3 years and to provide treatment to those patients diagnosed with CD. Patients And Methods: Parents of 1100 healthy children born between October 1998 and December 1999 were asked at the time of delivery to enroll their children in a program for the early diagnosis of CD. The parents of 830 children agreed to participate. Patients in the study were examined and anti-tissue transglutaminase antibody was first measured at about 1.5 years of age.
A second antibody titer was obtained at about 2.5 years of age. Patients with detectable autoantibodies underwent intestinal biopsy for confirmation of CD. Results: Of the 830 children initially enrolled, 613 and 484 returned for the first and second visits, respectively. None had anti-tissue transglutaminase antibodies at the first visit, but 9 had anti-tissue transglutaminase immunoglobulins at the second visit. In 7 of these 9, intestinal biopsy confirmed the diagnosis of CD which suggests a minimum prevalence of CD of 1 per 118 healthy newborns. Conclusions: The authors observed a very high prevalence of CD, comparable to that observed in other European populations, which might even be higher if all of the children initially examined had returned for their second visit. If general screening for CD were accepted, the authors would recommend age 2-3 years as the best time for measuring tissue transglutaminase antibodies. Answer: The study conducted in Western Australia found a high prevalence of coeliac disease in a rural community, with most patients being undiagnosed and asymptomatic. The prevalence of newly diagnosed biopsy-proven coeliac disease was 1 in 430, and when including all AEA-positive patients and those previously diagnosed, the prevalence was 1 in 251. There was a significant clustering of cases in the 30-50 years age range. The study concluded that the prevalence of coeliac disease is high in this community, suggesting a potential case for screening (PUBMED:11587254).
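The prevalence figures quoted in these abstracts and in the answer (for example, 7 biopsy-proven cases among 3,011 subjects, reported as roughly 1 in 430) are simple proportions, and their 95% confidence intervals can be reproduced from the raw counts. The short sketch below is illustrative only: it uses the Wilson score interval, which is one common choice, and the cited studies do not state which interval method they applied.

import math

def prevalence_with_ci(cases, n, z=1.96):
    # Point prevalence and Wilson score 95% confidence interval for a proportion.
    p = cases / n
    denom = 1.0 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return p, centre - half, centre + half

# Busselton counts from PUBMED:11587254: 7 newly diagnosed cases among 3,011 subjects
p, lo, hi = prevalence_with_ci(7, 3011)
print(f"prevalence {p:.4f} (about 1 in {1 / p:.0f}), "
      f"95% CI {1000 * lo:.1f} to {1000 * hi:.1f} per 1000")

Applied to 12 cases among 3,011 subjects, the same function gives approximately 1 in 251, matching the combined figure quoted for the Busselton cohort.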
Instruction: Does traumatic subarachnoid hemorrhage caused by diffuse brain injury cause delayed ischemic brain damage? Abstracts: abstract_id: PUBMED:9802847 Does traumatic subarachnoid hemorrhage caused by diffuse brain injury cause delayed ischemic brain damage? Comparison with subarachnoid hemorrhage caused by ruptured intracranial aneurysms. Objective: To examine whether traumatic subarachnoid hemorrhage (TSAH) caused by severe diffuse brain injury leads to delayed ischemic brain damage and secondary deterioration of outcome, as does aneurysmal subarachnoid hemorrhage (ASAH). Methods: We examined 99 patients with diffuse brain injury with TSAH and 114 patients with ASAH. Computed tomographic (CT) findings, cerebral blood flow, and neurological outcomes were assessed during the acute and subacute phases and were compared between the two groups. Results: The distribution of subarachnoid hemorrhage on the CT scans differed between the two groups. Unlike ASAH, TSAH was not limited to cisterns surrounding the circle of Willis but extended to supratentorial regions and interhemispheric fissures. Computed tomography-detected subarachnoid hemorrhage disappeared very early with TSAH and gradually with ASAH. In the ASAH group, mean cerebral blood flow decreased to 75% of normal during the acute phase and decreased a further 10% during the subacute phase. In the TSAH group, mean cerebral blood flow decreased to 85% of normal during the acute phase and increased slightly during the subacute phase. Neurological deterioration and in-hospital death peaked on Day 0 in association with TSAH and showed twin peaks in association with ASAH. The incidence of low-density areas on the CT scans was significantly higher with ASAH than with TSAH. All low-density areas on the CT scans of patients with ASAH corresponded to vascular territories, but low-density areas on the CT scans of patients with TSAH were rarely associated with vascular territories and contained deep-seated or gliding contusion types. Conclusion: The findings suggest that the incidence of vasospasm is low in association with TSAH and that the cause is different compared with ASAH. There is no evidence that the presence of TSAH in cases of diffuse brain injury leads to delayed ischemic brain damage and secondary deterioration of outcome. abstract_id: PUBMED:15251149 The value of autopsy in determining the cause of failure to respond to resuscitation at birth. Autopsy is invaluable in identifying the causes of severe depression and very low Apgar score after birth and in assessing contributory conditions. Brain scans are increasingly used in the care of neonates who fail to respond to resuscitation at birth but their interpretation depends on the information gained from sound neuropathological studies. Asphyxia, both acute intrapartum asphyxia and chronic asphyxia, is an important cause of low Apgar scores. The gestational age and the nature of the asphyxial insult both have a profound influence on the ultimate pattern of injury. Asphyxia in the preterm brain tends to damage preferentially the white matter but some white matter damage is also seen in many infants who have an hypoxia-ischaemic insult at term though the predominant site of injury is to the central grey matter. The nature of the cellular damage and reactive change seen at autopsy is described. There is an association between low Apgar scores and intrauterine exposure to infection and maternal pyrexia. Detailed autopsy examination should include the search for infection. 
The placenta, cord and membranes should be examined in view of the mounting evidence of the association between intrauterine infection of the placenta and fetal membranes and prenatal brain damage. Additionally, the presence of placental thrombosis and infarction should be sought in relation to focal and global injury in the full term infant. Acquired prepartum lesions rarely cause the infant to present with a low Apgar score. The exception to this is severe damage to the brainstem and basal ganglia. Traumatic injury to the brain is now much less common than in previous decades. Subdural haemorrhage occurs more frequently than intraventricular or subarachnoid haemorrhage. Instrumental and assisted deliveries are associated with an increased incidence of subdural haemorrhage though these rarely cause significant long term damage. Careful autopsy, particularly of the neck and paravertebral tissues, spinal cord, brainstem and nerve roots is important where trauma is suspected. Tearing of nerve roots or fibre bundles in the spinal cord is readily demonstrated under the microscope using immunocytochemistry to beta-amyloid precursor protein. Disorders of the spinal cord, peripheral nerve and muscle as well as some metabolic diseases may cause a baby to be both floppy and weak. Metabolic disease, including peroxisomal disorders, non-ketotic hyperglycinaemia, lipid and glycogen storage disorders and mitochondrial diseases may cause profound hypotonia and respiratory failure at birth or shortly afterwards. abstract_id: PUBMED:38396252 Similarities in the Electrographic Patterns of Delayed Cerebral Infarction and Brain Death After Aneurysmal and Traumatic Subarachnoid Hemorrhage. While subarachnoid hemorrhage is the second most common hemorrhagic stroke in epidemiologic studies, the recent DISCHARGE-1 trial has shown that in reality, three-quarters of focal brain damage after subarachnoid hemorrhage is ischemic. Two-fifths of these ischemic infarctions occur early and three-fifths are delayed. The vast majority are cortical infarcts whose pathomorphology corresponds to anemic infarcts. Therefore, we propose in this review that subarachnoid hemorrhage as an ischemic-hemorrhagic stroke is rather a third, separate entity in addition to purely ischemic or hemorrhagic strokes. Cumulative focal brain damage, determined by neuroimaging after the first 2 weeks, is the strongest known predictor of patient outcome half a year after the initial hemorrhage. Because of the unique ability to implant neuromonitoring probes at the brain surface before stroke onset and to perform longitudinal MRI scans before and after stroke, delayed cerebral ischemia is currently the stroke variant in humans whose pathophysiological details are by far the best characterized. Optoelectrodes located directly over newly developing delayed infarcts have shown that, as mechanistic correlates of infarct development, spreading depolarizations trigger (1) spreading ischemia, (2) severe hypoxia, (3) persistent activity depression, and (4) transition from clustered spreading depolarizations to a negative ultraslow potential. Furthermore, traumatic brain injury and subarachnoid hemorrhage are the second and third most common etiologies of brain death during continued systemic circulation. Here, we use examples to illustrate that although the pathophysiological cascades associated with brain death are global, they closely resemble the local cascades associated with the development of delayed cerebral infarcts. 
abstract_id: PUBMED:12378998 Role of hypothermia in the management of severe cases of subarachnoid hemorrhage. Mild hypothermia is thought to have a brain-protective effect against pathophysiological conditions caused by severe brain damage, including brain injury and cerebral stroke. In this paper, general aspects of this treatment, such as its history, pathophysiological effects, and problems, are summarized. Also, the clinical effects of hypothermic therapy for a subarachnoid hemorrhage are reviewed. Main targets of the therapy for this disease are severe primary brain damage caused by the attack itself and secondary ischemic brain damage after delayed vasospasm. Even now, however, there are no fully established data about the effect of hypothermia in such conditions after subarachnoid hemorrhage. The results of our study of cerebral blood flow and cerebral oxygen metabolism using positron emission tomography are presented to show the physiological effect of hypothermia on human brain after severe brain damage caused by subarachnoid hemorrhage. In conclusion, the effect of hypothermia on subarachnoid hemorrhage has not yet been confirmed and the reported data are limited, so additional studies, especially controlled studies, are recommended. abstract_id: PUBMED:34500101 Local Application of Magnesium Sulfate Solution Suppressed Cortical Spreading Ischemia and Reduced Brain Damage in a Rat Subarachnoid Hemorrhage-Mimicking Model. Objective: Cortical spreading depolarization (CSD), cortical spreading ischemia (CSI), and early brain injury are involved in the occurrence of delayed brain ischemia after subarachnoid hemorrhage (SAH). We tested whether local application of magnesium (Mg) sulfate solution suppressed CSD and CSI, and decreased brain damage in a rat SAH-mimicking model. Methods: The nitric oxide synthase inhibitor L-NG-nitroarginine methyl ester (L-NAME) and a high-concentration potassium solution were topically applied to simulate the environment after SAH. We irrigated the parietal cortex with artificial cerebrospinal fluid (ACSF), containing L-NAME (1 mM), K+ (35 mM), and Mg2+ (5 mM). Forty-five rats were divided into 3 groups: sham surgery (sham group), L-NAME + [K+]ACSF (control group), and L-NAME + [K+]ACSF + [Mg2+] (Mg group). CSD was induced by topical application of 1 M KCl solution in 3 groups. The effects of Mg administration on CSD and cerebral blood flow were evaluated. Histological brain tissue damage, body weight, and neurological score were assessed at 2 days after insult. Results: Mg solution significantly shortened the total depolarization time, and reduced CSI, histological brain damage, and brain edema compared with those of the control group (P < 0.05). Body weight loss was significantly suppressed in the Mg group (P < 0.05), but neurological score did not improve. Conclusions: Local application of Mg suppressed CSI and reduced brain damage in a rat SAH-mimicking model. Mg irrigation therapy may be beneficial to suppress brain damage due to CSI after SAH. abstract_id: PUBMED:35411920 Spreading depolarizations in ischaemia after subarachnoid haemorrhage, a diagnostic phase III study. Focal brain damage after aneurysmal subarachnoid haemorrhage predominantly results from intracerebral haemorrhage, and early and delayed cerebral ischaemia.
The prospective, observational, multicentre, cohort, diagnostic phase III trial, DISCHARGE-1, primarily investigated whether the peak total spreading depolarization-induced depression duration of a recording day during delayed neuromonitoring (delayed depression duration) indicates delayed ipsilateral infarction. Consecutive patients (n = 205) who required neurosurgery were enrolled in six university hospitals from September 2009 to April 2018. Subdural electrodes for electrocorticography were implanted. Participants were excluded on the basis of exclusion criteria, technical problems in data quality, missing neuroimages or patient withdrawal (n = 25). Evaluators were blinded to other measures. Longitudinal MRI, and CT studies if clinically indicated, revealed that 162/180 patients developed focal brain damage during the first 2 weeks. During 4.5 years of cumulative recording, 6777 spreading depolarizations occurred in 161/180 patients and 238 electrographic seizures in 14/180. Ten patients died early; 90/170 developed delayed infarction ipsilateral to the electrodes. The primary objective was to investigate whether a 60-min delayed depression duration cut-off in a 24-h window predicts delayed infarction with >0.60 sensitivity and >0.80 specificity, and to estimate a new cut-off. The 60-min cut-off was too short. Sensitivity was sufficient [= 0.76 (95% confidence interval: 0.65-0.84), P = 0.0014] but specificity was 0.59 (0.47-0.70), i.e. <0.80 (P < 0.0001). Nevertheless, the area under the receiver operating characteristic (AUROC) curve of delayed depression duration was 0.76 (0.69-0.83, P < 0.0001) for delayed infarction and 0.88 (0.81-0.94, P < 0.0001) for delayed ischaemia (reversible delayed neurological deficit or infarction). In secondary analysis, a new 180-min cut-off indicated delayed infarction with a targeted 0.62 sensitivity and 0.83 specificity. In awake patients, the AUROC curve of delayed depression duration was 0.84 (0.70-0.97, P = 0.001) and the prespecified 60-min cut-off showed 0.71 sensitivity and 0.82 specificity for reversible neurological deficits. In multivariate analysis, delayed depression duration (β = 0.474, P < 0.001), delayed median Glasgow Coma Score (β = -0.201, P = 0.005) and peak transcranial Doppler (β = 0.169, P = 0.016) explained 35% of variance in delayed infarction. Another key finding was that spreading depolarization variables were included in every multiple regression model of early, delayed and total brain damage, patient outcome and death, strongly suggesting that they are an independent biomarker of progressive brain injury. While the 60-min cut-off of cumulative depression in a 24-h window indicated reversible delayed neurological deficit, only a 180-min cut-off indicated new infarction with >0.60 sensitivity and >0.80 specificity. Although spontaneous resolution of the neurological deficit is still possible, we recommend initiating rescue treatment at the 60-min rather than the 180-min cut-off if progression of injury to infarction is to be prevented.
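The DISCHARGE-1 abstract above reports how well the delayed depression duration predicts delayed infarction in terms of sensitivity and specificity at 60- and 180-minute cut-offs and the area under the ROC curve (AUROC). As a hedged illustration of how such figures are derived, and using invented numbers rather than trial data, the sketch below dichotomises the duration at a cut-off to obtain sensitivity and specificity, and computes the AUROC as the rank-based probability that a randomly chosen infarct case has a longer depression duration than a randomly chosen non-infarct control.

def cutoff_performance(durations, infarct, cutoff):
    # Sensitivity and specificity when "duration >= cutoff" is read as a positive test.
    tp = sum(1 for d, y in zip(durations, infarct) if y and d >= cutoff)
    fn = sum(1 for d, y in zip(durations, infarct) if y and d < cutoff)
    tn = sum(1 for d, y in zip(durations, infarct) if not y and d < cutoff)
    fp = sum(1 for d, y in zip(durations, infarct) if not y and d >= cutoff)
    return tp / (tp + fn), tn / (tn + fp)

def auroc(durations, infarct):
    # Rank-based AUROC: probability that a random case outranks a random control (ties count 0.5).
    cases = [d for d, y in zip(durations, infarct) if y]
    controls = [d for d, y in zip(durations, infarct) if not y]
    wins = sum(1.0 if c > k else 0.5 if c == k else 0.0 for c in cases for k in controls)
    return wins / (len(cases) * len(controls))

# Invented example: peak daily depression duration (minutes) and delayed-infarction flag
durations = [0, 20, 40, 45, 70, 90, 150, 200, 260, 300]
infarct = [False, False, False, False, True, False, True, True, True, True]

for cut in (60, 180):
    sens, spec = cutoff_performance(durations, infarct, cut)
    print(f"cut-off {cut} min: sensitivity {sens:.2f}, specificity {spec:.2f}")
print(f"AUROC = {auroc(durations, infarct):.2f}")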
Increased levels of these cytokines, however, have not been detected thus far in patients following an SAH event. For this reason daily cerebrospinal fluid (CSF) samples were collected from 22 consecutively enrolled patients with SAH and from 10 non-SAH patients (controls). The CSF samples were studied using immunoassays for IL-1Ra and TNF alpha to investigate whether an SAH caused increased cytokine levels. The mean IL-1Ra levels were significantly higher in patients with SAH who were in poor clinical condition on admission than in those who were in good condition (318 pg/ml vs. 82 pg/ml, p &lt; 0.02). The IL-1Ra levels increased during delayed ischemic episodes and after surgery in patients who were in poor clinical condition. Significant increases in IL-1Ra and TNF alpha were detected during Days 4 through 10 in patients suffering from SAH who eventually had a poor outcome (p &lt; 0.05). Patients with good outcomes and control patients had low levels of these cytokines. The levels of IL-1Ra increased after surgery in patients with Hunt and Hess Grades III through V, but not in those with Grade I or II. This finding indicates that patients in poor clinical condition have a labile biochemical state in the brain that is reflected in increased cytokine levels following the surgical trauma. Both IL-1Ra and TNF alpha are known to induce fever, malaise, leukocytosis, and nitric oxide synthesis and to mediate ischemic and traumatic brain injuries. The present study shows that levels of these cytokines increase after SAH occurs and that high cytokine levels correlate with brain damage. It is therefore likely that fever, leukocytosis, and nitric oxide synthesis are also mediated by IL-1 in patients suffering from SAH and it is probable that the inflammatory mediators contribute to brain damage. abstract_id: PUBMED:847199 Evaluation of head trauma by computed tomography. A wide spectrum of traumatic abnormalities has been demonstrated by computed tomography. CT findings in acute trauma range from cerebral edema and contusion to intracerebral or extracerebral hematoma, subarachnoid and intraventricular hemorrhage and varying combinations thereof. Late traumatic changes shown by CT include chronic subdural hygroma, post-traumatic infarct, and porencephaly and hydrocephalus. The latter findings are indistinguishable from changes resulting from degenerative or ischemic brain disease. Follow-up CT scanning elucidates the natural course of various traumatic lesions. Occasionally, it may uncover occult, delayed sequelae which can be remedied surgically before irreparable brain damage occurs. abstract_id: PUBMED:8092584 Subarachnoid hemorrhage: cerebral damage, fluid balance, intracranial pressure and pressure-volume relation Changes in osmolality and electrolyte concentrations are observed frequently in patients with subarachnoid haemorrhage (SAH). Intracranial pressure (ICP) plays a determinant role in the development of secondary brain damage following SAH and may be caused by haemorrhage itself, oedema formation and disturbance of cerebrospinal fluid (CSF) dynamics. The relationships among these factors are the aim of this investigation. In 17 comatose SAH patients, ICP was monitored through a ventricular catheter; serial of pressure-volume index (PVI) and CSF formation and reabsorption were performed. Arterio-jugular differences for oxygen and lactate were measured. 
The average ICP recorded for each 12 hour interval was 18.9 mmHg (SD = 5.9); mean cerebral perfusion pressure (CPP) was 75 mmHg (SD = 13); the lowest CPP value was 30 mmHg. Mean PVI was 22.7 mL (SD = 7.4), ranging from 5 to 36. Eleven patients however, showed a PVI less than 15 mL at some point during testing. Values of CSF dynamics indicated disturbances of CSF reabsorption in 11 cases. When the cause of ICP rise was identified in CSF disturbances, treatment was successful, even in case of reduced PVI. Mean C(a-v)O2, corrected for a PaCO2 of 40 mmHg, was 3.7 mL.dL-1 (SD = 1.1) ranging from the extremely low value of 0.2 to 6.8 mL.L-1. Three patients with extremely low C(a-v)O2 values showed a cerebral production of lactate and developed areas of ischaemia on the CT scan. Hyponatraemia, considered as a sodium plasma concentration of less than 135 mmol.L-1, was detected in seven patients. Hyponatraemia was treated by infusion of hypertonic sodium solutions. Mannitol (1 g.kg-1.d-1 in four doses) was infused if the sodium plasma concentration was not corrected by the former treatment or if ICP exceeded 20 mmHg. Treatment was aimed at preserving cerebral perfusion by providing adequate pre-load, low viscosity (Ht 30%) and sustained arterial pressure. Correction of hyponatraemia was therefore achieved more through hypertonic fluids infusion than by using diuretics. abstract_id: PUBMED:11844260 Traumatic subarachnoid hemorrhage: demographic and clinical study of 750 patients from the European brain injury consortium survey of head injuries. Objective: Previous reports identified the presence of traumatic subarachnoid hemorrhage (tSAH) on admission computed tomographic (CT) scans as an independent prognostic factor in worsening outcomes. The mechanism underlying the link between tSAH and prognosis has not been clarified. The aim of this study was to investigate the association between CT evidence of tSAH and outcomes after moderate or severe head injuries. Methods: In a survey organized by the European Brain Injury Consortium, data on initial severity, treatment, and subsequent outcomes were prospectively collected for 1005 patients with moderate or severe head injuries who were admitted to one of the 67 European neurosurgical units during a 3-month period in 1995. The CT findings were classified according to the Traumatic Coma Data Bank classification system, and the presence or absence of tSAH was recorded separately in the initial CT scan forms. Results: Complete data on early clinical features, CT findings, and outcomes at 6 months were available for 750 patients, of whom 41% exhibited evidence of tSAH on admission CT scans. There was a strong, highly statistically significant association between the presence of tSAH and poor outcomes. In fact, 41% of patients without tSAH achieved the level of good recovery, whereas only 15% of patients with tSAH achieved this outcome. Patients with tSAH were significantly older (median age, 43 yr; standard deviation, 21.1 yr) than those without tSAH (median age, 32 yr; standard deviation, 19.5 yr), and there was a significant tendency for patients with tSAH to exhibit lower Glasgow Coma Scale scores at the time of admission. A logistic regression analysis of favorable/unfavorable outcomes demonstrated that there was still a very strong association between tSAH and outcomes after simultaneous adjustment for age, Glasgow Coma Scale Motor Scores, and admission CT findings (odds ratio, 2.49; 95% confidence interval, 1.74-3.55; P &lt; 0.001). 
Comparison of the time courses for 164 patients with early (within 14 d after injury) deaths demonstrated very similar patterns, with an early peak and a subsequent decline; there was no evidence of a delayed increase in mortality rates for either group of patients (with or without tSAH). Conclusion: These findings for an unselected series of patients confirm previous reports of the adverse prognostic significance of tSAH. The data support the view that death among patients with tSAH is related to the severity of the initial mechanical damage, rather than to the effects of delayed vasospasm and secondary ischemic brain damage. Answer: The evidence suggests that traumatic subarachnoid hemorrhage (TSAH) caused by severe diffuse brain injury does not lead to delayed ischemic brain damage and secondary deterioration of outcome in the same way as aneurysmal subarachnoid hemorrhage (ASAH) does. A study comparing 99 patients with diffuse brain injury with TSAH and 114 patients with ASAH found that the distribution of subarachnoid hemorrhage on CT scans differed between the two groups. TSAH was not limited to cisterns surrounding the circle of Willis but extended to supratentorial regions and interhemispheric fissures. CT-detected subarachnoid hemorrhage disappeared very early with TSAH and gradually with ASAH. In the ASAH group, mean cerebral blood flow decreased significantly during the acute phase and further during the subacute phase. In contrast, in the TSAH group, mean cerebral blood flow decreased to a lesser extent during the acute phase and increased slightly during the subacute phase. Neurological deterioration and in-hospital death peaked on Day 0 in association with TSAH and showed twin peaks in association with ASAH. The incidence of low-density areas on CT scans was significantly higher with ASAH than with TSAH, and all low-density areas on the CT scans of patients with ASAH corresponded to vascular territories. However, low-density areas on the CT scans of patients with TSAH were rarely associated with vascular territories and contained deep-seated or gliding contusion types. These findings suggest that the incidence of vasospasm is low in association with TSAH and that the cause is different compared with ASAH. There is no evidence that the presence of TSAH in cases of diffuse brain injury leads to delayed ischemic brain damage and secondary deterioration of outcome (PUBMED:9802847).
Instruction: Are there any specific ultrasound findings of nodular hyperplasia ("leave me alone" lesion) to differentiate it from follicular adenoma? Abstracts: abstract_id: PUBMED:19241229 Are there any specific ultrasound findings of nodular hyperplasia ("leave me alone" lesion) to differentiate it from follicular adenoma? Background: Among benign thyroid nodules, nodular hyperplasia (NH) is the most common and represents a "leave me alone" lesion with no requirement for further treatment, while follicular adenoma (FA) is a lesion that should potentially be removed due to the difficulty of differentiation from a carcinoma on a biopsy alone. Purpose: To evaluate whether there are specific ultrasound (US) findings for an NH to distinguish it from an FA. Material And Methods: Pathologically proven cases of benign thyroid nodules (95 cases: 53 NH, 42 FA) were reviewed retrospectively. The number of associated nodules, the nodule size, internal content, shape, margin, echogenicity, presence of peripheral halo, and calcification were analyzed using grayscale ultrasonography. Results: NHs were predominantly solid in 40 cases (75.5%) and predominantly cystic in 13 cases (24.5%), while FAs were predominantly solid in all cases (n=42, 100%) (P<0.001). A spongiform appearance was present exclusively in NH (9/53, 17.0%). For NH, 83.0% of the lesions (44/53) showed an isoechoic pattern. For FA, the lesions showed a variable echoic pattern, including a marked hypoechoic pattern (5/42, 11.9%), a hypoechoic pattern (22/42, 52.4%), and an isoechoic pattern (15/42, 35.7%) (P<0.001). The nodule size, shape, margin, presence of peripheral halo, and calcification did not show any difference between FA and NH. Conclusion: The ratio of solid to cystic content, spongiform appearance, and echogenicity is a combination of US findings that may be helpful in distinguishing an NH from an FA, and may thereby help to avoid unnecessary fine-needle aspirations for "leave me alone" lesions. abstract_id: PUBMED:33888443 Ultrasound criteria (EU-TIRADS) to identify thyroid nodule malignancy risk in adolescents. Correlation with cyto-histological findings. Introduction: Thyroid nodules (TN) harboring a thyroid carcinoma are more common in pediatric than adult patients. In the pediatric population, the evaluation of a TN should require specific pediatric tools for its diagnostic and therapeutic management. High-resolution ultrasonography and cytological evaluation after fine-needle aspiration biopsy (FNAB) remain the cornerstones of evaluation of TN. Objectives: To evaluate in pediatric TN for the first time the usefulness and precision of the ultrasound criteria defined by the "Thyroid Imaging Reporting and Data System (EU-TIRADS) 2017 in adults" to establish the ultrasound indication for the practice of FNAB and stratify the risk of malignancy. Patients And Methods: 24 pediatric patients under age 18 years with thyroid nodules were attended in the last 15 years, 24 of them (31 nodules; age: 15.2 ± 2.2 years; 18 women) met the inclusion criteria: FNAB with Bethesda classification and ultrasound with EU-TIRADS score. EU-TIRADS scores were evaluated retrospectively. Fourteen patients underwent surgery and the definitive histological diagnosis was obtained; this allowed the calculations of sensitivity, specificity and positive and negative predictive values of the EU-TIRADS and Bethesda classification. Data on the largest diameters of the nodules were collected.
Results: Of the overall 31 nodules, the distribution by EU-TIRADS (T) category was: T1 (3.2%), T2: 2 (6.4%), T3: 7 (22.6%), T4: 16 (51.6%) and T5: 5 (16.1%). All malignant nodules were included in EU-TIRADS category 4 or 5. By the other hand, 13 of the 25 benign nodules were also included in the EU-TIRADS 4 category, and one in the 5. The distribution by categories of Bethesda's classification (B): BI: 6 (19.4%), BII: 14 (45.2%), BIII: 5 (16.1%), BIV: 2 (6.5%), BV: 0 and BVI: 4 (12.9%). The pathological diagnosis of the 14 patients who underwent surgery was: 6 papillary carcinomas and 8 with benign lesions: 6 nodular hyperplasia and 2 follicular adenoma. The percentage of malignancy was 42%. The sensitivity of the EU-TIRADS classification to detect malignant nodules was 100%, the specificity was 25%, PPV 44% and NPV 100%. The sensitivity of the Bethesda classification to detect malignant nodules was 86%, the specificity was 75%, PPV 67% and NPV 90%. The analysis of the largest diameter of the nodules did not show statistically significant differences between benign and malignant lesions. Conclusions: EU-TIRADS for ultrasonographic criteria classification in combination with the clinical history is an adequate and reproducible method to estimate suspicion of malignancy of pediatric TN. It is also a reliable diagnostic tool to decide which nodules will be candidates for FNAB. abstract_id: PUBMED:34924161 Ultrasound criteria (EU-TIRADS) to identify thyroid nodule malignancy risk in adolescents. Correlation with cyto-histological findings. Introduction: Thyroid nodule (TN) harbouring a thyroid carcinoma are more common in paediatric than adult patients. In paediatric population, the evaluation of a TN should require specific paediatric tools for its diagnostic and therapeutic management. High-resolution ultrasonography and cytological evaluation after fine-needle aspiration biopsy (FNAB) remain the cornerstones of evaluation of TN. Objectives: To evaluate in paediatric TN for the first time the usefulness and precision of the ultrasound criteria defined by the "Thyroid Imaging Reporting and Data System (EU-TIRADS) 2017 in adults" to establish the ultrasound indication for the practice of FNAB and stratify the risk of malignancy. Patients And Methods: 24 paediatric patients under age 18 years with thyroid nodules were attended in the last 15 years, 24 of them (31 nodules; age: 15.2 ± 2.2 years; 18 women) met the inclusion criteria: FNAB with Bethesda classification and ultrasound with EU-TIRADS score. EU-TIRADS score were evaluated retrospectively. Fourteen patients underwent surgery and the definitive histological diagnosis was obtained, this allowed the calculations of sensitivity, specificity and positive and negative predictive values of the EU-TIRADS and Bethesda classification. Data on the largest diameters of the nodules were collected. Results: Of the overall 31 nodules, the distribution by EU-TIRADS (T) category was: T1 (3.2%), T2: 2 (6.4%), T3: 7 (22.6%), T4: 16 (51.6%) and T5: 5 (16.1%). All malignant nodules were included in EU-TIRADS category 4 or 5. By the other hand, 13 of the 25 benign nodules were also included in the EU-TIRADS 4 category, and one in the 5. The distribution by categories of Bethesda's classification (B): BI: 6 (19.4%), BII: 14 (45.2%), BIII: 5 (16.1%), BIV: 2 (6.5%), BV: 0 and BVI: 4 (12.9%). The pathological diagnosis of the 14 patients who underwent surgery was: 6 papillary carcinomas and 8 with benign lesions: 6 nodular hyperplasia and 2 follicular adenoma. 
The percentage of malignancy was 42%. The sensitivity of the EU-TIRADS classification to detect malignant nodules was 100%, the specificity was 25%, PPV 44% and NPV 100%. The sensitivity of the Bethesda classification to detect malignant nodules was 86%, the specificity was 75%, PPV 67% and NPV 90%. The analysis of the largest diameter of the nodules did not show statistically significant differences between benign and malignant lesions. Conclusions: EU-TIRADS for ultrasonographic criteria classification in combination with the clinical history is an adequate and reproducible method to estimate suspicion of malignancy of paediatric TN. It is also a reliable diagnostic tool to decide which nodules will be candidates for FNAB. abstract_id: PUBMED:31051736 The impact of using the Bethesda System for reporting thyroid cytology diagnostic criteria on the follicular lesion of undetermined significance category. Introduction: The Bethesda System for Reporting Thyroid Cytology (TBSRTC) refines the definition of and provides specific diagnostic criteria for the category of atypia of undetermined significance/follicular lesion of undetermined significance (AUS/FLUS). This study was conducted to review our institutional experience with thyroid nodules interpreted as FLUS using TBSRTC diagnostic criteria. Materials And Methods: A SNOMED (Systemized Nomenclature of Medicine) search of the electronic pathology database in our institution for the period of January 2011 to June 2012 was conducted to identify thyroid aspirates previously interpreted as FLUS using TBSRTC diagnostic criteria. All cases were followed for at least 6 months. Follow-up information including clinical/imaging monitoring, repeat fine-needle aspiration, and/or subsequent surgical intervention, along with the corresponding cytologic diagnosis and/or histologic diagnosis were collected for each case. Cytology-histology concordance was evaluated for aspirates with surgical follow-up. Results: We identified a total of 122 FLUS cases and follow-up information was available in 100 cases. Among the 100 cases, 31 appeared clinically stable and showed no size change on ultrasonographic imaging; 9 were reclassified as benign non-neoplastic by a repeat fine-needle aspiration; and 60 received surgical treatments. The follow-up histology revealed 26.7% (16 of 60) papillary thyroid carcinoma, 25% (15 of 60) follicular adenoma, and 48.3% (29 of 60) non-neoplastic nodules (nodular hyperplasia or lymphocytic thyroiditis). Conclusions: Compared with historical control subjects from our institution, the current study demonstrates that adhering to TBSRTC diagnostic criteria yields a higher prediction of histology-proven neoplasia (25% versus 14.9%) and malignancy (26.7% versus 9.2%) for the FLUS category. abstract_id: PUBMED:16604553 Does Hurthle cell lesion/neoplasm predict malignancy more than follicular lesion/neoplasm on thyroid fine-needle aspiration? Thyroid fine-needle aspiration (FNA) is a standard procedure for the clinical triage of thyroid nodules. The diagnosis of an adequately sampled thyroid FNA is generally grouped into three categories: benign, malignant, and indeterminate. The latter group usually includes follicular neoplasm, follicular lesion, and sometimes a more specific diagnosis such as Hurthle cell neoplasm or follicular lesion/neoplasm with Hurthle cell change. Whether a FNA diagnosis of Hurthle cell lesion/neoplasm (HLN) denotes a worse clinical outcome than follicular lesion/neoplasm (FLN) remains controversial. 
A cohort of 303 thyroid FNA cases with follow-up thyroidectomy in our institutes was identified, with the follow-up excision diagnosis compared to the FNA diagnosis in order to address this issue. Of this cohort, 87 cases had an FNA diagnosis of HLN while 216 cases had a diagnosis of FLN. Upon excision, the FNA diagnosis of HLN group had 14 cases of goiter/nodular hyperplasia (16%), 46 cases of adenoma (12 follicular adenoma (14%) and 34 cases of Hurthle cell adenoma (39%)), and 27 cases of carcinoma (31%, 12 papillary carcinoma and 15 Hurthle cell carcinoma). The FLN group had 74 cases of goiter/nodular hyperplasia (34.3%), 8 cases of Hashimoto thyroiditis (3.7%), 73 cases of follicular adenoma (33.8%), one case of granular cell tumor, and 60 cases of carcinoma (27.8%, 46 papillary carcinoma, 12 follicular carcinoma, and 1 Hurthle cell carcinoma and 1 parathyroid carcinoma) upon excision. There is no significant difference in predicting cancer between the two cytology diagnosis groups (HLN versus FLN, 31% versus 27.8%, P = 0.5771). When sorting all the cases by the surgical diagnosis, while comparable for age at diagnosis, the cancer group having the higher proportion of male patients than the non-cancer group (28.7% versus 16.7%, P = 0.0259). Hurthle cell carcinoma patients are typically older than patients with other cancer diagnoses (59 versus 44, P = 0.0077). Our results suggest that an FNA diagnosis of HLN does not predict more malignancy than FLN. Males and older patients with a HLN FNA diagnosis carry a higher risk of Hurthle cell carcinoma upon thyroidectomy. abstract_id: PUBMED:27166018 Ultrasonographic Differentiation Between Nodular Hyperplasia and Neoplastic Follicular-Patterned Lesions of the Thyroid Gland. We evaluate the gray-scale ultrasonographic characteristics that differentiate between nodular hyperplasia (NH) and neoplastic follicular-patterned lesions (NFPLs) of the thyroid gland. Ultrasonographic features of 750 patients with 832 thyroid nodules (NH, n = 361; or NFPLs, follicular adenoma, n = 123; follicular carcinoma, n = 159; and follicular variant papillary carcinoma, n = 189) were analyzed. Except for echogenicity, over two-thirds of the cases of NH and NFPLs share the ultrasonographic characteristics of solid internal content, a well-defined smooth margin and round-to-ovoid shape. Independent predictors for NH were non-solid internal content (sensitivity 27.1%, specificity 90.2%), isoechogenicity (sensitivity 69.5%, specificity 63.5%) and an ill-defined margin (sensitivity 18.8%, specificity 94.5%). Independent predictors for NFPLs were hypoechogenicity (sensitivity 60.5%, specificity 70.4%), marked hypoechogenicity (sensitivity 2.8%, specificity 99.4%) and taller-than-wide shape (sensitivity 6.6%, specificity 98.1%). Although NH and NFPLs commonly share ultrasonographic characteristics, non-solid internal content and ill-defined margin are specific to NH and marked hypoechogenicity and taller-than-wide shape are specific to NFPLs. abstract_id: PUBMED:30594973 18F-choline PET/CT incidental thyroid uptake in patients studied for prostate cancer. Purpose: Thyroid incidental uptake is defined as a thyroid uptake incidentally detected by imaging examinations performed for non-thyroid disease. The aim of this study was to establish the prevalence and the pathological nature of focal thyroid incidental uptake (FTIU) among patients studied with 18F-choline-PET/CT. 
Materials And Methods: We retrospectively evaluated 368 patients who performed 18F-choline-PET/CT between June 2016 and August 2018. The PET images were analyzed visually and semi-quantitatively by measuring the maximum standardized uptake value (SUVmax) and the mean SUV (SUVmean) of the thyroid gland and of the FTIU; every focal thyroid uptake deviating from physiological distribution and background was considered FTIU. Final diagnosis of FTIU was obtained by cytological or histological examination after surgery. Results: The average SUVmax and SUVmean of thyroid gland in population were 3 and 1.8. Among 368 patients, FTIU was identified in nine cases (2.4%) and eight underwent further investigations to determine the nature. Two FTIU were classified as malignant (thyroid carcinoma), whereas five were benign (three nodular hyperplasia, one follicular adenoma, one Hurtle cell adenoma) and one indeterminate at cytological examination. In malignant lesions, average SUVmax was 9.6 and 4.5, respectively, while average SUVmean was 5.3 and 2.9, respectively. Average SUVmax and SUVmean of benign lesions were 4.9 and 3.2 and of the indeterminate lesion 5 and 3, respectively. Conclusions: 18F-choline-PET/CT FTIU may be a relevant diagnostic reality, which requires further investigations and affects management, especially considering that, despite being mainly benign, also malignancy is possible. abstract_id: PUBMED:19269016 BRAF mutation in solid cell nest hyperplasia associated with papillary thyroid carcinoma. A precursor lesion? We describe a case of solid cell nest hyperplasia associated with papillary thyroid carcinoma in a 48-year-old man with goiter. The entire gland was examined; in 1 section, the cells of 1 solid cell nest were in close contact with a follicular variant of papillary microcarcinoma. A second follicular variant of papillary microcarcinoma, 1 follicular adenoma, hyperplastic nodules, and some lymphoid aggregates were also found. Scattered p63-positive cells were found in the second papillary microcarcinoma. After microdissection, the same BRAF(V600E) mutation was found both in a pool of 5 solid cell nests and in the adjacent papillary microcarcinoma. BRAF(V600E) mutation and the previously unreported BRAF(G593D) mutation along with p.G606G silent change were found in the second papillary microcarcinoma, but no mutations were detected in the follicular adenoma or in the 2 other pools of solid cell nests screened for BRAF gene mutations. These findings support a histogenetic link between the main cells of solid cell nests and papillary thyroid carcinoma, and suggest solid cell nest hyperplasia as a precursor lesion of papillary thyroid carcinoma. abstract_id: PUBMED:10594284 Thyroid pathologic findings in patients with Cowden disease. We describe the histologic findings in thyroid glands from six female and five male patients with Cowden disease. The patients were aged 9 to 43 years (mean age, 26 years). The salient thyroid lesions in this syndrome were multicentric follicular adenomas and adenomatous (parenchymatous, hyperplastic) nodules showing a wide range of nonspecific cytoarchitectural patterns. Multiple tiny cellular foci, so-called microadenomas, were also a feature. Specific lesions composed of oxyphil or clear cells, a tumor with features of hyalinizing trabecular adenoma, and an adenolipoma also occurred. Two cases showed a follicular carcinoma in addition to multiple benign follicular cell proliferations. 
The follicular carcinomas occurred at an older age and were larger in size than the clinically significant benign nodular lesions, suggesting tumor progression. All tumors showed thyroglobulin immunoreactivity and were negative for calcitonin. The histologic findings of a multiple adenomatous goiter or multiple follicular adenomas, particularly in children and young adults, should alert the pathologist and physician to the possibility of an inherited trait, such as Cowden disease, with its implications for family screening. The tumors are usually benign and well demarcated, but, because of multicentricity and increased risk of recurrence or progression to carcinoma, total thyroidectomy should be advocated. abstract_id: PUBMED:21530955 Cystic change in thyroid nodules: a confounding factor for real-time qualitative thyroid ultrasound elastography. Objective: To evaluate real-time qualitative ultrasound elastography for focal thyroid masses undergoing fine-needle aspiration in a routine thyroid ultrasound clinic. Materials And Methods: Ninety-four thyroid nodules scheduled for fine-needle aspiration cytology in a thyroid ultrasound clinic also underwent real-time freehand elastography. Colour-scaled elastograms were graded visually on the stiffness of the solid component of nodules relative to thyroid parenchyma using an elastography score (ES) scale from 1 (soft) to 4 (stiff). The ES for benign and malignant nodules and the influence of cystic change on ES were analysed using Chi-square with trend and Fisher's exact tests, with a p<0.05 used to indicate statistical significance. Results: There were 19 papillary carcinomas, five metastases, 57 hyperplastic nodules, and four follicular adenomas based on definitive cytology (n=54) or histology (n=31). Nine nodules were excluded due to indeterminate cytology and no histology. Of malignancies (all solid), two were ES=1, four were ES=2, eight were ES=3, and 10 were ES=4. Of benign nodules, 17 were ES=1, 17 were ES=2, 16 were ES=3, and 11 were ES=4. An ES>2 was more common in benign nodules with predominant cystic components (17/18) than mildly cystic (3/12) or completely solid (7/31) benign nodules (p=0.0004, p<0.0001). The ES was not significantly different between benign and malignant nodules (p=0.09) unless partially cystic nodules were excluded (p=0.005). For solid nodules, an ES>2 optimally predicted malignancy, achieving 74% sensitivity, 77% specificity, and 76% accuracy. Conclusion: Qualitative real-time thyroid elastography predicts malignancy only if predominantly cystic nodules are excluded, which may limit its utility in routine clinical practice. Answer: Yes, there are specific ultrasound (US) findings that can help differentiate nodular hyperplasia (NH), also known as "leave me alone" lesions, from follicular adenoma (FA). According to a retrospective review of pathologically proven cases of benign thyroid nodules, NHs were predominantly solid in 75.5% of cases and predominantly cystic in 24.5% of cases, while FAs were predominantly solid in all cases (100%) (PUBMED:19241229). Additionally, a spongiform appearance was present exclusively in NH (17.0% of cases), and for NH, 83.0% of the lesions showed an isoechoic pattern. In contrast, FAs showed a variable echoic pattern, including marked hypoechoic, hypoechoic, and isoechoic patterns (PUBMED:19241229).
The ratio of solid to cystic content, spongiform appearance, and echogenicity are a combination of US findings that may be helpful in distinguishing NH from FA, potentially avoiding unnecessary fine-needle aspirations for NH lesions (PUBMED:19241229).
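The EU-TIRADS and Bethesda abstracts above quote sensitivity, specificity, PPV and NPV. As a minimal sketch of how such figures are derived from a 2x2 table, the Python snippet below uses hypothetical counts loosely patterned on the 14-patient surgical subgroup (6 malignant, 8 benign); because the published PPV and NPV appear to have been computed on different denominators than this toy table, the exact percentages are not expected to match the abstract.

def diagnostic_metrics(tp, fn, fp, tn):
    """Standard 2x2-table metrics: returns (sensitivity, specificity, PPV, NPV)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return sensitivity, specificity, ppv, npv

# Hypothetical counts: 6 malignant nodules all flagged as suspicious (tp=6, fn=0),
# 8 benign nodules of which 6 were also flagged (fp=6, tn=2).
sens, spec, ppv, npv = diagnostic_metrics(tp=6, fn=0, fp=6, tn=2)
print(f"sensitivity {sens:.0%}, specificity {spec:.0%}, PPV {ppv:.0%}, NPV {npv:.0%}")

With these toy counts the sensitivity (100%), specificity (25%) and NPV (100%) mirror the EU-TIRADS values quoted above, while the PPV differs because the published 44% seems to rest on a different denominator.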
Instruction: Can progressive resistance training twice a week improve mobility, muscle strength, and quality of life in very elderly nursing-home residents with impaired mobility? Abstracts: abstract_id: PUBMED:23637524 Can progressive resistance training twice a week improve mobility, muscle strength, and quality of life in very elderly nursing-home residents with impaired mobility? A pilot study. Purpose: To determine the effects of progressive resistance training on mobility, muscle strength, and quality of life in nursing-home residents with impaired mobility. Methods: Nursing-home residents aged 77 years and older with impaired mobility were recruited in Berlin, Germany. The eight-week exercise program consisted of progressive resistance training twice a week. Mobility (primary outcome) was assessed with the Elderly Mobility Scale (zero = worst, 20 = best) at baseline and after 8 weeks. Muscle strength (secondary outcome) was determined by the eight-repetition maximum. The Short Form-36 Health Survey was used to assess quality of life. Results: Of the 15 participants (mean age 84 years, range 77-97 years), ten completed the 8-week program. Mobility (Elderly Mobility Scale mean ± standard deviation pre 14.1 ± 3.2 and post 17.5 ± 3.6; P = 0.005) as well as muscle strength of upper and lower limbs improved (from 62% at chest press up to 108% at leg extension machine), whereas most quality of life subscales did not show considerable change. Conclusion: Resistance training twice a week over 2 months seemed to considerably improve mobility and muscle strength in persons aged 77-97 years with impaired mobility. abstract_id: PUBMED:19883891 Functional mobility, depressive symptoms, level of independence, and quality of life of the elderly living at home and in the nursing home. Objectives: To compare functional mobility, depressive symptoms, level of independence, and quality of life of the elderly living at home and in the nursing home. Design: A prospectively designed, comparative study. Setting: A nursing home and a university hospital department. Participants: In this study, 33 elderly living in a nursing home and 25 elderly living at home, who fulfilled the inclusion criteria and volunteered to participate, were included. Measurements: Sociodemographic characteristics were recorded. Functional mobility (Timed Up & Go Test), depressive symptoms (Geriatric Depression Scale), level of independence (Kahoku Aging Longitudinal Study Scale), and quality of life (Visual Analogue Scale) scores were compared between the groups. Results: Functional mobility and independence level of the nursing home residents were higher than the home-dwelling elderly (95% CI: -4.88, -0.29 and 0.41, 6.30, respectively), but they had more depressive symptoms (95% CI: 0.30, 5.45), and their level of QoL was lower (95% CI: -15.55, -2.93). Conclusion: These findings are thought to be important and of benefit for health care professionals and caregivers as indicating the areas that need to be supported for the elderly living at home (functional mobility and independence) and in the nursing home (depressive symptoms and quality of life). abstract_id: PUBMED:30165595 Progressive Resistance Training Improves Torque Capacity and Strength in Mobility-Limited Older Adults. Background: Progressive resistance training (PRT) is consistently shown to improve muscle strength in older adults. The efficacy of PRT to improve muscle fatigue in older adults with demonstrated mobility limitations remains unclear.
Methods: Mobility-limited (Short Physical Performance Battery [SPPB] ≤ 9) older adults (age 70-92 years) were recruited for this study and randomized to either PRT or home-based flexibility (FLEX) 3 d/wk for 12 weeks. Muscle fatigue and strength outcomes were assessed at baseline and 12 weeks. The primary outcome was torque capacity, a composite measure of strength and fatigue, defined as the sum of peak torques from an isokinetic fatigue test. Results: Seventy participants were randomized (mean [SD] age 78.9 [5.4] years; 60% female; mean [SD] SPPB 7.5 [1.6]). At follow-up, the PRT group improved significantly in torque capacity, mean between-group difference (95% confidence interval) 466.19 (138.4, 793.97) Nm (p = .006), and maximal strength 127.3 (60.96, 193.61) Nm (p = .0003), when compared with FLEX group. Neither group demonstrated significant changes in muscle fatigue or torque variability. Conclusion: Twelve weeks of PRT improved torque capacity, as well as strength in mobility-limited older adults. These results demonstrate PRT improves multiple age-related muscular impairments. abstract_id: PUBMED:18179479 Muscle endurance in elderly nursing home residents is related to fatigue perception, mobility, and circulating tumor necrosis factor-alpha, interleukin-6, and heat shock protein 70. Objectives: To explore the relationships between muscle endurance and circulating interleukin (IL)-6, tumor necrosis factor alpha (TNF-alpha), and heat shock protein (Hsp)70 in nursing home residents and to assess how muscle endurance relates to self-perceived fatigue and mobility. Design: Exploratory study. Setting: Three nursing homes of the Foundation for Psychogeriatrics (Brussels, Belgium). Participants: Seventy-seven residents (53 female and 24 male, mean age 81 +/- 8). Measurements: Participants were assessed for muscle endurance (fatigue resistance and grip work); perceived fatigue (visual analogue scale for fatigue); fatigue during daily activities (Mobility-Tiredness Scale); effect of fatigue on quality of life (World Health Organization Quality Of Life questionnaire); mobility (Tinetti Test & Elderly Mobility Scale (EMS)); and circulating IL-6, TNF-alpha, and Hsp70. Results: Residents with better fatigue resistance reported less self-perceived tiredness (P<.05). Similar trends were observed for fatigue during daily activities and for the extent to which fatigue bothered subjects. Higher grip work was associated with less self-perceived fatigue on all fatigue scales (P<.01). Fatigue resistance and grip work were positively related to balance and basic mobility (all P<.01; trend for relationship between fatigue resistance and EMS). Subjects with high IL-6 and Hsp70 showed significantly worse fatigue resistance (P=.007) and muscle work (P=.045) than those with high IL-6 and low Hsp70. In male residents, higher TNF-alpha was related to worse fatigue resistance and grip work (P<.05). Conclusion: Elderly nursing home residents complaining of fatigue need to be taken seriously, because they show worse muscle endurance, which is related to poorer mobility. Inflammatory processes involving TNF-alpha and the interaction between IL-6 and Hsp70 are related to poorer muscle endurance in these patients. abstract_id: PUBMED:25735475 Improving person-centered mobility care in nursing homes: a feasibility study. Person-centered care is often equated with quality nursing home care. At the same time, quality mobility care contributes to residents' independence and quality of life.
Realist evaluation and mixed methods were employed to evaluate the feasibility of a multi-faceted training intervention focused on person-centered mobility care. Staff and ambulatory residents of a ninety-bed Australian nursing home with 3 units--one dementia-specific--participated. The intervention consisted of training sessions, weekly mobility care huddles and reflective practice sessions with individual staff. This study demonstrated the feasibility of an intervention aiming to improve person-centered mobility care in nursing homes; it was practical and well accepted. Study methods and outcome measures were suitable. Outcome measures demonstrated capacity to determine the effectiveness of the intervention in a larger randomized controlled trial. Focus groups provided insights regarding the context and mechanisms of change. Future research is recommended to evaluate intervention effectiveness and sustainability. abstract_id: PUBMED:22104262 Lower extremity muscle strength is reduced in people with type 2 diabetes, with and without polyneuropathy, and is associated with impaired mobility and reduced quality of life. Aim: The purpose of the present study was to distinguish the effects of both diabetes mellitus type 2 (DM2) and diabetic polyneuropathy (DPN) on mobility, muscle strength and health related quality of life (HR-QoL). Methods: DPN patients (n=98), DM2 patients without DPN (DC) (n=39) and healthy subjects (HC) (n=19) performed isometric and isokinetic lower limb muscle strength tests. Mobility was determined by a timed up and go test (TUGT), a 6 min walk test and the physical activity scale for the elderly questionnaire. HR-QoL was determined by the SF36 questionnaire. Results: DPN patients had moderate polyneuropathy. In both DPN and DC patients leg muscle strength was reduced by 30-50% compared to HC. Muscle strength was correlated with mobility tests, and reduced muscle strength as well as impaired mobility were associated with a loss of HR-QoL (all p<0.05). We did not observe major differences in muscle strength, mobility (except for the TUGT, p<0.01) and HR-QoL between diabetic patients with and without DPN. Conclusion: DM2 patients, with and without DPN, have decreased maximal muscle strength in the lower limbs and impaired mobility. These abnormalities are associated with a loss of HR-QoL. The additional effect of moderate DPN was small in our patients. abstract_id: PUBMED:26690738 Effects of resistance training on muscle strength, exercise capacity, and mobility in middle-aged and elderly patients with coronary artery disease: A meta-analysis. Background: Resistance training (RT) is a core component of cardiac rehabilitation. We investigated the effects of RT on exercise capacity, muscle strength, and mobility in middle-aged and elderly patients with coronary artery disease (CAD). Methods: We searched for randomized controlled trials of RT versus usual care, or combined RT and aerobic training (AT) versus AT alone, and identified 440 trials in total from inception to January 2014. Participants who had myocardial infarction, coronary revascularization, angina pectoris or CAD were included in the analysis. Those who had heart failure, heart transplants with either cardiac resynchronization therapy or implantable defibrillators were excluded. Results: Twenty-two trials totaling 1095 participants were analyzed. We performed random-effects meta-analysis.
In middle-aged participants, RT increased lower extremity muscle strength [standardized mean difference (SMD): 0.65, 95% confidence interval (CI): 0.35 to 0.95], upper extremity muscle strength (SMD: 0.73, 95% CI: 0.48 to 0.99) and peak oxygen consumption (VO2) [weight mean difference (WMD): 0.92mL/kg/min, 95% CI: 0.12 to 1.72], but did not improve mobility compared with the control. In elderly participants, RT increased lower extremity muscle strength (SMD: 0.63, 95% CI: 0.05 to 1.21), upper extremity muscle strength (SMD: 1.18, 95% CI: 0.56 to 1.80), and peak VO2 (WMD: 0.70mL/kg/min, 95% CI: 0.03 to 1.37), and improved mobility (SMD: 0.61, 95% CI: 0.21 to 1.01) compared with the control. Conclusions: Resistance training could increase exercise capacity and muscle strength in middle-aged and elderly patients, and mobility in elderly patients, with CAD. abstract_id: PUBMED:28980220 Feasibility of a machine vs free weight strength training program and its effects on physical performance in nursing home residents: a pilot study. Background: Resistance training holds promise for nursing home residents to prevent further disabilities, falls, and fractures. Free weight as well as machine training may offer an efficient option to improve physical performance, but the feasibility of these training regimes among elderly who require continuous institutional care is still open. Aims: (1) To examine the feasibility of a 3-month machine vs. free weight strength training program in institutionalized older adults, and (2) to determine the effects on physical performance. Methods: This study is a two-arm, single-blind, randomized controlled feasibility study within a nursing home. 45 institutionalized elderly men and women (aged 83.8 ± 8.0, 12 men, 33 women) were randomly divided into two groups. The two groups completed either a free weight (FWT) or machine training (MT) for 12 weeks, twice per week, 45-60 min per session, in an individually supervised format. Performance was assessed with the 11-step stair-climbing test, 10-m walk test, Timed Up and Go Test (TUG), 30-s Chair Rising Test (CRT), grip strength, body mass index. Results: Indices of feasibility showed a recruitment and adherence rate of 53.6 and 87.5%, respectively. 35.6% of the participants dropped out after several weeks for personal reasons, illness, medical visits, or hospital stays. After the program no significant differences on motor performance were found between MT and FWT. However, there were significant improvements for both training groups on the TUG and the CRT. Conclusions: The present pilot study showed that it is feasible to conduct a strength training program in institutionalized participants. The more robust changes in motor function could serve as a basis for large randomized clinical trials. abstract_id: PUBMED:28987643 Blood flow restricted resistance training in older adults at risk of mobility limitations. High-load resistance training (HL) may be contraindicated in older adults due to pre-existing health conditions (e.g. osteoarthritis). Low-load blood flow restricted (BFR) resistance training offers an alternative to HL with potentially similar strength improvement. Purpose: To compare muscle strength, cross-sectional area (CSA), physical function, and quality of life (QOL) following 12-weeks of HL or BFR training in older adults at risk of mobility limitations. 
Methods: Thirty-six males and females (mean: 75.6 years, 95% confidence interval: [73.4-78.5], 1.67 m [1.64-1.70], 74.3 kg [69.8-78.8]) were randomly assigned to HL (70% of one repetition maximum [1-RM]) or low-load BFR (30% 1-RM coupled with a vascular restriction) exercise for the knee extensors and flexors twice per week for 12 weeks. A control (CON) group performed light upper body resistance and flexibility training. Muscle strength, CSA of the quadriceps, 400-m walking speed, Short Physical Performance Battery (SPPB), and QOL were assessed before, midway and after training. Results: Within 6 weeks of HL training, increases in all strength measures and CSA were evident and the gains were significantly greater than the CON group (P<0.05). The BFR group had strength increases in leg extension and leg press 1-RM tests, but were significantly lower in leg extension isometric maximum voluntary contraction (MVC) and leg extension 1-RM than the HL group (P<0.01). At 12 weeks, HL and BFR training did not differ in MVC (P=0.14). Walking speed increased 4% among all training groups (P<0.01) and no changes were observed for overall SPPB score and QOL (P>0.05). Conclusion: Both training programs resulted in muscle CSA improvements and HL training had more pronounced strength gains than BFR training after 6 weeks and were more similar to BFR after 12 weeks of training. These changes in both groups did not transfer to improvements in QOL, SPPB, and walking speed. Since both programs result in strength and CSA gains, albeit at different rates, future research should consider using a combination of HL and BFR training in older adults with profound muscle weakness and mobility limitations. abstract_id: PUBMED:37254274 The Impact of Functional Strength Training on Muscle Strength and Mobility in Children with Spastic Cerebral Palsy - A Systematic Review and Meta-Analysis. Objective: To review the effects of functional strength training (FST) on muscle strength and mobility in children with spastic cerebral palsy (CP). Methods: Eight databases were screened through March 2022 for studies assessing the impact of FST on strength, mobility, balance, and endurance in children with spastic CP. Study quality was assessed using the Downs and Black checklist and Cochrane Risk of Bias tools, followed by a meta-analysis. Results: Twelve intervention studies of moderate-to-high-level evidence were selected. The interventions included open and closed chain exercises using free weights or body weight, administered approximately thrice a week for 12 weeks. The FST was found to have a moderate-to-large, statistically significant, positive effect on muscle strength (quadriceps, hamstrings, and plantar flexors) and mobility (GMFM D and E), with only four studies showing maintenance of gains during follow-up assessment. Conclusion: FST has a positive effect on muscle strength and mobility in children with spastic CP, but evidence of sustained effects following FST is limited. Answer: Yes, progressive resistance training twice a week can improve mobility and muscle strength in very elderly nursing home residents with impaired mobility. A pilot study conducted with nursing-home residents aged 77 years and older with impaired mobility demonstrated that an eight-week exercise program consisting of progressive resistance training twice a week significantly improved mobility, as assessed by the Elderly Mobility Scale, and muscle strength of upper and lower limbs (PUBMED:23637524).
However, the same study noted that most quality of life subscales did not show considerable change, indicating that while physical aspects such as mobility and muscle strength can be improved, the impact on quality of life may be less pronounced or require a different approach to measure effectively. Additionally, other studies have shown that resistance training can improve physical performance measures such as muscle strength, exercise capacity, and mobility in middle-aged and elderly patients with coronary artery disease (PUBMED:26690738), and in mobility-limited older adults, progressive resistance training improved torque capacity and strength (PUBMED:30165595). Furthermore, a feasibility study in a nursing home setting demonstrated that a strength training program is practical and well accepted, with improvements observed in the Timed Up and Go Test and the 30-s Chair Rising Test (PUBMED:28980220). In summary, progressive resistance training twice a week has been shown to be beneficial for improving mobility and muscle strength in very elderly nursing home residents with impaired mobility, although its impact on quality of life may require further investigation.
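The meta-analysis summarized above pools trials on the standardized mean difference (SMD) scale and, for peak VO2, on the weighted mean difference scale. As a minimal, self-contained sketch of what a single trial contributes on the SMD scale, the Python snippet below computes Cohen's d from group means and standard deviations; the numbers are invented for illustration and are not taken from any of the included trials.

import math

def cohens_d(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardized mean difference: between-group difference over the pooled SD."""
    pooled_sd = math.sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2) / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

# Hypothetical trial: resistance-trained patients gain 12 kg on a strength test
# (SD 15, n=30) versus 3 kg in controls (SD 14, n=30).
d = cohens_d(mean_t=12.0, sd_t=15.0, n_t=30, mean_c=3.0, sd_c=14.0, n_c=30)
print(f"SMD = {d:.2f}")  # ~0.62, a moderate effect comparable in size to the pooled estimates above

A random-effects meta-analysis then combines such per-trial SMDs using inverse-variance weights that also incorporate an estimate of between-trial variance.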
Instruction: Isolated pulmonary recurrence after resection of colorectal hepatic metastases--is resection indicated? Abstracts: abstract_id: PUBMED:12184413 Isolated pulmonary recurrence after resection of colorectal hepatic metastases--is resection indicated? Background: Resection of colorectal hepatic metastases is an accepted treatment modality for stage IV colorectal cancer. Concurrent or sequential pulmonary metastasectomy continues to be a controversial strategy. We analyzed factors that predicted pulmonary recurrence in patients with resected hepatic metastases and examined the efficacy of these combined interventions in the treatment of metastatic colorectal cancer. Methods: A retrospective review of a database of patients who underwent resection of colorectal hepatic metastases was performed to identify patients who either had synchronous pulmonary metastases (defined as pulmonary recurrence at the time of or within 3 months of hepatic metastasectomy) or subsequently experienced pulmonary metastases. Patient demographics, operative interventions, and overall survival were analyzed. Statistical methods included unpaired Student's t-test, actuarial survival and log-rank analysis. Results: Twenty-one patients (19%) had pulmonary metastases after hepatic resection, of which 12/21(57%) underwent pulmonary resection. No differences were observed between the resection group, the nine patients with pulmonary metastases who did not undergo resection, and the 87 patients without pulmonary metastases with regard to age, sex, race, or extent of hepatic metastases. When comparing the resected versus the unresected pulmonary recurrences, the disease-free interval from hepatic resection to detection of pulmonary metastases was 21 +/- 20 months (range, 3-72 months) versus 16 +/- 8 months (range, 4-25 months), respectively. All patients with pulmonary recurrence who underwent pulmonary metastasectomy had unilateral disease. Seven of 12 (58%) underwent wedge/segmental resections, and the remaining five (42%) required lobectomy in order to obtain a complete resection. Four patients who underwent pulmonary resection had multiple lung metastases (two to four lesions), and eight had isolated metastasis. There were no perioperative deaths in the pulmonary metastasectomy group. Contraindications to pulmonary resection included extensive pulmonary disease and concurrent extrapulmonary disease. A survival benefit was noted at 3 years for the resected versus the unresected group (60% vs 31%). Survival was no different between the resected pulmonary recurrence patients and the resected hepatic metastases only patients (60% vs 54%). Conclusions: Pulmonary metastasectomy can be performed safely and effectively in patients with recurrent disease after hepatic resection for colorectal metastases. Prolonged survival can be achieved with resection of isolated pulmonary recurrence after hepatic resection for colorectal cancer. Further studies that delineate selection criteria for pulmonary resection of colorectal metastases are warranted. abstract_id: PUBMED:34755006 Role of surgical resection and its alternative local therapy for pulmonary metastasis of colorectal cancer. We reviewed surgical and alternative treatments for pulmonary metastasis of colorectal cancer, focusing on recent reports. The standard treatment for pulmonary metastasis of colorectal cancer is pulmonary resection, if resectable, despite the fact that the metastasis is hematogenous to distant organs. 
Guidelines in several countries, including Japan, have described pulmonary resection as a useful option because of the favorable long-term prognosis reported in various studies pertaining to pulmonary resection. The indications for pulmonary resection have been reviewed in several studies; additionally, the number of metastases, pretreatment carcinoembryonic antigen value, and disease-free interval from the primary resection to pulmonary recurrence have been proposed. However, no consensus has been reached to date. Contrastingly, recent advances in chemotherapy have remarkably improved the outcome of distant metastases, indicating that it is time to reconsider the significance of local treatment, including pulmonary resection. In addition to surgical resection, minimally invasive therapies, such as stereotactic body radiation therapy and radiofrequency ablation have been developed as local treatments for pulmonary metastases, and their long-term results have been reported. Prospective controlled trials and large-scale data analyses are needed to determine the best local treatment for pulmonary metastases and to find the appropriate indication for each treatment. abstract_id: PUBMED:34462661 Efficacy of adjuvant chemotherapy after complete resection of pulmonary metastasis from colorectal cancer. The most effective treatment for pulmonary metastasis from colorectal cancer (CRC) is complete resection. However, as the recurrence rate after resection of the pulmonary metastases from CRC is high, postoperative adjuvant chemotherapy is often performed in clinical practice. The purpose of the present study was to evaluate the efficacy and safety of single-agent adjuvant chemotherapy after resection of pulmonary metastasis from CRC. The medical records of 16 patients who underwent the first complete resection of pulmonary metastasis from CRC were retrospectively reviewed. A total of eight patients were treated with single-agent adjuvant chemotherapy after resection of pulmonary metastasis, and oral fluoropyrimidines were selected in all regimens. As a result, the relapse-free survival rate after resection of pulmonary metastasis in the group that received postoperative adjuvant chemotherapy was significantly improved in comparison with the group treated with surgery alone. In the subgroup analysis, patients who benefited from postoperative adjuvant chemotherapy in some high-risk groups were selected, including patients with a high tumor stage or poor immunological status. In conclusion, single-agent adjuvant chemotherapy after resection of pulmonary metastasis from CRC was effective for reducing the risk of recurrence and was safe to administer. In addition, certain risk factors may identify patients who would receive more benefit from postoperative adjuvant chemotherapy after resection of pulmonary metastasis from CRC. abstract_id: PUBMED:30635188 Surgical margins and risk of local recurrence after wedge resection of colorectal pulmonary metastases. Objective: During resection of pulmonary metastases, the need to spare lung parenchyma is often weighed against the increased risk of local recurrence if an inadequate surgical margin is obtained. We sought to identify risk factors for local recurrence after wedge resection of pulmonary metastases of a colorectal origin. Methods: A retrospective study of patients who underwent a wedge resection for colorectal pulmonary metastases from 2006 to 2016 was performed. 
Cox regression with robust variance was used to estimate the risk of local recurrence per nodule treated. Results: We identified 335 patients who underwent 679 wedge resections. The 2-year local recurrence risk for each nodule was 11.8% (95% confidence interval, 8.9%-14.6%), and the 5-year risk was 20.6% (95% confidence interval, 16.2%-24.8%). Longer margin length decreased the risk of local recurrence (hazard ratio, 0.434 per additional cm of length; P = .015), whereas larger tumor size increased this risk (hazard ratio, 1.520 per additional cm of size; P = .012). However, other factors tested, including tumor grade, KRAS mutation status, and response to induction chemotherapy, did not affect recurrence risk. A pathologic margin length of at least half the tumor size was estimated to result in a local recurrence rate <11%. Conclusions: Among surgically resected colorectal pulmonary metastases, technical factors related to margin length and tumor size were associated with the risk of local recurrence, whereas tumor grade and KRAS status were not. However, the increased risk of local recurrence with larger tumors was diminished with a sufficient margin length. abstract_id: PUBMED:33389571 Prospective study of recurrence at the surgical margin after wedge resection of pulmonary metastases. Background: Pulmonary metastasectomy is a common treatment for selected patients with pulmonary metastases. Among pulmonary resections, wedge resection is considered sufficient for pulmonary metastases. However, a major problem with wedge resection is the risk of local recurrence, especially at the surgical margin. The aim of this prospective study was to explore the frequency of and the risk factors for recurrence at the surgical margin in patients who underwent wedge resection for pulmonary metastases. Methods: Between September 2013 and March 2018, 177 patients (220 lesions) with pulmonary metastases from 15 institutions were enrolled. We studied 130 cases (169 lesions) to determine the frequency of and risk factors associated with recurrence at the surgical margin in patients who underwent wedge resection. Moreover, we evaluated the recurrence-free rate and disease-free survival after wedge resection. Results: A total of 81 (62.3%) patients developed recurrence. Recurrence at the surgical margin was observed in 11 of 130 (8.5%) cases. The 5-year recurrence-free rate was 89.1%. Per patient, multivariable analysis revealed that the presence of multiple pulmonary metastases was a significant risk factor for recurrence. Per tumor, distance from the surgical margin and tumor/margin ratio were risk factors for local recurrence. The 5-year disease-free survival rate was 34.7%, and the presence of multiple pulmonary metastases and small surgical margin were risk factors for disease-free survival by univariable analysis. Conclusions: Among patients who undergo wedge resection for pulmonary metastasis, patients with multiple pulmonary metastases tend to develop recurrence at the surgical margin. abstract_id: PUBMED:36127553 Survival by Number and Sites of Resections of Recurrence after First Curative Resection of Colorectal Liver Metastases. Background: Recurrence after curative hepatectomy for colorectal liver metastases (CRLM) is common. We sought to determine if number and sites of resections of recurrence after hepatectomy for CRLM impact survival.
Methods: The study included patients who underwent resection of recurrence following complete curative-intent resection of CRLM during 1998-2016 at two academic medical centers in Houston, USA, and Rome, Italy. The survival impacts of number and sites of resections of recurrence were evaluated. Patients with synchronous extrahepatic disease at curative CRLM resection were excluded. Results: Among 2163 patients who underwent curative hepatectomy, 1456 (67.3%) developed a recurrence. Four hundred seventy-eight patients underwent one (322/478; 67.4%) or two or more (156/478; 32.6%) resections of recurrence. The 5-year overall survival (OS) rate was higher in patients with resected than unresected recurrence (70.2% vs. 24.0%; p < 0.001). In patients who underwent only one resection of recurrence, the 5-year OS rate differed by location (lung, 81.6%; liver, 64.3%; other, 54.1%). In patients who underwent two or more resections of recurrence, the 5-year OS rate was similar for liver-only resection (87.5%) and resection of liver and other sites (66.1%) (p = 0.223) and for liver-only resection and other-sites-only resection (80.7%) (p = 0.258); 5-year OS rate by site of first resection of recurrence did not differ between liver (78.5%) and lung (81.8%) (p = 0.502) but was worse for other sites (61.1%) than for lung (p = 0.045). Conclusion: When recurrence after initial CRLM resection is resectable, the ability to undergo resection was associated with improved survival and can be considered as an option regardless of the number of recurrence and resection. Sites of resection of recurrence impact survival and should be considered. abstract_id: PUBMED:34821994 Clinical outcomes following colorectal resection of colorectal cancer with simultaneous hepatic and pulmonary metastases at the time of diagnosis. Purpose: There are no established treatment strategies for patients with hepatic and pulmonary metastases at the time of primary colorectal cancer (CRC) diagnosis. This study assessed patients undergoing complete resection of primary CRC and hepatic and pulmonary metastases, to evaluate long-term outcomes and clarify clinicopathological factors associated with failure of complete resection. Methods: This retrospective analysis enrolled patients at Shizuoka Cancer Center between 2002 and 2018 who underwent colorectal resection with curative intent for primary CRC with hepatic and pulmonary metastases. The curative resection (CR) group comprised patients who underwent complete resection of the primary tumor and metastatic lesions, and the non-curative resection (Non-CR) group consisted of those in whom resection of the metastatic lesions was not performed. Univariate and multivariate analyses were conducted to determine clinicopathological factors associated with non-curative resection. Results: Of 26 total patients, the CR and Non-CR groups consisted of 14 (54%) and 12 patients (46%), respectively. In the CR group, the 3-year overall and relapse-free survival rates were 92.9% and 28.6%, respectively. Multivariate analysis showed that pathological stage T4 (odds ratio 8.58, 95% confidence interval 1.13-65.20, p = 0.04) was independently associated with non-curative resection. Conclusion: The percentage of patients undergoing complete resection of primary CRC and metastatic lesions was 56%, and the 3-year OS rate was 92.9%.
Resection of primary CRC and metastatic lesions was considered to be appropriate in this population, and pathological stage T4 tumor was associated with incomplete resection of metastatic tumors. abstract_id: PUBMED:26361505 Recurrence after hepatic resection in colorectal cancer liver metastasis -Review article-. The outcomes and management of colorectal liver metastasis have undergone many changes. The incidence of recurrence after liver resection for hepatic metastasis remains very high. Liver resection, which provides the only curative treatment, is believed to have improved the long-term outcome of these patients. However, the management and outcomes of patients with colorectal liver metastasis have greatly improved in the last decade, suggesting that the current use of aggressive multimodality treatments, including surgical resection combined with modern chemotherapeutic regimens, effectively prolong the life expectancy of these patients. abstract_id: PUBMED:35666139 Recurrence patterns predict survival after resection of colorectal liver metastases. Background: Effective treatment of colorectal liver metastases (CRLM) is challenging because recurrence occurs in many patients after curative-intent resection. This study evaluates the recurrence patterns after resection of CRLM and its association with survival. Methods: A retrospective review of prospectively collected data of patients with CRLM managed with curative-intent resection from January 2007 to December 2017 was performed. The main outcomes and measures were the timing of recurrence, initial sites of recurrence, overall survival and recurrence-free survival. Early recurrence was defined as the detection of any organ recurrence ≤6 months from resection of CRLM. Results: A cohort of 194 patients was included for analysis. After a median follow-up of 85.3 months, 145 patients (74.7%) were diagnosed with recurrence. The median overall survival was 67.6 months (95% CI 50.4-80.2) and the 5-year overall survival was 54.1%. After initial recurrence was detected, the median survival was 28.9 months (95% CI 23.6-37.8) and the 5-year overall survival was 28.8%. Early recurrence occurred in 58 patients (29.9%). Initial recurrence patterns included: liver only in 53 patients (36.5%), multiple sites in 48 patients (33.1%), lung only in 30 patients (20.7%), and other single extrahepatic sites in 14 patients (9.6%). Early recurrence and initial multi-site recurrence were independent predictors of worse overall survival for patients who develop recurrence after resection of CRLM. Conclusion: The timing and initial sites of recurrence are prognostic factors in determining survival after curative-intent resection of CRLM. abstract_id: PUBMED:27773868 Outcomes of segmentectomy and wedge resection for pulmonary metastases from colorectal cancer. Objectives: Although wedge resection is the most common surgical procedure for pulmonary metastases from colorectal cancer, there are few reports about segmentectomy for colorectal metastasectomy. Methods: This was a subset analysis of a Japanese nationwide retrospective study of resected pulmonary metastases from colorectal cancer. The study included 553 patients who underwent segmentectomies (n = 98) or wedge resections (n = 455) without preoperative chemotherapy between January 2004 and December 2008. Recurrence patterns, recurrence-free survival and overall survival were analysed for each procedure.
Results: As for the patients' background factors, only the median size of resected metastases was different between patients with segmentectomies (median 18 mm, range 5-50 mm) and wedge resections (14 mm, 5-51 mm) (P < 0.001). Prolonged air leak developed more frequently in those undergoing segmentectomy compared with wedge resection (5.1% vs 1.8%) (P = 0.048). The resection-margin recurrence rate was higher in patients who underwent wedge resection compared with segmentectomy (7.3% vs 2.0%; P = 0.035). The 5-year recurrence-free survival was 48.8% in patients with segmentectomy and 36.0% in patients with wedge resections. The 5-year overall survival was 80.1% in patients with segmentectomy and 68.5% in patients with wedge resection. Multivariable analysis revealed that the surgical procedure, segmentectomy, was a significant favourable factor for recurrence (hazard ratio: 0.63, 95% confidence interval: 0.44-0.87, P = 0.005), but not for overall survival (hazard ratio: 0.65, 95% confidence interval: 0.38-1.05, P = 0.080). Conclusions: Segmentectomy demonstrated a good efficacy with less resection-margin recurrence in patients with resectable pulmonary-limited metastasis from colorectal cancer. Answer: Yes, resection is indicated for isolated pulmonary recurrence after resection of colorectal hepatic metastases. A study analyzing patients who underwent resection of colorectal hepatic metastases found that pulmonary metastasectomy can be performed safely and effectively in patients with recurrent disease after hepatic resection for colorectal metastases. Prolonged survival can be achieved with resection of isolated pulmonary recurrence after hepatic resection for colorectal cancer (PUBMED:12184413). Moreover, the standard treatment for pulmonary metastasis of colorectal cancer, if resectable, is pulmonary resection, which has been associated with favorable long-term prognosis. Despite the fact that the metastasis is hematogenous to distant organs, guidelines in several countries describe pulmonary resection as a useful option (PUBMED:34755006). Additionally, adjuvant chemotherapy after complete resection of pulmonary metastasis from colorectal cancer has been shown to significantly improve the relapse-free survival rate compared to surgery alone, particularly in high-risk groups such as patients with a high tumor stage or poor immunological status (PUBMED:34462661). Furthermore, technical factors related to margin length and tumor size have been associated with the risk of local recurrence after wedge resection of pulmonary metastases, suggesting that achieving a sufficient margin length can reduce the risk of local recurrence (PUBMED:30635188). In conclusion, the evidence supports that resection of isolated pulmonary recurrence after hepatic resection for colorectal cancer is a viable treatment option that can lead to prolonged survival, and it is often considered the standard of care when the metastases are resectable.
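Several of the abstracts above summarise outcomes as 5-year overall survival percentages estimated from censored follow-up data. As an illustration only of how such a figure is typically obtained, the short Python sketch below computes a Kaplan-Meier survival curve; the follow-up times and event flags are hypothetical and are not taken from any of the cited studies.

```python
# Minimal Kaplan-Meier estimator (illustrative only; data are hypothetical,
# not taken from the cited studies).

def kaplan_meier(times, events):
    """Return (time, survival_probability) pairs.

    times  : follow-up time for each patient (e.g., months)
    events : 1 if the event (death) occurred, 0 if censored
    """
    order = sorted(range(len(times)), key=lambda i: times[i])  # sort by follow-up time
    at_risk = len(times)
    survival = 1.0
    curve = []
    for i in order:
        if events[i] == 1:                       # death observed at this time
            survival *= (at_risk - 1) / at_risk  # multiply by conditional survival
            curve.append((times[i], survival))
        at_risk -= 1                             # censored or dead: leaves the risk set
    return curve

# Hypothetical follow-up times in months (1 = death, 0 = censored).
times  = [6, 14, 22, 30, 38, 45, 52, 60, 60, 60]
events = [1,  1,  0,  1,  0,  1,  0,  0,  0,  0]

for t, s in kaplan_meier(times, events):
    print(f"{t:>3} months: estimated survival {s:.2f}")
```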
Instruction: Is radical prostatectomy feasible in all cases of locally advanced non-bone metastatic prostate cancer? Abstracts: abstract_id: PUBMED:28546714 Radical Prostatectomy for Locally Advanced Prostate Cancers-Review of Literature. Twenty-five to thirty percent of patients with prostate cancer present with locally advanced disease. While risk stratification remains the same with high incidence of upstaging of disease on imaging and histopathological evaluation; there have been progressive refinements in surgical therapy. With availability of reasonably robust data, radical prostatectomy in men with locally advanced prostate cancers seems to effect improvement in both cancer specific and overall survival rates in comparison to the current standard of care of radiation with androgen deprivation therapy. Studies using radical prostatectomy as a part of multimodality approach have also shown promising results. There is an imminent need for well-designed prospective studies of benefits of radical prostatectomy over radiation and androgen deprivation as well as benefits of multimodality therapy over monotherapy. Surgery for patients with locally advanced prostate cancer is technically challenging. Surgical outcomes are comparable to those of organ-confined disease when performed in high-volume centers. Neoadjuvant therapies prior to radical prostatectomy might improve surgical outcomes, but whether they will translate into a better cancer specific and overall survival are yet to be ascertained. abstract_id: PUBMED:28901630 Role of robot-assisted radical prostatectomy in locally advanced prostate cancer. Locally advanced prostate cancer is regarded as a very high-risk disease with a poor prognosis. Although there is no definitive consensus on the definition of locally advanced prostate cancer, radical prostatectomy for locally advanced prostate cancer as a primary treatment or part of a multimodal therapy has been reported. Robot-assisted radical prostatectomy is currently carried out even in high-risk prostate cancer because it provides optimal outcomes. However, limited studies have assessed the role of robot-assisted radical prostatectomy in patients with locally advanced prostate cancer. Herein, we summarize and review the current knowledge in terms of the definition and surgical indications of locally advanced prostate cancer, and the surgical procedure and perisurgical/oncological outcomes of robot-assisted radical prostatectomy and extended pelvic lymphadenectomy for locally advanced prostate cancer. abstract_id: PUBMED:17049718 Is radical prostatectomy feasible in all cases of locally advanced non-bone metastatic prostate cancer? Results of a single-institution study. Objectives: Previous prospective studies of the surgical treatment of locally advanced prostate cancer have enrolled patients selected on the basis of a limited T3 disease extension. The aim of the present study was to assess the feasibility and the oncologic outcome of radical prostatectomy administered to a consecutive unselected series of advanced, non-bone metastatic prostate cancers. Methods: Between March 1998 and February 2003 radical prostatectomy was offered at our institution to any patient diagnosed with prostate cancer with no sign of extranodal metastatic disease. 
Data on morbidity and survival for 51 clinically advanced cases (any T ≥ 3, N0-N1, or any N1 or M1a disease according to the TNM 2002 classification system) operated on by a single expert surgeon were compared with a series of 152 radical prostatectomies performed during the same period by the same operator for clinically organ-confined disease. Adjuvant treatment was administered according to current guidelines. Results: The two groups did not differ significantly in surgical morbidity except for blood transfusion, operative time, and lymphoceles, which showed a higher rate in patients with advanced disease. The Kaplan-Meier estimate of overall survival and prostate cancer-specific survival at 7 yr were 76.69% and 90.2% in the advanced disease group and 88.4% and 99.3% in the organ-confined disease group, respectively. Conclusions: Even in the scenario of extensive surgical indications up to M1a disease, radical prostatectomy proved to be technically feasible and to have an acceptable morbidity rate compared with organ-confined disease. Our initial survival data strengthen the role for surgery as an essential part in the multimodal approach to treating advanced prostate cancer. abstract_id: PUBMED:36557022 Comparative Outcomes of Robotic Radical Prostatectomy in Patients with Locally Advanced Prostate Cancer. The effectiveness of radical prostatectomy alone for locally advanced prostate cancer is controversial owing to an increased complication rate and treatment-related morbidity. With technical advances and refinements in surgical techniques, robotic-assisted radical prostatectomy (RARP) has improved the outcomes of patients with locally advanced prostate cancer. RARP therefore plays a role in the treatment of locally advanced prostate cancer. In this study, we enrolled a total of 76 patients with pathologic stage pT3a, pT3b, pT4, or pN1. All patients were followed from surgery to June 2022, and their characteristics, perioperative outcomes, complications, adjuvant therapies and outcomes were analyzed. The median age of the patients was 69 years, and the initial PSA level was 20.5 (IQR 10.8-31.6) ng/mL. The median operative time was 205 (IQR 182-241) minutes. Sixty-six patients (86.8%) regained continence within 1 year, and the continence rate within 3 years of follow-up was 90.8% (69 patients). The overall survival rate was 100%. Twenty-two patients had BCR, of whom 13 received salvage androgen deprivation therapy (ADT), 2 received salvage external beam radiation therapy (EBRT) alone, and 7 received combined ADT and EBRT. No patient had disease progression to castration-resistant prostate cancer during a median 36 months of follow-up after salvage therapy. Our results suggest that RARP can also decrease tumor burden and allow for accurate and precise pathological staging with the need for subsequent treatment. Therefore, we recommend that RARP represents a well-standardized, safe, and oncologically effective option for patients with locally advanced prostate cancer. abstract_id: PUBMED:37363117 Efficacy of Degarelix in the Neoadjuvant Setting in Managing Locally Advanced Prostate Cancer. Background Prostate cancer holds a substantial presence in the global cancer landscape, and a considerable proportion of diagnoses occur at late stages, particularly in India. Management of locally advanced prostate cancer necessitates a multimodal treatment strategy.
A critical part of this strategy is neoadjuvant androgen deprivation therapy, typically administered via luteinizing hormone-releasing hormone (LHRH) analogs. This study explores the potential of an alternative approach: neoadjuvant therapy with degarelix, an LHRH antagonist, and its impact on perioperative and postoperative outcomes in patients undergoing radical prostatectomy for locally advanced or high-risk prostate cancer. Methodology We conducted a retrospective, non-randomized clinical study at Apollo Hospitals in Chennai, India. Patients diagnosed with locally advanced or high-risk prostate cancer who underwent radical prostatectomy were included. Participants were patients treated with neoadjuvant degarelix and subsequent radical prostatectomy between March 2020 and June 2022. We excluded patients receiving radical radiotherapy, those switching from LHRH agonists to antagonists, and those contraindicated for androgen deprivation therapy due to existing comorbidities. For comparison, we selected a group from the institutional database who received conventional treatment (i.e., without neoadjuvant therapy). Results The study compared two groups, each with 32 patients. The groups had no significant difference in total operative duration and console times. The postoperative pathological assessment showed significantly lower margin positivity rates and notable pathological downstaging in the group receiving neoadjuvant degarelix compared to the control group. The incidence of node positivity, prostate-specific antigen levels at three months postoperative, and number of pads used per day at one month did not differ significantly between the two groups. Conclusions Our study suggests that neoadjuvant degarelix could notably enhance patient outcomes in locally advanced prostate cancer management. The benefits include improved symptom control, significant reductions in margin positivity rates, and facilitated surgical procedures. Neoadjuvant degarelix therapy could potentially enhance the feasibility of the surgical intervention in locally advanced prostate cancer management, thus suggesting a promising pathway for improved patient care. abstract_id: PUBMED:28983643 Radical prostatectomy in locally advanced prostate cancer Background: Due to an inverse stage migration, the proportion of patients with more aggressive and locally advanced prostate cancer (PCa) has increased over the last few years. The natural history in these patients shows a higher risk of local complications and worse oncologic outcome. Objectives: To analyze the impact of radical prostatectomy (RP) in patients with locally advanced PCa. Materials And Methods: A review of the literature was performed using PubMed and MEDLINE databases focusing on articles addressing locally advanced PCa. Results: Current guidelines recommend local therapy in patients with locally advanced PCa among other treatment options. Thereby no strong evidence favoring radiotherapy or RP is present. Compared to patients without local treatment, RP may improve oncologic outcome and decrease the risk of local complications. Due to more difficult surgery and an increased need of multimodal therapy, higher perioperative morbidity and worse functional outcomes compared to patients with localized PCa are reported. No reliable prospective data indicating a widespread use of neoadjuvant treatment exists. Indication for further adjuvant or salvage therapies depends on pathologic results and postoperative course. 
Conclusions: RP is one of the treatment options with good long-term results which can be offered to patients with locally advanced PCa. Nevertheless, patients need to be counselled especially about the worse postoperative functional outcome compared to patients with localized PCa. abstract_id: PUBMED:35813256 Characteristics of Patients in SPCG-15-A Randomized Trial Comparing Radical Prostatectomy with Primary Radiotherapy plus Androgen Deprivation Therapy in Men with Locally Advanced Prostate Cancer. Background: There is no high-grade evidence for surgery as primary treatment for locally advanced prostate cancer. The SPCG-15 study is the first randomized trial comparing surgical treatment with radiotherapy. Objective: To describe the baseline characteristics of the first 600 randomized men in the SPCG-15 study. The study will compare mortality and functional outcomes. Design Setting And Participants: This study is a Scandinavian prospective, open, multicenter phase III randomized clinical trial aiming to randomize 1200 men. Intervention: Radical prostatectomy with or without consecutive radiotherapy (experimental) and radiotherapy with neoadjuvant androgen deprivation therapy (standard of care). Outcome Measurements And Statistical Analysis: Cause-specific survival, metastasis-free survival, overall survival, and patient-reported bowel function, sexual health, and lower urinary tract symptoms were measured. Results And Limitations: The distribution of characteristics was similar in the two study arms. The median age was 67 yr (range 45-75 yr). Among the operated men, 36% had pT3a stage of disease and 39% had pT3b stage. International Society of Urological Pathology grades 2, 3, 4, and 5 were prevalent in 21%, 35%, 7%, and 27%, respectively. Half of the men (51%) in the surgery arm had no positive lymph nodes. The main limitation is the pragmatic design comparing the best available practice at each study site leading to heterogeneity of treatment regimens within the study arms. Conclusions: We have proved that randomization between surgery and radiotherapy for locally advanced prostate cancer is feasible. The characteristics of the study population demonstrate a high prevalence of advanced disease, well-balanced comparison groups, and a demography mirroring the Scandinavian population of men with prostate cancer at large. Patient Summary: This study, which has recruited >600 men, compares radiotherapy with surgery for prostate cancer, and an analysis at the time of randomization indicates that the study will be informative and generalizable to most men with locally advanced but not metastasized prostate cancer. abstract_id: PUBMED:26150873 Role of Surgery in locally advanced prostate cancer. A significant proportion of patients present with locally advanced prostate cancer with inherent higher risk of local recurrence and systemic relapse after initial treatment. Recent literature favors combination of radiation therapy and extended hormonal therapy for this subset of patients. The effectiveness of Radical prostatectomy (RP) alone for locally advanced prostate cancer is controversial and is a focus of debate. However, it can decrease the tumor burden and allows the accurate and precise pathological staging with the need for subsequent treatment. Comparison of RP and other treatment modalities is difficult and incorrect because of inherent selection bias.
RP as a part of multi-modality treatment (with neo-adjuvant hormonal therapy (HT) and with adjuvant or salvage radiation therapy (RT)/HT) can provide long term overall and cancer specific survival. Surgical treatment (such as transurethral resection (TUR), JJ stenting and percutaneous nephrostomy (PCN) placement etc.) can also provide symptomatic improvement and palliative care to improve quality of life of patients with advanced prostate cancer. Sources Of Data/study Selection: The articles published between years 1998-2014 were searched on electronic databases Pubmed, Science direct, Google scholar and Embase and used for preparation of this review. abstract_id: PUBMED:33040689 Does radical prostatectomy result in lower urinary tract symptom improvement in high-risk and locally advanced prostate cancer? A Single-center experience. Purpose: Radical prostatectomy represents the most popular method of prostate cancer treatment, including cases with high-risk and locally advanced cancer. Besides, men with this disease often experience lower urinary tract symptoms (LUTS) and report high International Prostate Symptom Scores (IPSS), pathological post-void residual (PVR) urine volumes and low levels of maximum urinary flow rates (Qmax). In this study we assessed the effect of radical prostatectomy on the above parameters in patients with high-risk and locally advanced disease. Methods: A number of 240 individuals were enrolled in the study. Patients that required any post-operative manipulation up to the completion of 12 months after surgery were excluded. All patients were assessed pre- and post-operatively at 3, 6 and 12 months. Evaluation included IPSS, Qmax and PVR. Results: Mean age was 66.8 years. Mean PSA value was 12.7 ng/ml and mean Gleason score was 7.9. At baseline 41.3% of the patients had Qmax ⩽10 and 42.5% had IPSS >8. There was a significant increase in Qmax during the follow-up (median value was 12 at baseline and increased to 21 at 12 months). Also, IPSS and PVR decreased significantly during the follow-up. IPSS median value decreased from 9 at baseline to 5 at 12 months. Improvement was observed in all grades of symptoms. abstract_id: PUBMED:37338747 Nomogram Predicting Locally Advanced Prostate Cancer in Patients with Clinically Organ-Confined Disease Who Underwent Robot-Assisted Radical Prostatectomy: A Retrospective Multicenter Cohort Study in Japan (The MSUG94 Group). Purpose: We created a clinically applicable nomogram to predict locally advanced prostate cancer using preoperative parameters and performed external validation using an external independent validation cohort. Patients And Methods: From a retrospective multicenter cohort study of 3622 Japanese patients with prostate cancer who underwent robot-assisted radical prostatectomy at ten institutions, the patients were divided into two groups (MSUG cohort and validation cohort). Locally advanced prostate cancer was defined as pathological T stage ≥ 3a. A multivariable logistic regression model was used to identify factors strongly associated with locally advanced prostate cancer. Bootstrap area under the curve was calculated to assess the internal validity of the prediction model. A nomogram was created as a practical application of the prediction model, and a web application was released to predict the probability of locally advanced prostate cancer. Results: A total of 2530 and 427 patients in the MSUG and validation cohorts, respectively, met the criteria for this study.
On multivariable analysis, initial prostate-specific antigen, prostate volume, number of cancer-positive and cancer-negative biopsy cores, biopsy grade group, and clinical T stage were independent predictors of locally advanced prostate cancer. The nomogram predicting locally advanced prostate cancer was demonstrated (area under the curve 0.72). Using a nomogram cutoff of 0.26, 464 of 1162 patients (39.9%) could be correctly diagnosed with pT3, and 2311 of 2524 patients (91.6%) could avoid underdiagnosis. Conclusions: We developed a clinically applicable nomogram with external validation to predict the probability of locally advanced prostate cancer in patients undergoing robot-assisted radical prostatectomy. Answer: Radical prostatectomy is technically feasible in many cases of locally advanced non-bone metastatic prostate cancer, as indicated by several studies. A single-institution study found that radical prostatectomy was technically feasible and had an acceptable morbidity rate compared with organ-confined disease, even when surgical indications were extended up to M1a disease (PUBMED:17049718). Additionally, robot-assisted radical prostatectomy (RARP) has been reported to provide optimal outcomes for high-risk prostate cancer, including locally advanced cases, and has been carried out with improved outcomes due to technical advances and refinements in surgical techniques (PUBMED:28901630; PUBMED:36557022). However, the feasibility and effectiveness of radical prostatectomy alone for locally advanced prostate cancer can be controversial due to an increased complication rate and treatment-related morbidity (PUBMED:28983643). It is often part of a multimodal therapy approach, which may include neoadjuvant hormonal therapy and adjuvant or salvage radiation therapy or hormonal therapy, to provide long-term overall and cancer-specific survival (PUBMED:26150873). Moreover, neoadjuvant therapies, such as degarelix, an LHRH antagonist, have been shown to improve surgical outcomes by reducing margin positivity rates and facilitating surgical procedures (PUBMED:37363117). The role of radical prostatectomy in managing locally advanced prostate cancer is also being evaluated in randomized trials, such as the SPCG-15 study, which compares radical prostatectomy with primary radiotherapy plus androgen deprivation therapy (PUBMED:35813256). In summary, while radical prostatectomy is feasible in many cases of locally advanced non-bone metastatic prostate cancer, it is often part of a multimodal treatment strategy, and its use should be considered within the context of the individual patient's disease characteristics and overall health status.
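The nomogram described in PUBMED:37338747 rests on a multivariable logistic regression that converts preoperative findings into a probability of locally advanced disease, which is then compared against a cutoff (0.26 in that report). The Python sketch below illustrates the general mechanism only; the coefficients, the variable transformations, and the example patient are invented for illustration and are not the published model.

```python
import math

# Illustrative sketch of how a nomogram built from a multivariable logistic
# regression turns preoperative findings into a probability of locally advanced
# (>= pT3a) disease. The coefficients below are invented for the example and
# are NOT the ones estimated in PUBMED:37338747.

COEFFS = {
    "intercept": -3.0,
    "log_psa": 0.9,         # initial PSA (log-transformed), hypothetical weight
    "positive_cores": 0.15,  # number of cancer-positive biopsy cores
    "grade_group": 0.4,      # biopsy grade group (1-5)
    "clinical_t3": 1.2,      # 1 if clinical stage >= cT3, else 0
}

def predicted_probability(psa, positive_cores, grade_group, clinical_t3):
    z = (COEFFS["intercept"]
         + COEFFS["log_psa"] * math.log(psa)
         + COEFFS["positive_cores"] * positive_cores
         + COEFFS["grade_group"] * grade_group
         + COEFFS["clinical_t3"] * clinical_t3)
    return 1.0 / (1.0 + math.exp(-z))  # logistic link

p = predicted_probability(psa=12.0, positive_cores=5, grade_group=3, clinical_t3=0)
CUTOFF = 0.26  # the probability threshold reported in the abstract
print(f"Predicted probability of pT3 disease: {p:.2f}")
print("Classified as locally advanced" if p >= CUTOFF else "Classified as organ-confined")
```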
Instruction: Tooth wear in young subjects: a discriminator between sleep bruxers and controls? Abstracts: abstract_id: PUBMED:19639069 Tooth wear in young subjects: a discriminator between sleep bruxers and controls? Purpose: This study investigated whether the presence of tooth wear in young adults can help to discriminate patients with sleep bruxism (SB) from control subjects. Materials And Methods: The tooth wear clinical scores and frequency of sleep masseter electromyographic activity of 130 subjects (26.6 +/- 0.5 years) were compared in this case-control study. Tooth wear scores (collected during clinical examination) for the incisors, canines, and molars were pooled or analyzed separately for statistics. Sleep bruxers (SBrs) were divided into two subgroups according to moderate to high (M-H-SBr; n = 59) and low (L-SBr; n = 48) frequency of masseter muscle contractions. Control subjects (n = 23) had no history of tooth grinding. The sensitivity and specificity of tooth wear versus SB diagnosis, as well as positive and negative predictive values (PPV and NPV), were calculated. One-way analysis of variance and the Mann-Whitney U test were used to compare groups. Results: Both SBr subgroups showed significantly higher tooth wear scores than the control group for both pooled and separated scores (P < .001). No difference was observed between M-H-SBr and L-SBr frequency groups (P = .14). The pooled sum of tooth wear scores discriminates SBrs from controls (sensitivity = 94%, specificity = 87%). The tooth wear PPV for SB detection was modest (26% to 71%) but the NPV to exclude controls was high (94% to 99%). Conclusions: Although the presence of tooth wear discriminates SBrs with a current history of tooth grinding from nonbruxers in young adults, its diagnostic value is modest. Moreover, tooth wear does not help to discriminate the severity of SB. Caution is therefore mandatory for clinicians using tooth wear as an outcome for SB diagnosis. abstract_id: PUBMED:29336237 Masticatory function and oral stereognosis in bruxers. Objective: To compare the mastication and oral sensory function of individuals with and without sleep bruxism. Methods: The sample comprised 60 volunteers, of which 30 were bruxers (mean age = 30.5 ± 6.0) and 30 non-bruxers (mean age = 28.6 ± 3.3). Sleep bruxism was clinically diagnosed and confirmed using the Bruxoff® device. Mastication was evaluated according to swallowing threshold and masticatory efficiency. Swallowing threshold was determined from the median particle size of Optosil®, obtained after an individualized number of masticatory cycles, which was first determined by chewing peanuts; masticatory efficiency was estimated by the sieving method. Oral stereognosis test was used to determine the oral sensory function. Variables were compared using one-way ANOVA (p < .05). Results: The median particle size, masticatory efficiency, and oral sensory function did not differ significantly between bruxers and non-bruxers (p > .05). Conclusion: Dentate adults present similar mastication and oral sensory function, despite the presence of sleep bruxism. abstract_id: PUBMED:30369985 Assessment of Mandibular Surface Area Changes in Bruxers Versus Controls on Panoramic Radiographic Images: A Case Control Study. Background: Bruxism is the commonest of the many parafunctional activities of the masticatory system. Opinions on the causes of bruxism were numerous and widely varying. It can occur during sleep as well as wakefulness.
Bruxism was for long considered a major cause of tooth wear. Other effects of bruxism may include tooth movement and tooth mobility, as well as changes in oral soft tissues and jaw bone. Since the exact etiology and manifestations are unclear, it is difficult to diagnose Bruxism. In this study, we evaluated the area change as measured from digital panoramic radiographs that can occur in the lower jaw bone in those with Bruxism and compared the results with non-bruxers. Aims And Objective: To determine the surface area changes of the mandible, condylar and coronoid processes in Bruxers from Panoramic radiographs and to compare and contrast the changes with age and gender matched controls. Materials And Methods: The study was conducted in the department of Oral Medicine and Radiology. The total sample size was 40. The sample was divided into two groups, Bruxers and non-bruxers with 20 subjects in each group. Healthy volunteers aged between 20-30 years diagnosed with Bruxism were included in Group I (Bruxers), and healthy volunteers aged between 20-30 years without Bruxism were included in Group II (Non-Bruxers). A Bruxchecker was used to confirm Bruxism in Group I. The Panoramic radiograph was used as the imaging modality for the study. The measurements were made with the help of the software Image J. All the measurements were tabulated and statistical analysis was made using ANOVA (Post hoc) followed by Dunnett's test and unpaired t test. Results: A comparison of the mandibular surface area as a whole and also condylar and coronoid processes individually was carried out. Significant results were obtained in case of condylar and coronoid processes between the two groups. The surface area of condylar process of Group I was found to be lower than that of Group II. The surface area of the right coronoid process of group I was found to be less when compared to that of group II but the values of the left coronoid process of group I was found to be more when compared with group II. The surface area of the mandible showed no significant difference between the groups. There was a significant difference between the genders in case of mandible, condyle and coronoid. The surface area of mandible and condylar process was found to be lower in females when compared to males. The surface area of coronoid process was found to be more in case of females when compared to that of males in Group I. The results of our study show that while the overall surface area of bruxers remains unaffected when compared to controls, the condylar and coronoid processes show significant change. The hitherto belief that the primary brunt of bruxism is borne by the masseter would require a revisit since alteration in tonicity of the masseter would reflect in surface area change of the mandible as a whole. An increase in the surface area of the coronoid process in bruxers was observed in our study which could be attributed to altered activity of the temporalis, a muscle largely responsible for the posture of the mandible. This could imply that bruxers show alteration in temporalis activity which would explain several clinical manifestations such as headache, neck pain, shoulder pain and altered posture and so on which we have observed in the clinical practice of neuromuscular dentistry. Further studies examining the activity of the temporalis and masseter would further corroborate our findings and form the basis for future research in this arena.
Conclusion: This original research was carried out to assess the surface area changes in mandible and condylar and coronoid processes of Bruxers and non-bruxers. The results showed significant changes in the surface area of condylar and coronoid process in Bruxers when compared to the controls. This is an area of study with paucity of available literature. This study would be a stepping stone for future studies in this arena. abstract_id: PUBMED:8655758 Sleep bruxism: validity of clinical research diagnostic criteria in a controlled polysomnographic study. The clinical validity of diagnostic criteria for sleep orofacial motor activity--more specifically, bruxism--has never been tested. Polysomnographic recordings from 18 bruxers and 18 asymptomatic subjects, selected according to American Sleep Disorders Association criteria, were analyzed (1) to discriminate sleep bruxism from other orofacial motor activities and (2) to calculate sensitivity, specificity, and predictive values of research criteria. Clinical observations and reports revealed that all 18 bruxers reported frequent tooth-grinding during sleep. Tooth wear was noted in 16 out of 18 bruxers and jaw discomfort reported by six of them. These findings were present in none of the controls. The analysis of polysomnographic data showed that the asymptomatic subjects presented a mean of 1.7 +/- 0.3 bruxism episodes per hour of sleep (sustained or repetitive bursting activity in jaw closer muscles), while bruxers had a significantly higher level of activity: 5.4 +/- 0.6. Controls exhibited 4.6 +/- 0.3 bruxism bursts per episode and 6.2 (from 0 to 23) bruxism bursts per hour of sleep, whereas bruxers showed, respectively, 7.0 +/- 0.7 and 36.1 (5.8 to 108). Bruxism-like episodes with at least two grinding sounds were noted in 14 of the 18 bruxers and in one control. The two groups exhibited no difference in any of the sleep parameters. Based on the present findings, the following polysomnographic diagnostic cut-off criteria are suggested: (1) more than 4 bruxism episodes per hour, (2) more than 6 bruxism bursts per episode and/or 25 bruxism bursts per hour of sleep, and (3) at least 2 episodes with grinding sounds. When the polysomnographic bruxism-related variables were combined under logistic regression, the clinical diagnosis was correctly predicted in 81.3% of the controls and 83.3% of the bruxers. The validity of these clinical research criteria needs now to be challenged in a larger population, over time, and in subjects presenting various levels of severity of sleep bruxism. abstract_id: PUBMED:26298779 Sleep-associated aspects of myofascial pain in the orofacial area among Temporomandibular Disorder patients and controls. Study Objectives: To assess sleep-associated aspects of temporomandibular disorder (TMD) with myofascial pain (MP) in the orofacial area of patients and controls. Measurements: Three hundred five female patients were screened to find 44 participants fulfilling the inclusion criteria, 22 suffering from MP and 22 in a control group. Sleep quality was assessed by use of the Pittsburgh Sleep-Quality-Index (PSQI) and a validated German sleep questionnaire (SF-AR). Tooth wear was assessed and anterior temporalis muscle activity was measured at home for several nights by use of a portable electromyography (EMG) device. Results: 22 patients (45.0 ± 13.6 years) and 22 controls (45.2 ± 9.0 years) were recruited. The PSQI sum-score was 7.5 ± 3.7 for patients and 4.4 ± 3.0 for controls (p = 0.006). 
The SF-AR showed that 23% of the controls and 14% of the patients were "long sleepers". The overall number of episodes in the two groups was not significantly different (4.10 ± 2.65 versus 4.57 ± 1.99 episodes per hour). However, more patients had temporalis muscle activity possibly related to SB during all four consecutive nights (p = 0.04). According to the International Classification of Sleep Disorders - Third Edition (ICSD-3) criteria, 13.6% of the controls and 71.4% of the patients (p < 0.001) showed SB. Conclusions: Sleep-associated disturbances, including reduction of sleep quality and greater prevalence of SB and facial pain in the morning, occurred significantly more often among TMD patients. Additionally, SB fluctuated over the nights especially in controls. This should be taken into consideration when the prevalence of SB is assessed by use of EMG. abstract_id: PUBMED:25975418 Influence of bruxism and splint therapy on tongue pressure against teeth. Objectives: This study estimated the maximum tongue pressure (MTP) against teeth in bruxers, and verified the effect of a stabilization splint. Method: Forty subjects (mean age: 26.57 years) were assigned to two groups (n = 20): bruxers and nonbruxers. Sleep bruxism was diagnosed by self-report, presence of tooth wear and tongue indentation, and by using a BiteStrip2 device. Maximum tongue pressure was measured by intraoral appliances containing pressure sensors. After baseline evaluation, bruxers received a stabilization splint and non-bruxers received a palatal splint, both for nocturnal use. Maximum tongue pressure was reassessed after 30 and 60 days. Data were submitted to repeated-measures ANOVA and Tukey test (P < 0.05). Results: Bruxers showed an increased MTP against teeth (P < 0.05). Splint therapy decreased MTP for bruxers and non-bruxers. Discussion: Bruxers present tongue indentations or ulcerations; however, there is no evidence on their MTP. Further research should focus on therapies to avoid tongue ulcerations in bruxers. abstract_id: PUBMED:26059761 Agreement between clinical and portable EMG/ECG diagnosis of sleep bruxism. The aim of this study was to compare clinical sleep bruxism (SB) diagnosis with an instrumental diagnosis obtained with a device providing electromyography/electrocardiography (EMG/ECG) recordings. Forty-five (N = 45) subjects (19 males and 26 females, mean age 28 ± 11 years) were selected among patients referring to the Gnathology Unit of the Dental School of the University of Torino. An expert clinician assessed the presence of SB based on the presence of one or more signs/symptoms (i.e., transient jaw muscle pain in the morning, muscle fatigue at awakening, presence of tooth wear, masseter hypertrophy). Furthermore, all participants underwent an instrumental recording at home with a portable device (Bruxoff; OT Bioelettronica, Torino, Italy) allowing a simultaneous recording of EMG signals from both the masseter muscles as well as heart frequency. Statistical procedures were performed with the software Statistical Package for the Social Science v. 20.0 (SPSS 20.0; IBM, Milan, Italy). Based on the EMG/ECG analysis, 26 subjects (11 males, 15 females, mean age 28 ± 10 years) were diagnosed as sleep bruxers, whilst 19 subjects (7 males, 12 females, mean age 30 ± 10 years) were diagnosed as non-bruxers. The correlation between the clinical and EMG/ECG SB diagnoses was low (ϕ value = 0.250), with a 62.2% agreement (28/45 subjects) between the two approaches (kappa = 0.248).
Assuming instrumental EMG/ECG diagnosis as the standard of reference for definite SB diagnosis in this investigation, the false-positive and false-negative rates were unacceptable for all clinical signs/symptoms. In conclusion, findings from clinical assessment are not related with SB diagnosis performed with a portable EMG/ECG recorder. abstract_id: PUBMED:10022179 The craniofacial morphology of bruxers versus nonbruxers. The purpose of this investigation was to test for an association between the craniofacial morphologies of bruxers and nonbruxers. The sample for this retrospective descriptive comparative study consisted of 28 Caucasian dental school subjects. Sixteen were bruxers and 12 were nonbruxers. The determination of bruxism was based on a six-item questionnaire as well as objective measures of the severity of tooth wear as analyzed from dental casts. Craniofacial morphology was determined directly using anthropometric spreading calipers. Craniofacial measurements included glabella-opiscranion, euryon-euryon, nasion-gnathion, zygoma-zygoma, and gonion-gonion. From these measurements, the following indices were calculated: cephalic (Gla-Op/Eu-Eu), facial (Na-Gla/Zy-Zy), gonial (Zy-Zy/Go-Go), and gonial height (Na-Gla/Go-Go). This study found no differences in the craniofacial morphologies of bruxers and nonbruxers, nor was there a difference in overbite. There was, however, a statistically significant difference in the bizygomatic (Zy-Zy) and cranial (Eu-Eu) widths of bruxers compared with nonbruxers. abstract_id: PUBMED:38469259 Validity of clinical diagnostic criteria for sleep bruxism by comparison with a reference standard using masseteric electromyogram obtained with an ultraminiature electromyographic device. Evaluation of sleep bruxism (SB) in clinical practice is currently conducted based on clinical findings, i.e., clinical diagnostic criteria consisting of medical interview, findings of tooth wear, and symptoms of temporomandibular joint (TMJ) and muscles. However, there are many unclear points about validity of the criteria. In this study, validity tests were conducted to clarify the accuracy of the clinical diagnostic criteria for SB by comparison with a reference standard using a single-channel masseteric electromyogram (EMG) obtained with ultraminiature electromyographic devices. The subjects included 30 'probable' bruxers (P-bruxers) who were clinically diagnosed as having SB and 30 non-bruxers. EMG was recorded during sleep under unrestrained and accustomed condition at each subject's home using ultraminiature cordless EMG devices. Bursts with amplitudes of more than 5%, 10%, 20% of the maximum voluntary contraction (MVC) value (EMG-burst-5%, EMG-burst-10%, and EMG-burst-20%) and episodes of sleep bruxism (EMG-episode) were selected for analyses. In all conditions for burst selection, the P-bruxer group showed a significantly larger number of bursts and episodes than those in the non-bruxer group. Accuracy of the clinical diagnosis criteria was 66.7% with the reference standard using EMG-burst-5%/h and 58.3% with that using EMG-episodes/h. By applying single-channel EMG as the reference standard, we were able to conduct validity tests of clinical diagnostic criteria with a larger sample. It was clarified that the level of accuracy of clinical diagnostic criteria for SB were not high despite using the combination of an interview and clinical findings. 
abstract_id: PUBMED:33474786 Association between polysomnographic parameters of sleep bruxism and attrition-type tooth wear. Background: Clinically, sleep bruxism is considered to be associated with the presence of tooth wear, but strong evidence is still lacking. Objective: To examine whether an association exists between polysomnographic parameters, recorded from patients with possible sleep bruxism and tooth wear. Methods: Sixty-three possible sleep bruxers (19 males and 44 females, mean ± SD age = 38.5 ± 11.4 years) were recruited among patients attending the Clinic for orofacial pain and dysfunction of the Academic Centre for Dentistry Amsterdam (ACTA). The incisal/occlusal tooth wear was recorded for each tooth clinically, using a 5-point ordinal scale. Subsequently, all patients underwent an one-night ambulatory polysomnographic recording, during which the number of bruxism episodes per hour of sleep (Epi/h), the number of bruxism bursts per hour of sleep (Bur/h), and the bruxism time index (BTI) were recorded and analysed. Logistic regression analysis was performed using the presence of tooth wear as the dependent variable, the polysomnographic recordings as independent variables, and corrected for age and gender. The Bur/h and BTI were removed from the analyses due to collinearity with the Epi/h. Additionally, the polysomnographic recordings were also tested for possible association with self-reported grinding of the teeth during sleep. Results: No significant correlation was found between tooth wear and Epi/h (P = 0.381). In addition, the presence of tooth wear was not associated with self-reported parafunctions. Conclusion: Clinically measured tooth wear and self-reported parafunction seem not be related to the polysomnographic parameters of possible sleep bruxism. Answer: Tooth wear in young subjects can be a discriminator between sleep bruxers and controls, but its diagnostic value is modest. A study found that both moderate to high frequency and low frequency sleep bruxers showed significantly higher tooth wear scores than control subjects without a history of tooth grinding (PUBMED:19639069). The pooled sum of tooth wear scores was able to discriminate sleep bruxers from controls with a sensitivity of 94% and specificity of 87%. However, the positive predictive value (PPV) for detecting sleep bruxism based on tooth wear was only modest, ranging from 26% to 71%, although the negative predictive value (NPV) to exclude controls was high, between 94% and 99%. This indicates that while the presence of tooth wear can help identify individuals with a current history of tooth grinding, it does not necessarily indicate the severity of sleep bruxism. Therefore, clinicians should use caution when using tooth wear as an outcome for sleep bruxism diagnosis (PUBMED:19639069).
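The answer above notes that a test with 94% sensitivity and 87% specificity can still give only a modest positive predictive value. This follows from the dependence of predictive values on prevalence; the short sketch below reproduces the arithmetic, with the prevalence values chosen for illustration rather than taken from the study.

```python
# Why high sensitivity/specificity can still yield a modest PPV: positive and
# negative predictive values depend on prevalence. The sensitivity (0.94) and
# specificity (0.87) are the values quoted in the answer; the prevalence values
# below are illustrative assumptions.

def predictive_values(sensitivity, specificity, prevalence):
    tp = sensitivity * prevalence              # true positives per unit population
    fn = (1 - sensitivity) * prevalence        # false negatives
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    tn = specificity * (1 - prevalence)        # true negatives
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return ppv, npv

for prevalence in (0.05, 0.20, 0.50):
    ppv, npv = predictive_values(0.94, 0.87, prevalence)
    print(f"prevalence {prevalence:.0%}: PPV {ppv:.0%}, NPV {npv:.0%}")
```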
Instruction: Food consumption patterns in a mediterranean region: does the mediterranean diet still exist? Abstracts: abstract_id: PUBMED:15256802 Food consumption patterns in a mediterranean region: does the mediterranean diet still exist? Aims: To assess the food consumption patterns in the adult population of a Mediterranean region: the Balearic Islands (BI). Methods: A cross-sectional nutritional survey was carried out in the BI between 1999 and 2000. A random sample (n = 1,200) of the adult population (16-65 years) was interviewed. Dietary habits were assessed by means of 24-hour recall during 2 non-consecutive days, the first in the warm season and the second in the cold season, and a quantitative food frequency questionnaire. Anthropometric measurements were also obtained. Results: Current food patterns in the BI are characterized by a prominent consumption of dairy products, vegetables, fruit, bread, soft drinks and meat. According to the dietary guidelines for the Spanish population, only appropriate consumption of olive oil and other fats, dairy products, nuts, and soft drinks have been achieved. Intakes of fruit, vegetables, fish, eggs, pulses, cereals and potatoes are below desirable levels. On the other side, intakes of sugary products, sweets and cakes are higher than desirable. Alcohol intake, in the context of the Mediterranean diet, can be considered as acceptable. Conclusions: The dietary patterns observed among the BI population are in harmony with the actual dietary trends in Spain and other Mediterranean countries. Mediterranean dietary habits still exist, but a progressive departure from the traditional Mediterranean diet is being observed mainly in younger generations. It would be desirable to increase the consumption of fish, vegetables, fruit, cereals and pulses and decrease the intake of food sources containing saturated fat. abstract_id: PUBMED:27838057 Mediterranean countries' food consumption and sourcing patterns:An Ecological Footprint viewpoint. Securing food for growing populations while minimizing environmental externalities is becoming a key topic in the current sustainability debate. This is particularly true in the Mediterranean region, which is characterized by scarce natural resources and increasing climate-related impacts. This paper focuses on the pressure Mediterranean people place on the Earth ecosystems because of their food consumption and sourcing patterns and then explores ways in which such pressure can be reduced. To do so, it uses an Ecological-Footprint-Extended Multi-Regional Input-Output (EF-MRIO) approach applied to 15 Mediterranean countries. Results indicate that food consumption is a substantial driver of the region's ecological deficit, whereby demand for renewable resources and ecosystems services outpaces the capacity of its ecosystems to provide them. Portugal, Malta and Greece are found to have the highest per capita food Footprints (1.50, 1.25 and 1.22 global hectares (gha), respectively), while Slovenia, Egypt and Israel have the lowest (0.63, 0.64 and 0.79gha, respectively). With the exception of France, all Mediterranean countries rely on the biocapacity of foreign countries to satisfy their residents' demand for food. By analyzing the effect of shifting to a calorie-adequate diet or changing dietary patterns, we finally point out that the region's Ecological Footprint - and therefore its ecological deficit - could be reduced by 8% to 10%. 
abstract_id: PUBMED:38474763 Adherence to the Mediterranean Food Pattern and Carbon Footprint of Food Intake by Employees of a University Setting in Portugal. Background: The Mediterranean diet is characterized by the predominance of the consumption of fruit, vegetables, cereals, nuts, and pulses; consumption of olive oil as the main source of fat; regular but moderate consumption of wine; moderate consumption of fish, dairy products, eggs, and white meats; and low consumption of red meat as well as sugar and sugary products. In addition to the reported health benefits, the Mediterranean diet has also been widely recognized as a sustainable food pattern. The objective of this study was to understand the relationship between the degree of adherence to the Mediterranean diet of employees of the University of Porto and the relationship with the carbon footprint of their food consumption. Methods: An observational, analytical study was carried out, through the analysis of food consumption data collected in the form of a survey in the previous 24 h inserted in the eAT24 software, as well as the analysis of sociodemographic, lifestyle, and health data collected in the questionnaire. The carbon footprint was calculated from the previous 24 h surveys using data available on a website, obtained in carbon dioxide equivalent per kilogram of food. Sociodemographic, eating habit, and lifestyle questions were selected to understand the relationship between these and adherence to the Mediterranean diet and the carbon footprint of food consumption. Results: In total, 13.2% of the participants had a Mediterranean diet score equal to or greater than six, and the sample had an average food intake carbon footprint of 8146 ± 3081 CO2eq/Kg of food. A weak, statistically significant positive correlation (r = 0.142, p = 0.006) was observed between Mediterranean diet adherence and the carbon footprint of food intake. Conclusions: Most of the participants had a low adherence to the Mediterranean diet, as well as a high carbon footprint of food intake when compared to other countries. People with higher scores for Mediterranean diet adherence have, on average, a higher carbon footprint from food consumption intake. abstract_id: PUBMED:36231575 A Comparison of the Mediterranean Diet and Current Food Patterns in Italy: A Life Cycle Thinking Approach for a Sustainable Consumption. The transition toward more sustainable food systems, which already represents a central element of the European Farm to Fork and Green Deal strategies, could be an effective measure to contribute to global decarbonization and greenhouse gas (GHGs) reduction goals; concurrently, it could improve the health status and nutrition of the global population. In this context, the Mediterranean diet (MD) could play a considerable role, as it is generally recognized as a more balanced, healthy, and sustainable eating pattern than Western consumption patterns, which are characterized by excess food and high energy content, thus causing undesirable effects on both human health and the environment. Although traditionally linked to MD, Italy sees relatively moderate adherence by its citizens, as they consume about +75% of the daily caloric intake recommended by MD. Therefore, this study aims to quantitatively assess the potential environmental, economic, and health impacts of this lower adherence to MD by Italians.
Current Italian Food Patterns (CIFP) in 2019 were analyzed and compared to the MD recommended amounts through a Life Cycle Thinking (LCT) approach (LCA) and carbon footprint (CF) analysis. The results show that CIFP, compared to MD, has +133% greater impacts on the environmental macro-area, +100% greater impacts on the human health macro-area, and +59% greater impacts on the economic macro-area (with annual fossil and mineral resource savings of $53.35 per person, $3.2 billion per year). The analysis also shows that CIFP has a CF of 6.54 × 10¹ kg CO2 eq, +142% over MD (2.7 × 10¹ kg CO2 eq), resulting in a lower environmental impact of the Mediterranean diet. abstract_id: PUBMED:38217643 Alcohol consumption patterns and adherence to the Mediterranean diet in the adult population of Spain. Purpose: The objective is to evaluate the association between various indicators of alcohol consumption and the degree of adherence to the Mediterranean diet among the Spanish adult population. Methods: A cross-sectional study including 44,834 participants ≥ 15 years of age from the 2017 National Health Survey and the 2020 European Health Survey in Spain. Alcohol patterns were defined based on (1) average intake: individuals were classified as low risk (1-20 g/day in men and 1-10 g/day in women) and high risk (> 20 g/day in men or > 10 g/day in women), (2) binge drinking, and (3) alcoholic beverage preference. Non-adherence to the Mediterranean diet was defined as scoring < 7 points on an adapted Mediterranean Diet Adherence Screener index (range 0-10). Odds ratios (OR) were estimated using logistic regression models adjusted for relevant covariates. Results: Compared to non-drinkers, low and high-risk drinkers were more likely to report non-adherence to the Mediterranean diet: ORs 1.35 (95% CI 1.23; 1.49) and 1.54 (95% CI 1.34; 1.76), respectively. Similarly, reports of binge drinking less than once a month was associated with higher likelihood of non-adherence (OR 1.17; 95% CI 1.04; 1.31). Individuals reporting no preference for a specific beverage and those with a preference for beer or for spirits had lower adherence: ORs 1.18 (95% CI 1.05; 1.33), 1.31 (95% CI 1.17; 1.46), and 1.72 (95% CI 1.17; 2.54), respectively, while a preference for wine showed no association (OR 1.01; 95% CI 0.90; 1.13). Conclusion: Alcohol consumption, even in low amounts, is associated with lower adherence to the Mediterranean diet. Therefore, alcoholic beverages should not be included in measures that define the Mediterranean diet. abstract_id: PUBMED:28659225 Mediterranean food consumption patterns: low environmental impacts and significant health-nutrition benefits. The Mediterranean dietary patterns comply better with recommended nutrient and micronutrient intakes. The Mediterranean diet (MD) was associated with reduced mortality and lower risk for metabolic chronic diseases. It has also low ecological, carbon and water footprints due to its high share of plant-based foods. In fact, the share of plant-based dietary energy is higher in the Mediterranean than in Northern Europe. The Mediterranean hotspot is a major centre of plant and crop diversity. Mediterranean people gather and consume about 2300 plant species. This review paper aims at highlighting the nutrition-health benefits of the MD and analysing the main environmental impacts of the Mediterranean food consumption patterns.
There is a growing body of scientific evidence that the MD has significant health-nutrition benefits and low environmental footprints, so there is urgent need to reverse the ongoing erosion of the MD heritage and to promote it as a sustainable diets model. abstract_id: PUBMED:22166194 Food consumption and civil society: Mediterranean diet as a sustainable resource for the Mediterranean area. Objective: To define the Mediterranean diet model inside a Mediterranean social and cultural food framework and from the perspective of a local model of consumption. Design: Reflexion and review of literature available in relation to the Mediterranean diet, locality and proximity. Setting And Subjects: Mediterranean region and its populations. Results: The Mediterranean local food system under the term Mediterranean diet encourages local production and local consumption. From this perspective, this model takes part of every local Mediterranean lifestyles and encourages sustainability. Conclusions: From a local Mediterranean point of view and as a proximity model of consumption, Mediterranean food and diet can be a sustainable resource for the Mediterranean area. abstract_id: PUBMED:31122086 Mediterranean food pattern vs. Mediterranean diet: a necessary approach? "Food pattern" and "diet" sometimes are referred with the same meaning. "Food pattern" can be defined by a priori and a posteriori approaches and refers to the characterisation of foods and frequency of consumption within a population combining with various characteristics. "Diet" may be represented as an individual way of life and considers the food consumption and others like culture, history, society, economy, sustainability, culinary activities, conviviality, physical activity and rest. Mediterranean diet fits on these two concepts. The question is if we are measuring Mediterranean diet as a whole, or whether we are only measuring one of its parts, neglecting the rest of components. Can this compromise the way we monitor its adherence and evolution, with the possibility of losing the perception of certain aspects? How can we preserve and promote the concept if we do not monitor all the pieces of the puzzle? abstract_id: PUBMED:18838026 Food patterns and Mediterranean diet in western and eastern Mediterranean islands. Objective: To assess current Mediterranean dietary patterns (MDP) in the western and eastern Mediterranean, i.e. in Balearic islanders (BI) and Greek islanders (GI). Subjects And Methods: Dietary patterns were assessed using FFQ on a representative sample (n 1200) of the BI and GI (n 1324) adult population. A Mediterranean Diet Score (MDS) was calculated according to the consumption of nine MDP components: (i) with a beneficial effect on health, i.e. vegetables, fruits and nuts, cereals, legumes, fish and shellfish, MUFA:SFA ratio, and moderate alcohol consumption; and (ii) with a detrimental effect on health, i.e. meat and meat products, and milk and dairy products. Persons with consumption of beneficial components below the median (GI plus BI) received a value of 0 and those with consumption above the median a value of 1. Persons with below-median consumption of detrimental components received a value of 1 and above-median consumption a value of 0. For alcohol, a value of 1 was given to consumptions of 10-50 g/d (men) and 5-25 g/d (women). The range of the MDS was 0-9, with higher scores indicating greater adherence to the MDP. 
Results: GI showed higher adherence (mean MDS 5.12 (sd 1.42)) to the MDP than BI (mean MDS 3.32 (sd 1.23)). BI diet was characterized by a high intake of legumes, nuts, seed oils, sugar and confectionery, and non-alcoholic beverages compared with GI, whereas GI diet was richer in fruit, vegetables, potatoes, olive oil, animal products and alcoholic beverages. Conclusions: The GI diet seems closer to the traditional MDP than the BI diet. abstract_id: PUBMED:35614126 The positive climate impact of the Mediterranean diet and current divergence of Mediterranean countries towards less climate sustainable food consumption patterns. The Mediterranean diet (MD) is a world-renowned healthy dietary pattern. In the present study we analyse the climate sustainability of the MD and the greenhouse gas emissions (EGHG) associated with current dietary patterns in Mediterranean and non-Mediterranean EU countries, focusing on the major deviations from the MD health and environmental targets in Mediterranean countries. The EGHG associated with dietary patterns were calculated for seven Mediterranean countries (Cyprus, Croatia, Greece, Italy, Portugal, Spain and Malta, referred to as 7MED) and the other 21 countries in the EU 28 (referred to as 21OTHER), using 2017 as the reference year. A new harmonised compilation of 3449 carbon footprint values of food commodities, based on a standardised methodology to extract information and assign optimal footprint values and uncertainties to food items, was used to estimate EGHG of food consumption. Our findings show that the EGHG associated with the ideal MD pattern, 2.3 kg CO2 equivalents (CO2eq) capita⁻¹ d⁻¹, are in line with planetary GHG climate targets, though GHG emissions associated with food consumption in Mediterranean countries strongly diverged from the ideal MD. Both MED and 21OTHER countries were found to have comparable dietary associated EGHG (4.46 and 4.03 kg CO2eq capita⁻¹ d⁻¹ respectively), almost double that expected from a sustainable dietary pattern. The primary factor of dietary divergence in 7MED countries was found to be meat overconsumption, which contributed to 60% of the EGHG daily excess (1.8 kg of CO2eq capita⁻¹ d⁻¹). Answer: The Mediterranean diet, characterized by high consumption of fruits, vegetables, cereals, nuts, pulses, olive oil as the main source of fat, moderate consumption of fish, dairy products, eggs, and white meats, and low consumption of red meat and sugary products, is traditionally associated with numerous health benefits and low environmental impacts (PUBMED:28659225). However, recent studies indicate that there is a progressive departure from the traditional Mediterranean diet, particularly among younger generations in Mediterranean regions such as the Balearic Islands (BI). The current food patterns in BI are characterized by high consumption of dairy products, vegetables, fruit, bread, soft drinks, and meat, with intakes of fruit, vegetables, fish, eggs, pulses, cereals, and potatoes below desirable levels, and higher than desirable intakes of sugary products, sweets, and cakes (PUBMED:15256802). In the broader Mediterranean region, food consumption is a substantial driver of the area's ecological deficit, with countries like Portugal, Malta, and Greece having the highest per capita food footprints. The region relies heavily on the biocapacity of foreign countries to satisfy its residents' demand for food.
Shifting to a calorie-adequate diet or changing dietary patterns could reduce the region's ecological footprint by 8% to 10% (PUBMED:27838057). In Portugal, a study found that most participants had low adherence to the Mediterranean diet and a high carbon footprint of food intake. Interestingly, those with higher adherence to the Mediterranean diet had, on average, a higher carbon footprint from food consumption (PUBMED:38474763). Similarly, in Italy, current food patterns show significantly greater environmental, economic, and health impacts compared to the Mediterranean diet, with the carbon footprint of current Italian food patterns being 142% over the Mediterranean diet (PUBMED:36231575). In Spain, alcohol consumption, even in low amounts, is associated with lower adherence to the Mediterranean diet, suggesting that alcoholic beverages should not be included in measures that define the Mediterranean diet (PUBMED:38217643). The Mediterranean diet is also seen as a sustainable resource for the Mediterranean area, encouraging local production and consumption and contributing to sustainability (PUBMED:22166194). Overall, while the Mediterranean diet still exists, there is a clear trend towards less traditional and less sustainable food consumption patterns in Mediterranean countries, with a need to promote and preserve the traditional Mediterranean diet for its health, cultural, and environmental benefits (PUBMED:31122086; PUBMED:18838026; PUBMED:35614126).
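As an aside, the Mediterranean Diet Score (MDS) used in PUBMED:18838026 above is a simple additive 0-9 tally: each of six beneficial components scores 1 when intake is above the pooled median, each of two detrimental components scores 1 when intake is below the median, and moderate alcohol intake (10-50 g/d for men, 5-25 g/d for women) scores 1. Purely as an illustrative sketch of that scoring logic (the component names, function name and tie handling below are assumptions for illustration, not code from the study):

def mediterranean_diet_score(intake, medians, alcohol_g_per_day, sex):
    """Illustrative 0-9 MDS following the scheme described in PUBMED:18838026."""
    beneficial = ["vegetables", "fruits_nuts", "cereals", "legumes",
                  "fish_shellfish", "mufa_sfa_ratio"]   # score 1 if above the pooled median
    detrimental = ["meat_products", "dairy_products"]   # score 1 if below the pooled median
    score = sum(1 for item in beneficial if intake[item] > medians[item])
    score += sum(1 for item in detrimental if intake[item] < medians[item])
    low, high = (10, 50) if sex == "male" else (5, 25)  # moderate alcohol band, g/day
    score += 1 if low <= alcohol_g_per_day <= high else 0
    return score

Because every component is dichotomized at the median of the pooled sample, the score is sample-relative: identical absolute intakes can yield different scores in different study populations, which is worth keeping in mind when comparing adherence figures across cohorts.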
Instruction: Are photographs a suitable alternative to dental study casts when assessing primary surgical outcome in children born with unilateral cleft lip and palate? Abstracts: abstract_id: PUBMED:26283326 Are photographs a suitable alternative to dental study casts when assessing primary surgical outcome in children born with unilateral cleft lip and palate? Objective: To compare the use of the 5-year-olds' index on both dental study casts and intraoral photographs when measuring primary surgical outcome for children born with unilateral cleft lip and palate (UCLP). Design: A cross-sectional study. Setting: Models and photographs collected from cleft units across the UK as part of the CCUK (Cleft Care UK) study were scored by two clinicians at the School of Oral and Dental Sciences, Bristol, UK. Participants: Five-year-old children born with UCLP as part of the CCUK study. One hundred and ninety-eight had dental study casts available and 49 had intraoral photographs available. Methods: The records of both groups, that is study casts (n = 198) and photographs (n = 49) were scored using the 5-year-olds' index on two occasions by two examiners. Results: Reliability of scoring is reduced for intraoral photographs compared with dental study casts. Using weighted Kappa the inter-rater reliability for dental study casts was 0.72 to 0.77 and the inter-rater reliability for intraoral photographs was 0.52 to 0.59. Limitations: The photographs and study casts were not matched for each individual and were collected by a number of different clinicians in each unit, both of which will have had an effect on the quality and consistency of the final records. Conclusions: Dental study casts provide more reliable results and thus still represent the gold standard when assessing primary surgical outcome in cleft care using the 5-year-olds' index. abstract_id: PUBMED:15516169 Photographs of study casts: an alternative medium for rating dental arch relationships in unilateral cleft lip and palate. Objective: To investigate the reliability of using photographs of study casts as an alternative to casts for rating dental arch relationships. Design: Repeated-measures study. Setting: Cleft Palate Center of the University Medical Center Nijmegen, Nijmegen, The Netherlands. Patients: Records of children with complete unilateral cleft lip and palate (UCLP) (n = 49) at the age of 9 years were included. Mean Outcome Measure(s): Statistics of intra- and interexaminer agreement. Results: No significant differences were found between the rating of dental casts and photographs of dental casts, using the Goslon Yardstick. Conclusions: Photographs of dental casts provide a consistent, reproducible method for rating dental arch relationships in patients with UCLP at the age of 9 years and provide a reliable alternative to the application of the Goslon Yardstick on dental casts. abstract_id: PUBMED:22154575 Photographs of dental casts or digital models: rating dental arch relationships in bilateral cleft lip and palate. Yardsticks have been developed to measure dental arch relations in cleft lip and palate (CLP) patients as diagnostic proxies for the underlying skeletal relationship. Travelling with plaster casts to compare results between CLP centres is inefficient so the aim of this study was to investigate the reliability of using digital models or photographs of dental casts instead of plaster casts for rating dental arch relationships in children with complete bilateral cleft lip and palate (CBCLP). 
Dental casts of children with CBCLP (n=20) were included. Plaster casts, digital models and photographs of the plaster casts were available for all the children at 6, 9, and 12 years of age. All three record formats were scored using the bilateral cleft lip and palate (BCLP) yardstick by four observers in random order. No significant differences were found for the BCLP yardstick scores among the three formats. The interobserver weighted kappa scores were between 0.672 and 0.934. Comparison between the formats per observer resulted in weighted kappa scores between 0.692 and 0.885. It is concluded that digital models and photographs of dental casts can be used for rating dental arch relationships in patients with CBCLP. These formats are a reliable alternative for BCLP yardstick assessments on conventional plaster casts. abstract_id: PUBMED:19642768 Intraoral photographs for rating dental arch relationships in unilateral cleft lip and palate. Background And Purpose: The Goslon Yardstick is one of the most commonly used methods to assess dental arch relationships of patients with unilateral cleft lip and palate. This system was originally applied to dental casts. For reasons of economy and convenience, we aimed to determine whether intraoral photographs could substitute for dental casts for rating dental arch relationships. Methods: Records of 58 patients with nonsyndromic complete unilateral cleft lip and palate from the Chang Gung Craniofacial Center, Taipei, Taiwan, were used in this study. A set of dental casts and digital intraoral photographs taken at around 9 years of age were available for all patients. An experienced examiner rated the dental casts using the Goslon Yardstick to provide the reference scores. The other three examiners rated the intraoral photographs and repeated the rating 1 week later to calculate inter- and intraexaminer reliability. The photographic scores for each examiner were then compared with the reference scores to determine the validity of the photographs. Results: The results showed no significant difference between the rating of dental casts and photographs using the Goslon Yardstick. Reliability was also high for rating on photographs. Conclusions: Intraoral photographs appear to be a viable alternative to the application of the Goslon Yardstick on dental casts. abstract_id: PUBMED:12019006 Palate height: another indicator of surgical outcome in unilateral cleft lip and palate? Objective: To determine whether palate height and maxillary arch depth are systematically related to the surgical center at which primary repair in unilateral cleft lip and palate (UCLP) was carried out. Design: A retrospective comparison based on study casts of consecutive cases of UCLP obtained at age 9 years from six different centers. The observer who conducted measurements was blinded to the source of individual records. Setting: The patients whose records were analyzed received all their surgical care in a national health service setting in six different northern European centers and regions. Patients: Patients were consecutively treated Caucasian children with non-syndromic complete UCLP born in the period 1976 to 1979. Main Outcome Measures: The main outcome measures for the original study were craniofacial form, dental arch relationships, nasolabial appearance, and speech. This report focuses on measurements of anterior maxillary arch depth and palate height. Results: Anterior arch depth and anterior palate height showed some variation among the centers. 
There was a tendency for anterior arch depth and palate height to also be reduced at centers at which patients showed unfavorable dental arch relationships. Conclusions: Anterior arch depth and palate height might be considered in future studies of surgical outcome and in their possible relationship to problems of articulation. abstract_id: PUBMED:21837387 Predictors of poor dental arch relationship in young children with unilateral cleft lip and palate. The aim of this cross-sectional outcome study using retrospective data capture of treatment histories was to examine the characteristics of young children with unilateral cleft lip and palate who had poor dental arch relationship (i.e., Goslon 5). The study sample comprised 120 children born with nonsyndromic complete unilateral cleft lip and palate between 1995 and 2003, and were aged between 5.0 and 7.0 years (mean age, 5.1 years) at the time of data collection. The dental arch relationship was assessed using the Goslon yardstick from intraoral dental photographs. An independent investigator recorded treatment histories from the clinical notes. The inter- and intraexaminer agreements evaluated by weighted kappa statistics were high. There was no association between dental arch relationship and the type of presurgical orthopedics or pharyngeal flap. Dental arch relationship was associated with the initial cleft size (odds ratio, OR = 1.3; 95% confidence interval, CI = 1.1-1.5, p &lt; 0.01), surgeon grade for palate repair (OR = 5.0, 95% CI = 1.2-19.9, p &lt; 0.05), and primary gingivoperiosteoplasty (OR = 2.8, 95% CI = 1.0-8.1, p = 0.05). These data suggest that intraoral dental photographs provide a reliable method for rating dental arch relationship. Wide initial cleft, high-volume surgeon, and primary gingivoperiosteoplasty are predictors of poor dental arch relationship outcome in young children with unilateral cleft lip and palate. These findings may improve treatment outcome by modifying the treatment protocol for patients with unilateral cleft lip and palate. abstract_id: PUBMED:21846258 Palate dimensions in six-year-old children with unilateral cleft lip and palate: a six-center study on dental casts. Objective: To compare palatal dimensions in 6-year-old children with unilateral cleft lip and palate (UCLP) treated by different protocols with those of noncleft children. Design: Retrospective intercenter outcome study. Patients : Upper dental casts from 129 children with repaired UCLP and 30 controls were analyzed by the trigonometric method. Setting: Six European cleft centers. Main outcome measures : Sagittal, transverse, and vertical dimensions of the palate were observed. Statistics: Palate variables were analyzed with descriptive methods and nonparametric tests. Regarding several various characteristics measured on a relatively small number of subjects, hierarchical, k-means clustering, and principal component analyses were used. Results: Mean values of the observed dimensions for five cleft groups differed significantly from the control (p &lt; .05). The group with one-stage closure of the cleft differed significantly from all other cleft groups in most variables (p &lt; .05). Principal component analysis of all 159 cases identified three clusters with specific morphologic characteristics of the palate. A similar number of treated children were classified into each cluster, while all children without clefts were classified in the same cluster. 
The percentage of treated children from a particular group that fit this cluster ranged from 0% to 70% and increased with age at palatal closure and number of primary surgical procedures. Conclusion: At 6 years of age, children with stepwise repair and hard palate closure after the age of two more frequently result in palatal dimensions of noncleft control than children with earlier palatal closure and one-stage cleft repair. abstract_id: PUBMED:12560679 Risk factors for poor dental arch relationships in young children born with unilateral cleft lip and palate. To identify risk factors for poor dental arch relationships in children with unilateral cleft lip and palate in the United Kingdom, the authors performed a cross-sectional outcome study with retrospective data capture of treatment histories in children under the care of 44 cleft teams in the United Kingdom. The study sample comprised 238 children born with nonsyndromic complete unilateral cleft lip and palate between April 1, 1989, and March 31, 1991, who were between 5.0 and 7.7 years of age (mean age, 6.5 years) at the time of data collection. The Five-Year-Old Index was used to rank dental arch relationships from dental study models. Velopharyngeal insufficiency was assessed with the use of the Cleft Audit Protocol for Speech. An independent panel recorded surgical treatment histories from the clinical notes. There was no association between the technique and the timing of primary repair, the experience of the surgeon, or presurgical orthopedics and dental arch relationships. Secondary velopharyngeal surgery was independently associated with poor outcome (OR, 4.14; 95 percent CI, 1.6 to 10.7; p = 0.003). Primary nasal repair was protective (OR, 0.47; 95 percent CI, 0.23 to 0.93; p = 0.031) against poor dental arch relationships. Secondary velopharyngeal surgery and primary nasal repair were found to be independently associated with dental arch relationship outcomes in young children with unilateral cleft lip and palate in the United Kingdom. abstract_id: PUBMED:17113439 Speech outcome after cranial-based pharyngeal flap in children born with total cleft, cleft palate, or primary velopharyngeal insufficiency. Purpose: The aim of this study was to compare the effect of a cranial-based pharyngeal flap on the speech of children born with a unilateral cleft lip and palate (UCLP), bilateral cleft lip and palate (BCLP), cleft palate (CP), or primary velopharyngeal insufficiency (VPI) without cleft. Patients And Methods: A total of 234 children born with clefts and 22 children born with primary VPI were evaluated. Children with associated abnormalities were excluded from this study. The Borel-Maisonny classification system was used to evaluate the velar insufficiency. The cranial-based pharyngeal flap was performed using the Sanvenero-Rosselli technique. Results: Between 1984 and 2001, 74 children underwent pharyngeal flap for VPI. The mean follow-up period was 7 years. Borel-Maisonny scores after pharyngeal flap surgery were as follows: children with UCLP (n = 22), 59.1% type 1, 36.4% type 1/2, and 4.5% type 2; children with BCLP (n = 18), 44.4% type 1, 27.8% type 1/2, 16.7% type 2, and 11.1% type 2/3; children with CP (n = 17), 64.7% type 1, 23.5% type 1/2, and 11.8% type 2; children with primary VPI (n = 17), 29.4% type 1, 29.4% type 1/2, 29.4% type 2/3, and 11.8% type 3. There were significant differences in outcome among the 4 groups (P = .029; Fisher exact test). 
Conclusions: The positive effect on speech of a cranial-based pharyngeal flap is greater in children born with a UCLP or CP than in those born with a BCLP. In children born with primary VPI, this operation has only a slightly positive effect on speech that shows compensatory misarticulations; in such cases, alternative surgical choices or secondary procedures may be indicated. This information should be clearly conveyed to the parents in presurgical consultation so that they know what to expect from the procedure and postoperative adjuvant therapy. abstract_id: PUBMED:20815714 The use of the 5-year-olds' index: comparison between study models and intraoral photographs. Objective: To determine if photographs may be used instead of dental study models to assess the outcome of primary surgery in 5- to 6-year-olds who were born with complete unilateral cleft lip and palate. Design: Retrospective method comparison study. Setting: Five U.K. cleft units. Subjects: Records of 96 children born with complete unilateral cleft lip and palate. Method: Dental study models and photographs including an overjet measurement taken from both the most prominent and retrusive tooth in each case were scored using the 5-year-olds' index. Results: There was moderate to very good agreement between the methods of using study models or photographs. Intraexaminer agreement was moderate to very good for study models. For the photographs it was good to very good. Overall interexaminer agreement was moderate for both study models and photographs. Conclusions: When impressions are difficult to obtain, intraoral photographs with clinically recorded overjet measurements may be used for the 5-year-olds' index. Answer: The use of photographs as an alternative to dental study casts for assessing primary surgical outcomes in children born with unilateral cleft lip and palate (UCLP) has been explored in several studies with varying conclusions. One study found that the reliability of scoring is reduced for intraoral photographs compared with dental study casts, with inter-rater reliability for dental study casts being higher (0.72 to 0.77) than for intraoral photographs (0.52 to 0.59) (PUBMED:26283326). This suggests that dental study casts provide more reliable results and are considered the gold standard when assessing primary surgical outcome in cleft care using the 5-year-olds' index. However, other studies have reported that photographs of dental casts provide a consistent, reproducible method for rating dental arch relationships in patients with UCLP at the age of 9 years and provide a reliable alternative to the application of the Goslon Yardstick on dental casts (PUBMED:15516169). Similarly, digital models and photographs of dental casts have been found to be reliable alternatives for BCLP yardstick assessments on conventional plaster casts (PUBMED:22154575). Another study concluded that intraoral photographs appear to be a viable alternative to the application of the Goslon Yardstick on dental casts, with high reliability for rating on photographs (PUBMED:19642768). Additionally, a study that compared the use of the 5-year-olds' index between study models and intraoral photographs found moderate to very good agreement between the methods, suggesting that photographs with clinically recorded overjet measurements may be used for the 5-year-olds' index when impressions are difficult to obtain (PUBMED:20815714). 
In summary, while dental study casts are considered the gold standard and provide more reliable results for assessing primary surgical outcomes in children with UCLP, several studies have demonstrated that photographs, particularly of dental casts, can be a suitable and reliable alternative in certain circumstances. The choice between the two methods may depend on the specific context, such as the age of the child, the quality of the photographs, and the feasibility of obtaining dental impressions.
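As an aside on the agreement statistics quoted throughout this answer: the weighted kappa values (e.g., 0.72 to 0.77 for casts versus 0.52 to 0.59 for photographs) weight disagreements between two raters' ordinal index scores by how far apart the scores are. A minimal sketch of how such a value can be computed, assuming scikit-learn is available (none of the cited studies state the software they used, and the rater scores below are hypothetical placeholders, not study data):

from sklearn.metrics import cohen_kappa_score

# Two raters' ordinal index scores (e.g., 5-year-olds' index categories 1-5)
# for the same set of records; values are hypothetical, for illustration only.
rater_a = [1, 2, 2, 3, 4, 5, 3, 2]
rater_b = [1, 2, 3, 3, 4, 4, 3, 2]

kappa = cohen_kappa_score(rater_a, rater_b, weights="linear")
print(f"Linearly weighted kappa: {kappa:.2f}")

Linear and quadratic weighting penalize large disagreements differently, so reported kappa values are only directly comparable when the same weighting scheme was used.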
Instruction: Management of urachal remnants in children: Is surgical excision mandatory? Abstracts: abstract_id: PUBMED:26094100 Management of urachal remnants in children: Is surgical excision mandatory? Introduction: The classical management of urachal remnants consists of surgical resection, in order to prevent infections and long-term malignancies. However, some reports have recently advocated a wait-and-see management. The aim of our study was to report the results of the surgical management in our center. Material And Methods: We conducted a retrospective, monocentric review of all patients managed for urachal remnants from January 2005 to December 2014. Results: Thirty-five patients were operated on during the study period (18 girls and 17 boys). Mean age at surgery was 4.9 ± 4.4 years. Twenty-seven patients were referred due to symptoms whereas 8 were discovered incidentally (4 by ultrasound scan and 4 during laparoscopy). Among them, 10 were urachal cysts, 15 were urachal sinuses and 10 were patent urachus. Thirty were operated using an open approach and 5 using a laparoscopic approach. Mean length of stay was 3.8 ± 1.7 days (range 1-10) with a mean duration of bladder drainage of 2.5 ± 1 days. No major complications occurred. No abnormal tissue was discovered on histological analysis. Conclusion: Presentation of urachal remnants is variable but surgical outcomes remain excellent in our experience. When symptoms occur, the surgical decision is easy, but when the diagnosis is incidental, the decision is much more complicated. Official guidelines could ease the decision process and the management of urachal anomalies. abstract_id: PUBMED:30333439 THE OPTIMAL TROCAR PLACEMENT FOR LAPAROSCOPIC EXCISION OF URACHAL REMNANTS (Objective) Although laparoscopic excision of urachal remnants has been applied widely, the standard surgical method has yet to be established. The aim of this study, therefore, was to evaluate perioperative outcome and surgical techniques, including the trocar placement at our institution. (Subjects and methods) A total of 20 patients with urachal remnants were treated using laparoscopic surgery between October 2002 and August 2016. Three trocars were inserted in the peritoneal cavity. The camera trocar was placed at the right side of the umbilicus and two working trocars were placed at the upper side of the umbilicus and the right lower quadrant. A 5 mm, 30° angled lens camera was used during operation. (Results) The mean age was 27 years. There were 16 males and 4 females. The classifications of urachal remnants were urachal sinus (n=18) and urachal diverticulum (n=2). Umbilical excision was performed in 18 cases and partial cystectomy was required in 4 cases. Four trocars were inserted in only one case due to severe adhesion in the peritoneal cavity. Laparoscopic excision was successfully completed in all cases. The mean operative time was 220 minutes and the mean blood loss was minimal. Complications occurred in 5 cases including small intestinal injury (n=2), peritonitis (n=1), wound infection (n=1), intestinal obstruction (n=1) and acute renal failure (n=1) (intestinal obstruction and acute renal failure occurred in the same case). (Conclusion) Although issues remain, such as the need to shorten operative time and reduce complications, our unique port placement enables a good surgical view and easy suturing during partial cystectomy. abstract_id: PUBMED:30959359 Surgical management for an infected urachal cyst in an adult: Case report and literature review.
Introduction: Incomplete urachal obliteration during fetal development gives rise to distinct malformations of the median umbilical ligament. Most anomalies are asymptomatic and resolve during early infancy, but some go unrecognized until adulthood. These rare cases can present with acute abdominal symptomatology secondary to infected urachal remnants. Presentation Of Case: A 20-year-old man presented with periumbilical pain. Physical exam showed a warm, erythematous infra-umbilical mass that was tender to palpation. CT revealed an infected urachal cyst. The patient underwent urachal abscess incision and drainage with cyst excision. The patient returned home on postoperative day two. Two-week outpatient follow-up confirmed an uncomplicated recovery. Discussion: Surgical excision of urachal anomalies in adults is curative and preventive against recurrent infection and malignancy, but preoperative management is currently unstandardized. Current literature recommends a two-stage approach characterized by preoperative antibiotics and cyst incision and drainage followed later by complete surgical excision. Our patient underwent a single therapeutic approach with preoperative antibiotics and definitive operative excision. Our patient had a comparable outcome with no complications and a short hospitalization. Conclusion: Preoperative antibiotics with excision of urachal remnants was effective in our case. Staged approaches are appropriate in complicated cases as a means to reduce post-operative infection rates and hospital stays. Urachal remnant excision is recommended in adults to prevent future malignancy. abstract_id: PUBMED:24094971 Current diagnosis and management of urachal remnants. Purpose: The purposes of this study are to review our experience with patients who were found to have urachal remnants. We discuss their diagnosis and management and we also evaluate post-operative complications in our cohort of patients with this entity. Methods: We performed a retrospective review of all patients diagnosed with a urachal remnant (UR) between January 2006 and December 2010. Patient variables included age, gender, presenting symptoms and signs, diagnostic modality, and type of urachal remnant. Operative management, post-operative complications, and non-operative follow-up were abstracted. Results: We identified 103 pediatric patients with either a urachal cyst (n = 38), urachal diverticulum (n = 13), urachal sinus (n = 11), patent urachus (n = 21), or a non-specific atretic urachal remnant (n = 20). Seventy-eight patients had symptomatic URs. Thirty-four symptomatic patients underwent surgical excision of the UR and 44 symptomatic patients were observed. Eighteen URs were incidentally diagnosed in asymptomatic patients, none of whom underwent surgical excision. In 7 patients symptoms could not be determined from the records. No patients with non-specific atretic remnants were operated upon. Nineteen of the observed patients were reimaged subsequently. In fifteen (78.9%) of these patients, the URs had resolved spontaneously. None of the patients who were observed required subsequent excision of the UR. Post-operative complications occurred in 14.7% of patients in whom URs were excised. Conclusion: Review of our patients diagnosed with URs indicates there is a subset of patients in whom spontaneous resolution could be expected. Complications are not uncommon after surgical excision. abstract_id: PUBMED:18721938 Management of urachal remnants in early childhood. 
Purpose: Generally, it is recommended that all urachal remnants be excised to avoid recurrent disease and possible malignant transformation later in life. However, spontaneous resolution with no need for further intervention has been reported. We reviewed the experience with urachal remnants at a single institution and evaluated which patients could be treated nonoperatively and which required surgical intervention. Materials And Methods: We reviewed the medical records and radiographic studies of all patients with urachal remnants from January 1999 to January 2007. Patients were analyzed according to initial presentation, imaging findings and treatment. Serial clinical examinations and radiographic imaging were used to follow patients. Results: An external urachal sinus was found in 9 patients (39.1%) and 1 was surgically excised. Of the 12 urachal cysts (52.2%) 9 ultimately required surgical excision, of which 6 were infected initially. A patent urachus was found in 2 patients, which resolved during an observation period. Radiographic imaging and/or physical examination diagnosed all remnants initially and confirmed complete resolution during followup. Eight of the 10 urachal remnants (80.0%) that resolved developed in patients younger than 6 months. Various accompanying urogenital anomalies were found in 8 patients (34.8%). Conclusions: A small urachal remnant, especially at birth, may be viewed as physiological. Urachal remnants in patients younger than 6 months are likely to resolve with nonoperative management. However, if symptoms persist or the urachal remnant fails to resolve after 6 months of age, it should be excised to prevent recurrent infections. abstract_id: PUBMED:23678477 Laparoscopic management of complicated urachal remnants. Managing persistent and symptomatic urachal anomalies requires wide surgical excision of all anomalous tissue with a cuff of bladder tissue via the open approach. We report 7 cases with complete laparoscopic removal of symptomatic urachal remnants with or without a cuff of bladder tissue. We expected that this technique would be less invasive and have lower morbidity. We report on the feasibility of this approach, including efficacy and outcomes. Eight patients with a mean age of 36.5 years who had symptomatic urachal diseases underwent laparoscopic excision between July 2004 and July 2012. With the use of four ports, the urachal remnant was dissected transperitoneally and then removed via the umbilicus port. The clinical results of laparoscopic urachal remnant excision as a minimally invasive surgery, the perioperative records, and pathologic results were evaluated. There were no intraoperative or postoperative complications. Mean surgery time was 2.7 hours. Mean hospital stay was 14.6 days. The patients with bladder cuff resection had a long admission and Foley catheterization period (mean, 14.4 and 11 days). Pathological evaluations were 6 cases of infected urachal cysts, 1 case of infected urachal sinus, and 1 case of urachal adenocarcinoma. We found no postoperative complications including any symptom recurrence or voiding difficulty during a mean follow-up of 46.3 months. The perioperative surgical outcomes achieved infection control and symptomatic relief and additionally good cosmesis. Complete laparoscopic removal of symptomatic urachal remnants with or without a cuff of bladder tissue seems to be a safe, effective, and better cosmetic alternative with the advantages of a minimally invasive approach. 
abstract_id: PUBMED:12720197 Surgical implications of urachal remnants: Presentation and management. Background/purpose: The abdominal manifestations of urachal remnants often prompt referral to the pediatric general surgeon. The purpose of this study was to evaluate the authors' management of this anomaly. Methods: The authors performed a retrospective review of patients presenting to their institution with urachal remnants between 1984 and 2001. Clinical and radiographic details of presentation, management, and outcomes are described. Results: Twenty-six patients presented at a median age of 4 years (range, 2 days to 12 years), 16 were boys, and 18 required inpatient care. Eleven (42%) presented with infection, 7 (27%) with clear drainage, 3 (12%) with umbilical polyps/granulation, 3 (12%) with pain, one (4%) with recurrent urinary tract infections, and one (4%) with an asymptomatic punctum. One had an associated anomaly (hypospadias). Urinalysis and urine cultures did not correlate with infection. Ultrasound scan was diagnostic in greater than 90% of cases. Overall, 20 patients underwent primary cyst excision, and 6 underwent incision and drainage (I&amp;D) with delayed excision. Five patients underwent primary excision while infected, and 2 had postoperative complications (wound infection and urine leak). All 6 patients who underwent 2-stage procedure initially presented with infection, and none had complications. Conclusions: Persistent urachal remnants can present at any age with a variety of clinical manifestations. Ultrasound scan is a reliable diagnostic tool. Additional diagnostic studies generally are not warranted. Simple excision of noninfected lesions is appropriate. In cases of acute infection, initial I&amp;D with delayed cyst excision may be preferable to avoid unnecessary complications. abstract_id: PUBMED:32347613 Evaluation and management of urachal remnants in children. Background: This study aimed to evaluate the diagnosis and treatment of pediatric patients with urachal remnants. Methods: Medical records of patients diagnosed with urachal remnants between 2015-2019 were evaluated retrospectively. Age, gender, admission complaints, diagnosis, size of cysts, the complications that developed, and treatment modalities were recorded. Results: The study population consisted of 15 cases including seven girls. Most of the cases were asymptomatic. The urachal remnants were detected by ultrasonography. The most common symptom in symptomatic patients was abdominal pain and umbilical discharge. Three patients required surgery and most of them were conservatively followed up. Complication, infection, and bladder rupture were observed in one case. Conclusions: Follow-up of patients with urachal remnants can be performed conservatively. The type of complaint and the presence of clinical and radiological regression (if any) are important for deciding the treatment during the follow-up. However, conservative follow-up is more prominent today, as shown in this study. abstract_id: PUBMED:28251114 A rare case of primary urachal actinomycosis mimicking malignancy. Primary actinomycosis occurring in urachal remnants is rarely documented in literature and may mislead the clinicians to diagnose urachal carcinoma. A 50-year-old man came with complaints of lower abdominal pain, dysuria, and dribbling of urine for 2 months. A vague mass of 7 cm was palpable in the suprapubic region. 
Imaging of the abdomen revealed an irregular mass seen superior to fundus of the urinary bladder appearing adherent to mesentery and bowel loops, suggesting an urachal remnant associated lesion with infiltration. Provisional diagnosis of urachal carcinoma was considered. Excision of the mass with partial cystectomy and resection of involved ileal segments were done. Microscopic examination revealed actinomycotic colonies surrounded by microabscesses and dense inflammatory fibrotic lesion. Small intestinal segments showed a similar lesion in the serosa. abstract_id: PUBMED:30602341 Is excision necessary in the management of adult urachal remnants?: a 12-year experience at a single institution. Objectives: To determine the ideal management of uncomplicated adult urachal remnants and to confirm if conservative management with repeated imaging is acceptable. Patients And Methods: This is a retrospective review of clinical notes and imaging of 45 patients with a mean age of 54.6 years who had been diagnosed with urachal anomalies at a single urology tertiary centre from January 2005 to December 2016. Patients who underwent surgical intervention and findings from patients managed non-operatively, with a mean follow up of 31 months, were evaluated. Results: Thirty (66.7%) patients had incidental findings, while 15 (33.3%) were symptomatic. Eight underwent excision, one underwent an incision and drainage of abscess, while 34 of the 36 remaining patients elected to undergo surveillance with repeated ultrasound imaging. Two were lost to follow-up. Malignancy was confirmed in three patients. There was interval stability of the urachal remnant in all the patients in the non-operative cohort. The small sample size, limited follow-up, and retrospective nature of the study are recognised limitations. Conclusions: It was found that simple and asymptomatic lesions can be monitored with ultrasound, but effort must be made on initial diagnosis to ensure that malignancy is excluded using CT imaging and flexible cystoscopy where possible. Long-term follow-up of this cohort is required to assess the natural history of observed urachal anomalies. Answer: The management of urachal remnants in children is not uniformly agreed upon, and whether surgical excision is mandatory is a subject of debate. Some studies suggest that surgical excision is the classical management of urachal remnants to prevent infections and long-term malignancies (PUBMED:26094100). However, there is also a trend towards a "wait and see" approach, especially in cases where the diagnosis is incidental and the patient is asymptomatic (PUBMED:26094100). A retrospective review of pediatric patients with urachal remnants indicated that there is a subset of patients in whom spontaneous resolution could be expected, and complications are not uncommon after surgical excision (PUBMED:24094971). Another study found that urachal remnants in patients younger than 6 months are likely to resolve with nonoperative management, but if symptoms persist or the urachal remnant fails to resolve after 6 months of age, it should be excised to prevent recurrent infections (PUBMED:18721938). Laparoscopic excision of urachal remnants has been applied widely, and while it has been associated with some complications, it is considered a less invasive option with lower morbidity and good cosmetic outcomes (PUBMED:23678477). 
In cases of acute infection, initial incision and drainage with delayed cyst excision may be preferable to avoid unnecessary complications (PUBMED:12720197). In adults, surgical excision of urachal anomalies is curative and preventive against recurrent infection and malignancy, but preoperative management is currently unstandardized (PUBMED:30959359). A study on adult patients found that simple and asymptomatic lesions can be monitored with ultrasound, but malignancy must be excluded using CT imaging and flexible cystoscopy where possible (PUBMED:30602341). In conclusion, while surgical excision of urachal remnants in children has been the traditional approach, especially in symptomatic cases or when there is a risk of complications, there is growing evidence that conservative management with observation may be appropriate in selected cases, particularly in asymptomatic or young patients where spontaneous resolution is possible. Official guidelines could help standardize the decision process and management of urachal anomalies (PUBMED:26094100).
Instruction: Influence of the admission pattern on the outcome of patients admitted to a respiratory intensive care unit: does a step-down admission differ from a step-up one? Abstracts: abstract_id: PUBMED:23650433 Influence of the admission pattern on the outcome of patients admitted to a respiratory intensive care unit: does a step-down admission differ from a step-up one? Background: The outcomes of patients admitted to a respiratory ICU (RICU) have been evaluated in the past, but no study has considered the influence of location prior to RICU admission. Methods: We analyzed prospectively collected data from 326 consecutive patients admitted to a 7-bed RICU. The primary end points were survival and severity of morbidity-related complications, evaluated according to the patient's location prior to RICU admission. Three admission pathways were considered: step-down for patients transferred from the ICUs of our hospital; step-up for patients coming from our respiratory wards or other medical wards; and directly for patients coming from the emergency department. The secondary end point was the potential influence of several risk factors for morbidity and mortality. Results: Of the 326 subjects, 92 (28%) died. Overall, subjects admitted in a step-up process had a significantly higher mortality (P &lt; .001) than subjects in the other groups. The mortality rate was 64% for subjects admitted from respiratory ward, 43% for those from medical wards, and 18% for subjects from both ICU and emergency department (respiratory ward vs medical ward P = .04, respiratory ward vs emergency department P &lt; .001, respiratory ward vs ICU P &lt; .001, medical ward vs emergency department P &lt; .001, and medical ward vs ICU P &lt; .001). Subjects admitted from a respiratory ward had a lower albumin level, and Simplified Acute Physiology Score II was significantly higher in subjects following a step-up admission. About 30% of the subjects admitted from a respiratory ward received noninvasive ventilation as a "ceiling treatment." The highest odds ratios related to survival were subject location prior to RICU admission and female sex. Lack of use of noninvasive ventilation, younger age, female sex, higher albumin level, lower Simplified Acute Physiology Score II, higher Barthel score, and absence of chronic heart failure were also statistically associated with a lower risk of death. Conclusions: The pathway of admission to a RICU is a determinant of outcome. Patients following a step-up pattern are more likely to die. Other major determinants of survival are age, nutritional status and female sex. abstract_id: PUBMED:28515605 Influence of Admission Source on the Outcome of Patients in an Intensive Care Unit. Aim Of The Study: The admission in the Intensive Care Unit (ICU) occurs from various sources, and the outcome depends on a complex interplay of various factors. This observational study was undertaken to describe the epidemiology and compare the differences among patients admitted in a tertiary care ICU directly from the emergency room, wards, and ICUs of other hospitals. Materials And Methods: A retrospective study was conducted on 153 consecutive patients admitted from various sources in a tertiary care ICU between July 2014 and December 2015. The primary endpoint of the study was the influence of the admission source on ICU mortality. The secondary endpoints were the comparison of the duration of mechanical ventilation, length of ICU stay, and the ICU complication rates between the groups. 
Results: Out of the 153 patients enrolled, the mortality of patients admitted from the ICUs of other hospital were significantly higher than the patients admitted directly from the emergency room or wards/operating rooms (60.5% vs. 48.2% vs. 31.9%; P = 0.02). The incidence of ventilator-associated lung injury was lower in the patients admitted directly from the emergency room (23.4% vs. 50% vs. 50%; P = 0.03). Multivariate logistic regression analysis revealed higher age, increased disease severity, longer duration of mechanical ventilation, and longer ICU stay as independent predictors of mortality in the patients shifted from the ICUs of other hospitals. Conclusion: The study demonstrated a higher risk of ICU mortality among patients shifted from the ICUs of other hospitals and identified the independent predictors of mortality. abstract_id: PUBMED:27233431 Outcome of mechanically ventilated patients initially denied admission to an intensive care unit and subsequently admitted. The outcome of mechanically ventilated patients initially denied admission to an intensive care unit (ICU) and subsequently admitted is unclear. We compared outcomes of patients denied ICU admission and subsequently admitted, to those of patients admitted to the ICU and to patients refused ICU admission. The medical records of all the patients who were subjected to mechanical ventilation for at least 24h over a 4year period (2010-2014) were reviewed. Of 707 patients (757 admissions), 124 (18%) were initially denied ICU admission and subsequently admitted. Multivariate stepwise logistic regression analysis showed significant association with death of: age, length of stay, nursing home residency, duration of mechanical ventilation, previous admission with mechanical ventilation, cause for mechanical ventilation, rate of failed extubations, associated morbidity (previous cerebrovascular accident, dementia, chronic renal failure), and occurrence of nosocomial bacteremia. The odds for death among patients denied ICU admission and subsequently transferred to the ICU compared to patients admitted directly to the ICU was 3.6 (95% CI: 1.9-6.7) (P&lt;0.0001). The odds for death among patients refused ICU admission compared to those who were initially denied and subsequently admitted were not statistically significant (OR=1.7, 95% CI: 0.8-3.8). In conclusion, patients denied ICU admission and subsequently admitted face a considerable risk of morbidity and mortality. Their odds of death are nearly three times those admitted directly to the ICU. Late admission to the ICU does not appear to provide benefit compared to patients who remain in general medicine wards. abstract_id: PUBMED:30554695 Routine admission to step-down unit as an alternative to intensive care unit after pediatric supraglottoplasty. Objective: To evaluate the feasibility and the outcomes for step-down (SD) unit admission as an alternative to intensive care unit (ICU) admission after supraglottoplasty in the pediatric patient. Methods: A review of 98 patients who underwent supraglottoplasty from 2012 to 2017 at a tertiary referral pediatric hospital was performed. An SD unit had 1-to-3 nurse-to-patient ratio with noninvasive positive pressure ventilation capability. Data variables included demographics, comorbidities, preoperative and postoperative respiratory requirements, and length of stay. 
Results: Routine admission to SD occurred for 85% patients while 15% patients were selectively admitted to ICU due to intubation requirement or perioperative respiratory distress. In SD, noninvasive respiratory support was required for 28 (34%) patients. Three (4%) required re-intubation and ICU transfer without delay in care. Patients at high risk for requiring respiratory support after surgery have a neurologic condition (OR 7.0, 95% 2.4-20.2, p &lt; 0.01) or intrinsic pulmonary disease (OR 4.5, 95% CI 1.5-13.3, p &lt; 0.01). Median length of stay was shorter for patients in step-down (1 day, IQR 1-2). Conclusion: Patients can be managed safely in a SD unit after supraglottoplasty supporting de-escalation of care. Patients with neurologic and pulmonary comorbidities may have higher respiratory needs postoperatively. Prospective studies are warranted to further optimize resource allocation. abstract_id: PUBMED:14663388 Patients refused in admission to an intensive care unit. Prospective evaluation of the causes and outcome Objectives: To determine the reasons for refusing admission to an intensive care unit and the immediate outcome of the patients refused. Method: All patients triaged for admission to a medical intensive care unit ICU during a three-month period were studied prospectively. The frequency and reason for refused admission were studied. The immediate outcome of the refused patients, together with the time lapse before their subsequent management were determined. Results: Out of 251 patients, 132 (53%) were refused admission. Primary reasons for refusal was lack of beds (92%). Nearly one patient out of 5 was not subsequently treated in an intensive care unit. Conclusion: Some refusals could have been avoided if the downstream networks (short-hospitalisation, re-habilitation centres.) had been able to accommodate the patients proposed. Moreover, by abandoning the strategy of wide admission to intensive care to the benefit of a selective triage of the patients proposed, a certain number of refusals would probably have been avoided. Nevertheless, there are obvious difficulties in the application of the latter method. abstract_id: PUBMED:29181330 Evaluation of Admission Indications, Clinical Characteristics and Outcomes of Obstetric Patients Admitted to the Intensive Care Unit of a Teaching Hospital Center: A Five-Year Retrospective Review. Background: Care of obstetric patients has always been a challenge for critical care physicians, because in addition to their complex pregnancy-related disease, fetal viability is considered. Objectives: The aim of this study was to review the admission indications, clinical characteristics and outcomes of obstetric patients, admitted to the intensive care unit of Alzzahra teaching hospital affiliated to Guilan University of Medical Sciences, Rasht, Iran. Methods: This retrospective cohort study was conducted on pregnant /post-partum (up to 6 weeks) patients admitted to the ICU over a 5-year period from April 2009 to April, 2014. Results: Data from 1019 subjects were analyzed. Overall, 90.1% of the patients were admitted in the postpartum period. The most common indications for admission were pregnancy related hypertensive disorders (27.5%) and obstetric hemorrhage (13.5%). Epilepsy (5.4%) and cardiac disease (5.2%) were the most common non-obstetric indications. 
Conclusions: Pregnancy-related hypertensive disorders and obstetric hemorrhage were the main reasons for admission, and epilepsy and cardiac disease were the most common non-obstetric indications. Efforts must be concentrated on increasing antenatal care. abstract_id: PUBMED:33614253 Patterns of Admission and Clinical Outcomes Among Patients Admitted to Medical Intensive Care Unit of a Teaching and Referral Hospital, Northwest Ethiopia. Background: The intensive care unit (ICU) is a health care delivery service for patients who are in critical condition with potentially recoverable diseases. Patients can benefit from more detailed observation, monitoring and advanced treatment than in other wards or departments. The care is advancing, but in resource-limited settings it lags far behind and mortality remains high for various reasons. Therefore, we aimed to determine the admission patterns, clinical outcomes and associated factors among patients admitted to the medical intensive care unit (MICU). Methods: A retrospective cross-sectional study was conducted based on a record review of the logbook and charts of patients admitted from September, 2015 to April, 2019. Data were entered and analysed using SPSS version 20. Both bivariate and multivariate logistic regression analyses were used and a P-value < 0.05 was considered statistically significant. Results: A total of 738 patients were admitted to the medical intensive care unit (MICU) during September, 2015 - April, 2019. Five hundred and four patients (68%) of all intensive care unit (ICU) admissions had complete data. Out of the 504 patients, 268 (53.2%) were females. Cardiovascular disease (182; 36.1%) was the commonest categorical admission diagnosis. The overall mortality rate was 38.7%. In the multivariate analysis, mortality was associated with need for mechanical ventilation (AOR = 5.87, 95% CI: 3.24-10.65) and abnormal mental status at admission (AOR = 2.8, 95% CI: 1.83-4.29). Patients who stayed less than four days in the MICU were 5 times more likely to die than those who stayed longer (AOR = 5.58, 95% CI: 3.58-8.69). Conclusions: The overall mortality was considerably high and cardiovascular diseases were the most common cause of admission in the MICU. Need for mechanical ventilation, length of intensive care unit stay and mental status at admission were strongly associated with the clinical outcome of patients admitted to the medical intensive care unit. abstract_id: PUBMED:21255970 Admission hyperlactatemia: causes, incidence, and impact on outcome of patients admitted in a general medical intensive care unit. Purpose: The aim of this study was to evaluate the causes, incidence, and impact on outcome of admission hyperlactatemia in patients admitted to a general medical intensive care unit (ICU). Methods: A retrospective cohort study was done in an 8-bed general ICU of a tertiary care hospital over 15 months. Data regarding patient demographics, probable cause of hyperlactatemia, presence of shock, need for organ support, and ICU outcome were recorded. Patients were divided into 2 groups based on admission lactate levels as follows: high lactate (>2 mmol/L) and normal lactate (<2 mmol/L). Patients were compared in terms of need for organ support and ICU mortality. Results: Admission hyperlactatemia was present in 199 of 653 (30.47%) patients. Shock was the commonest cause (53.3% of patients), followed by respiratory and renal failure in 26 (13.1%) and 16 (8%) patients, respectively.
Mean ± SD lactate levels in survivors and nonsurvivors were 1.64 ± 1.56 and 4.77 ± 4.72 mmol/L, respectively (P = .000). Receiver operating characteristic curve for lactate was 0.803 (95% confidence interval [CI], 0.753-0.853). Sensitivity and specificity of lactate (&gt;2 mmol/L) to predict ICU mortality was 74.8% and 77.8%, respectively. Odds ratio for dying in patients with hyperlactatemia was 10.39 (95% CI, 6.378-16.925) with a relative risk of 1.538 (95% CI, 1.374-1.721). On subgroup analysis, in patients without hypotension too, ICU mortality was significantly increased in patients with hyperlactatemia (1.3% vs 6.45%, P = .009). Conclusions: Admission hyperlactatemia is common in a general ICU and is associated with increased mortality, irrespective of presence of hypotension. Shock was the commonest cause for hyperlactatemia, followed by respiratory and renal failures. abstract_id: PUBMED:23185988 Delayed intensive care unit admission is associated with increased mortality in patients with cancer with acute respiratory failure. Acute respiratory failure (ARF) is the leading reason for intensive care unit (ICU) admission in patients with cancer. The aim of this study was to identify early predictors of death in patients with cancer admitted to the ICU for ARF who were not intubated at admission. We conducted analysis of a prospective randomized controlled trial including 219 patients with cancer with ARF in which day-28 mortality was a secondary endpoint. Mortality at day 28 was 31.1%. By multivariate analysis, independent predictors of day-28 mortality were: age (odds ratio [OR] 1.30/10 years, 95% confidence interval [CI] [1.01-1.68], p = 0.04), more than one line of chemotherapy (OR 2.14, 95% CI [1.08-4.21], p = 0.03), time between respiratory symptoms onset and ICU admission &gt; 2 days (OR 2.50, 95% CI [1.25-5.02], p = 0.01), oxygen flow at admission (OR 1.07/L, 95% CI [1.00-1.14], p = 0.04) and extra-respiratory symptoms (OR 2.84, 95%CI [1.30-6.21], p = 0.01). After adjustment for the logistic organ dysfunction (LOD) score at admission, only time between respiratory symptoms onset and ICU admission &gt; 2 days and LOD score were independently associated with day-28 mortality. Determinants of death include both factors non-amenable to change, and delay in ARF management. These results suggest that early intensive care management of patients with cancer with ARF may translate to better survival. abstract_id: PUBMED:20455722 Indications and outcome for obstetric patients' admission to intensive care unit: a 7-year review. The objective of this retrospective study was to investigate the indications, interventions and clinical outcome of pregnant and newly delivered women admitted to the multidisciplinary intensive care unit at the King Abdullah University Hospital in Jordan over a 7-year period from January 2002 to December 2008. The collected data included demographic characteristics of the patients, mode of delivery, pre-existing medical conditions, reason for admission, specific intervention, length of stay and maternal outcome. A total of 43 women required admission to the intensive care unit (ICU), which represented 0.37% of all deliveries. The majority (95.3%) of patients were admitted to the ICU postpartum. The most common reasons for admissions were (pre)eclampsia (48.8%) and obstetric haemorrhage (37.2). The remainder included adult respiratory distress syndrome (6.9%), pulmonary embolism (2.3%) and neurological disorders (4.6%). 
Mechanical ventilation was required to support 18.6% of patients and transfusion of red blood cells was needed for 48.8% of patients. There were three maternal deaths (6.9%). A multidisciplinary team approach is essential to improve the management of hypertensive disorders and postpartum haemorrhage to achieve significant improvements in maternal outcome. A large, prospective study to know which women are at high risk of admission to the intensive care units and to prevent serious maternal morbidity and mortality is warranted. Answer: The influence of admission patterns on patient outcomes in a respiratory intensive care unit (RICU) is significant, with different admission pathways showing varying mortality rates. A study analyzing 326 consecutive patients admitted to a 7-bed RICU found that patients admitted through a step-up process, coming from respiratory wards or other medical wards, had a significantly higher mortality rate compared to those admitted directly from the emergency department or transferred from the ICUs of the same hospital (PUBMED:23650433). Specifically, the mortality rate was 64% for patients admitted from the respiratory ward, 43% for those from medical wards, and 18% for subjects from both ICU and emergency department. The study concluded that the pathway of admission to a RICU is a determinant of outcome, with step-up admissions being more likely to result in death. Other major determinants of survival included age, nutritional status, and female sex (PUBMED:23650433). This finding is consistent with another study that showed a higher risk of ICU mortality among patients shifted from the ICUs of other hospitals compared to those admitted directly from the emergency room or wards/operating rooms (PUBMED:28515605). Additionally, a study on mechanically ventilated patients initially denied ICU admission and subsequently admitted revealed that these patients faced a considerable risk of morbidity and mortality, with their odds of death being nearly three times those admitted directly to the ICU (PUBMED:27233431). In conclusion, the admission pattern to a RICU does influence patient outcomes, with step-up admissions associated with higher mortality rates compared to step-down or direct admissions from the emergency department. This suggests that the timing and source of admission to critical care can be crucial factors in patient prognosis.
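As a brief aside on how the diagnostic-test figures cited above for admission hyperlactatemia (sensitivity 74.8%, specificity 77.8%, odds ratio 10.39, relative risk 1.538 in PUBMED:21255970) are typically derived: they all come from a 2x2 cross-tabulation of lactate status against ICU outcome. A minimal illustrative sketch (the function and the counts below are hypothetical placeholders, not the study's data):

def two_by_two_summary(tp, fp, fn, tn):
    """tp: high lactate & died, fp: high lactate & survived,
    fn: normal lactate & died, tn: normal lactate & survived."""
    sensitivity = tp / (tp + fn)                      # deaths flagged by high lactate
    specificity = tn / (tn + fp)                      # survivors with normal lactate
    odds_ratio = (tp * tn) / (fp * fn)                # cross-product ratio
    risk_ratio = (tp / (tp + fp)) / (fn / (fn + tn))  # mortality risk, high vs normal lactate
    return {"sensitivity": sensitivity, "specificity": specificity,
            "odds_ratio": odds_ratio, "risk_ratio": risk_ratio}

print(two_by_two_summary(tp=30, fp=70, fn=10, tn=90))  # hypothetical counts

The step-up versus step-down mortality differences summarized above rest on study-level comparisons and, in several of the cited studies, on multivariable adjustment, rather than on such crude ratios alone.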
Instruction: Is the second-born twin at high risk? Abstracts: abstract_id: PUBMED:38214719 Second-versus first-born twin: comparison of short- and long-term outcomes. Purpose: An ancient description of the competition between twins for first breath is found in the biblical story of Jacob and Esau (Genesis 25:26) when Jacob pulled his older brother's heel in the hope of becoming firstborn but to no avail. In this study, we sought to evaluate the short- and long-term outcomes of twin pairs, comparing the second- and first-born twins. Methods: A population-based cohort study, including dichorionic twin deliveries occurring between the years 1991 and 2021 at Soroka University Medical Center. A generalized estimating equation (GEE) was applied to adjust for confounders. The incidence of offspring's hospitalizations due to various medical conditions was compared. Kaplan-Meier survival analyses compared cumulative morbidity. Cox proportional hazards models were used to control for confounders. Results: 5507 twin deliveries met the inclusion criteria. Second-born twins had higher rates of cesarean deliveries, statistically significant in the GEE multivariable analysis. More first-twin fetuses experienced non-reassuring fetal heart rate patterns, although other obstetrical outcomes as well as mortality rates were comparable between groups. Second twins weighed less than their older siblings (mean difference 33 g) and were more frequently SGA and low birthweight (1500-2500 g) (p < 0.05). Later during childhood, offspring of twin deliveries experienced notable morbidity due to infectious (23.8-24.1%), respiratory (10.5-10.9%), neurological (7.0-7.8%) and cardiovascular pathologies (1.7-1.9%), which was unaffected by birth order. Conclusion: Other than birthweight differences, the birth order of dichorionic twins is not associated with adverse neonatal health indices, nor does it predict excess risk for morbidity during childhood. abstract_id: PUBMED:9478326 Is the second-born twin at high risk? Objective: To compare the outcome of the second-born twin with that of the first twin and to find out whether there were any differences, the reason for such differences, if any, and how those differences might be reduced. Method: We retrospectively reviewed twin deliveries from the 15th of April, 1994 to the 14th of April, 1996. Excluded were twins weighing < 500 gm, either twin with a lethal malformation, and either twin who died before the onset of labour. After this exclusion 246 twin pairs remained in the study. We compared perinatal mortality and 5-minute Apgar scores for both twins. Results: Perinatal mortality was similar for both twins, as were 5-minute Apgar scores. The twins < 1500 gm appeared to be at special risk. The mode of delivery had no influence on the perinatal outcome of either twin. Conclusions: The second-born twin may not be at increased risk of complications compared with the first-born twin, and caesarean delivery may not improve this outcome. abstract_id: PUBMED:15512030 Is the second-born twin at high risk? We set out to compare the outcome of the second-born twin with that of the first twin to determine if any differences could be improved. We retrospectively reviewed twin deliveries from 15 April 1994 to 14 April 1996. We excluded from analysis twins weighing < 500 g, where either twin had a lethal malformation, and where either twin was dead before the onset of labour. After these exclusions 246 twin pairs remained in the study.
We compared perinatal mortality and 5-minute Apgar scores for both twins. Perinatal mortality was similar for both twins, as were the 5-minute Apgar scores. Twins < 1500 g appear to be at special risk. The mode of delivery has no influence on the perinatal outcome of either twin. It is concluded that the second-born twin may not be at increased risk of complications compared with the first-born twin. Caesarean delivery will not improve this outcome. abstract_id: PUBMED:38347253 Risk factors for cesarean section of the second twin after vaginal delivery of the first twin. Purpose: To document risk factors for combined delivery, defined as unplanned cesarean section for the second twin after vaginal delivery of the first twin, particularly focusing on the delivery interval between twins. Methods: A multi-center cross-sectional study among a cohort of 5411 women with twin pregnancy experiencing vaginal delivery of the first twin in 2007-2016 at 191 tertiary referral hospitals in Japan was conducted. The primary outcome was the occurrence of combined delivery, and data were collected through the Japan Society of Obstetrics and Gynecology perinatal database. Risk factors for combined delivery were investigated using Poisson regression analysis. Results: Combined delivery occurred in 235 women (5.1%) and was significantly associated with delivery interval (P < 0.001). Multivariate analysis showed women with ≥ 25 kg/m2 pre-pregnancy body mass index (BMI) and with birthweight < 1500 g of the second twin had significantly higher risk for combined delivery than women with 18.5-25 kg/m2 pre-pregnancy BMI and with birthweight ≥ 2500 g of the second twin (adjusted risk ratio (aRR) 1.72, 95% confidence interval (CI) 1.15-2.57, and aRR 2.06, 95% CI 1.14-3.72, respectively). Breech and transverse presentation of the second twin were also risk factors for combined delivery compared with cephalic presentation (aRR 3.60, 95% CI 2.67-4.85, and aRR 9.94, 95% CI 6.50-15.0, respectively). Although the association of combined delivery with pre-pregnancy BMI was attenuated after adjustment by delivery interval, the association with birthweight of the second twin was strengthened. Conclusion: Delivery interval was significantly associated with combined delivery and mediated the association between combined delivery and some other risk factors. abstract_id: PUBMED:32896679 Long term outcome of second twins born to mothers who attempted vaginal delivery: a retrospective study. Objective: To evaluate mortality and long term neurodevelopmental outcomes of second twins born to mothers who attempted vaginal delivery. Study Design: Two hundred and twenty-seven eligible cases of second twins born to mothers who attempted vaginal delivery were identified retrospectively over a ten-year period. Information on adverse long term outcomes (a composite of mortality and neurodevelopmental disorders) was retrieved from their electronic medical records, and the risk factors were determined. Results: The median follow-up duration was 8 years (range 4-13 years). Adverse composite long term outcomes were observed in 6.6% (15/227). Gestation at delivery < 32 weeks (p = 0.001) and an inter-twin delivery interval of > 30 min (P = 0.000) were significantly associated with adverse long term outcomes of the second twin on multivariate analysis. Second twins in the combined vaginal-caesarean birth group had no significant increase in adverse outcomes compared to those in the vaginal-vaginal birth group.
Conclusion: Adverse long term outcomes were uncommon among second twins born to mothers who attempted vaginal delivery. Adverse outcomes were associated with prematurity and an inter-twin delivery interval of more than 30 min, but not with the actual mode of delivery. abstract_id: PUBMED:24431633 Management of nonvertex second twin. Objectives: The objective of the current study is to compare outcomes of twin pregnancies with attempted labor and active second-stage management with twin pregnancies delivered by planned cesarean delivery. Material And Methods: Two hundred and eighty-three patients with twin pregnancy meeting the inclusion criteria were reviewed. They were followed for success of ECV and/or IPV in the planned vaginal group and for abdominal mode of delivery. Fetal outcome was assessed by the APGAR score of both twins as well as NICU admission, if needed. Results: Out of 283 patients, 116 patients (40.9 %) had planned cesarean section, and 167 patients (59.01 %) had planned vaginal delivery. Out of 167 patients, 148 patients (88.6 %) had a vaginal delivery of both twins. ECV was successful in 36 patients (25.3 %), and IPV was successful in 102 (95.3 %). IPV failed in five patients (4.6 %), who therefore required emergency cesarean section. There was no significant difference in the rates of twin B having a 5-min Apgar score lower than 7 or an arterial cord pH below 7.20 in the two groups. Among the patients in the planned vaginal delivery group, the cesarean delivery rate was 8.3 %, of which the combined vaginal-cesarean delivery rate was 4.6 %. Conclusion: Active second-stage management is associated with neonatal outcomes similar to those with planned cesarean delivery and a low risk of combined vaginal-cesarean delivery. abstract_id: PUBMED:30668193 The burden to be second twin: a population-based study of 2686 twins (2124 dichorionic). Proposal of the concept of mobility. Objectives: To identify if there is a specific neonatal morbidity/mortality among second twins relative to first twins. Study design: A 17-year (2001-2017) population-based observational cohort of all twin newborns born in the South of Reunion island after 21 weeks. Results: Among 1062 dichorionic (DTP) and 281 monochorionic twin pregnancies (2686 newborns), twin 2 had a doubled risk of breech presentation and of a poor Apgar score at 1 min (≤6) in vaginal deliveries. Specific to dichorionic pregnancies, twin 2 was lighter by 50-60 g than twin 1, had higher rates of intrauterine growth retardation (IUGR) (OR 1.33, p = .007) and a doubled risk of congenital abnormalities (OR 2.1, p = .006). Conclusion: In dichorionic twin pregnancies, the reasons why second twins have a doubled prevalence of severe congenital abnormalities are not completely elucidated and deserve further research. (1) We propose that twin 2 presenting higher risks of being IUGR and much higher risks of severe malformations suggests that during pregnancy, the less mobile of the two twins is "relegated" to the back of the uterus. (2) For interventions in the delivery room, the most experienced neonatologist should systematically plan to manage the second twin, because twin 2 presents significantly more problems than twin 1. abstract_id: PUBMED:29776396 Delivery of the second twin: influence of presentation on neonatal outcome, a case controlled study. Background: Spontaneous vaginal twin delivery after the 32nd week of gestation is safe when the first twin presents cephalic.
The aim of this study is to identify obstetric factors influencing the condition of the second twin and to verify whether non-cephalic presentation and vaginal breech delivery of the second twin are safe. Methods: This is a retrospective case controlled cohort study of 717 uncomplicated twin deliveries ≥32 + 0 weeks of gestation from 2005 to 2014 in two tertiary perinatal centers. Obstetric parameters were evaluated in three groups with descriptive, univariate logistic regression analysis for perinatal outcome of second twins. Results: The three groups included twins delivered by elective cesarean section (ECS) (n = 277, 38.6%), by unplanned cesarean section (UPC) (n = 233, 32.5%) and vaginally (n = 207, 28.9%). Serious adverse fetal outcome was rare and we found no differences between the groups. Second twins after ECS had significantly better umbilical artery (UA) pH (p < 0.001) and better Apgar scores compared to UPC (p = 0.002). Variables for a fetal population "at risk" for adverse neonatal outcome after vaginal delivery (UA pH < 7.20, Apgar 5´ < 9) were associated with higher gestational age (p = 0.001), longer twin-twin interval (p = 0.05) and vacuum extraction of twin A (p = 0.04). Non-cephalic presentation of second twins was not associated (UA pH < 7.20 OR 1.97, CI 95% 0.93-4.22, p = 0.07, Apgar 5´ < 9 OR 1.63, CI 95% 0.70-3.77, p = 0.25, transfer to neonatal intermediate care unit p = 0.48). Twenty-one second twins (2.9%) were delivered by cesarean section following vaginal delivery of the first twin. Even though non-cephalic presentation was overrepresented in this subgroup, outcome variables were not significantly different compared to cephalic presentation. Conclusions: Even though elective cesarean means reduced stress for second twins, this seems not to be clinically relevant. Non-cephalic presentation of the second twin does not significantly influence the perinatal outcome of the second twin but might be a risk factor for vaginal-cesarean birth. abstract_id: PUBMED:8203422 The second-born twin: can we improve outcomes? Objective: Second-born twin outcome was compared with that of the first-twin birth mate to determine whether there were differences, the reason for the differences, if any, and the clinical decisions that could be made to eliminate or ameliorate these differences. Study Design: Twin pregnancies from 1989 through 1992 were retrospectively reviewed. Charts demonstrating both twins stillborn, either twin weighing < 500 gm, or either twin with a serious congenital anomaly were excluded from evaluation. After these exclusions 200 twin pairs remained for analysis. Each twin was compared directly with its birth mate. The first twin is postulated as having the best possible outcome for the pregnancy involved, and second-twin outcomes are compared with these. Results: The second-born twin was more likely to be intubated, have respiratory distress syndrome, need resuscitation, and have lower 5-minute Apgar scores. Second-born twins had more nursery complications. The < 1500 gm group appeared to be at special risk. In this group there were more second-born twin neonatal deaths and much higher rates of intubation and resuscitation. Nonvertex presentation in the second-born twin increased chances for resuscitation, intubation, respiratory distress syndrome, and nursery complications. These outcomes in the second-born were not affected by cesarean delivery. Conclusions: The second-born twin is at increased risk of untoward outcome compared with its birth mate.
Some of this risk can be attributed to birth weight. The risk posed by nonvertex presentation is small. Cesarean section delivery does not appear to eliminate the difference between first- and second-twin outcome. Prospective studies are needed. abstract_id: PUBMED:27821985 Perinatal Outcome of the Second Twin at a Tertiary Care Center in India. Purpose: This cross-sectional observational study was undertaken to assess the perinatal outcome of the second twin with respect to gestational age, presentation, mode of delivery, and birth weight. Methods: Seventy women with confirmed twin pregnancy were admitted and managed in a tertiary care teaching hospital in Kolkata, India from May 2008 to April 2009. All relevant data were recorded and analyzed statistically by simple proportions and the χ2 test. Results: Women with frequent antenatal visits had a more favorable perinatal outcome than those with fewer or no visits in this hospital (p < 0.001). Higher perinatal mortality was observed among preterm than term cases (p < 0.01), and among low birth weight than normal babies (p < 0.05). Second twins in vertex-vertex presentation encountered higher perinatal mortality compared to those in vertex-nonvertex and nonvertex-other presentations (p < 0.05). Perinatal outcome was less favorable when both twins were delivered vaginally than when both were delivered by cesarean or when cesarean followed vaginal delivery of the first twin (p < 0.01). Preterm labor was the most frequently observed maternal complication. Birth asphyxia and perinatal mortality were more common among second than among first twins. Conclusions: Gestational age, presentation, mode of delivery, and birth weight are the significant determinants of the perinatal outcome of the second twin. Women with frequent antenatal care show a favorable outcome. The second twin is at higher risk of perinatal morbidity and mortality than the first twin. Answer: The question of whether the second-born twin is at high risk compared to the first-born twin has been explored in various studies, with mixed results. Some studies suggest that the second-born twin may not be at increased risk of complications compared to the first-born twin. For instance, a study that retrospectively reviewed twin deliveries found that perinatal mortality and 5-minute Apgar scores were similar for both twins, and the mode of delivery had no influence on the perinatal outcome of either twin (PUBMED:9478326; PUBMED:15512030). This suggests that the second-born twin may not be at a higher risk of complications. Another study found that other than birthweight differences, the birth order of dichorionic twins is not associated with adverse neonatal health indices, nor does it predict excess risk for morbidity during childhood (PUBMED:38214719). This indicates that the second-born twin does not necessarily face a higher risk of health issues in the short or long term solely based on birth order. However, other research has identified specific risks associated with the second twin. For example, a study on dichorionic twins reported that second twins had a doubled risk of being in breech presentation and a poor Apgar score at 1 minute in vaginal deliveries, higher rates of intrauterine growth retardation, and a doubled risk of severe congenital abnormalities (PUBMED:30668193). This suggests that the second twin may face certain increased risks.
Another study found that adverse long-term outcomes were associated with prematurity and inter-twin delivery interval of more than 30 minutes, but not with the actual mode of delivery (PUBMED:32896679). This indicates that while the second twin may face certain risks, these are not necessarily related to being born second. In summary, while some studies suggest that the second-born twin is not at a higher risk of complications compared to the first-born twin, other research indicates that there may be specific risks associated with being the second twin. These risks can be related to factors such as birthweight, presentation, and the interval between deliveries, rather than the birth order itself.
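The twin cohort in PUBMED:38214719 compares cumulative childhood morbidity between first- and second-born twins using Kaplan-Meier survival analyses and Cox proportional hazards models. The sketch below shows, with invented data and the open-source lifelines package, what the Kaplan-Meier and log-rank part of such a comparison typically looks like; the durations, event flags and group sizes are hypothetical, and a full analysis would add a Cox model for confounders.

from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical follow-up times (years to first hospitalization or censoring) and event flags.
first_born_years = [2.0, 5.5, 8.1, 9.0, 7.7, 6.0, 4.2, 9.0]
first_born_event = [1, 0, 1, 0, 1, 0, 1, 0]
second_born_years = [1.5, 4.0, 8.5, 9.0, 3.3, 7.0, 2.8, 9.0]
second_born_event = [1, 1, 0, 0, 1, 0, 1, 0]

kmf = KaplanMeierFitter()
kmf.fit(first_born_years, first_born_event, label="first-born")
print(kmf.survival_function_.tail(1))   # event-free probability at end of follow-up

kmf.fit(second_born_years, second_born_event, label="second-born")
print(kmf.survival_function_.tail(1))

# Log-rank test for a difference between the two curves.
result = logrank_test(first_born_years, second_born_years,
                      event_observed_A=first_born_event, event_observed_B=second_born_event)
print(result.p_value)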
Instruction: Can a Boxer Engine Reduce Leg Injuries Among Motorcyclists? Abstracts: abstract_id: PUBMED:25793434 Can a Boxer Engine Reduce Leg Injuries Among Motorcyclists? Analysis of Injury Distributions in Crashes Involving Different Motorcycles Fitted with Antilock Brakes (ABS). Objectives: Several studies have shown that motorcycle antilock braking systems (ABS) reduce crashes and injuries. However, it has been suggested that the improved stability provided by ABS would make upright crashes more frequent, thus changing the injury distributions among motorcyclists and increasing the risk of leg injuries. The overall motorcycle design can vary across different categories and manufacturers. For instance, some motorcycles are equipped with boxer-twin engines; that is, with protruding cylinder heads. A previous study based on limited material has suggested that these could provide some leg protection; therefore, the aim of this research was to analyze injury distributions in crashes involving ABS-equipped motorcycles with boxer-twin engines compared to similar ABS-equipped motorcycles with other engine configurations. Methods: Swedish hospital and police records from 2003-2014 were used. Crashes involving ABS-equipped motorcycles with boxer-twin engines (n = 55) were compared with similar ABS-equipped motorcycles with other engine configurations (n = 127). The distributions of Abbreviated Injury Scale (AIS) 1+ and AIS 2+ injuries were compared. Each subject's injury scores were also converted to the risk for permanent medical impairment (RPMI), which shows the risk of different levels of permanent medical impairment given the severity and location of injuries. To compare injury severity, the mean RPMI 1+ and RPMI 10+ were analyzed for each body region and overall for each group of motorcyclists. Results: It was found that AIS 1+, AIS 2+, and PMI 1+ leg injuries were reduced by approximately 50% among riders with boxer engines. These results were statistically significant. The number of injuries to the upper body did not increase; the mean RPMI to the head and upper body were similar across the 2 groups, suggesting that the severity of injuries did not increase either. Indications were found suggesting that the overall mean RPMI 1+ was lower among riders with boxer engines, although this result was not statistically significant. The mean values of the overall RPMI 10+ were similar. Conclusions: Boxer-twin engines were not originally developed to improve motorcycle crashworthiness. However, the present article indicates that these engines can reduce leg injuries among riders of motorcycles fitted with ABS. Though it is recommended that future research should look deeper into this particular aspect, the present findings suggest that the concept of integrated leg protection is indeed feasible and that further engineering efforts in this area are likely to yield significant savings in health losses among motorcyclists. abstract_id: PUBMED:34727891 Comparison of physical and psychological health outcomes for motorcyclists and other road users after land transport crashes: an inception cohort study. Background: Serious injuries and fatalities among vulnerable road users on two-wheeled motorised vehicles have increased across Australia and internationally in the past decade, yet fallen for motor vehicle occupants. Almost half of all reported motorcycle injury crashes cause serious injury or death, nearly double that of motor vehicle police-reported crashes.
This study explores associations with sociodemographic and pre-injury health characteristics and health outcomes after a road traffic injury, aiming to compare motorcyclists with other road users and inform recovery care. Methods: An inception cohort study recruited 1854 individuals aged > 17 years, injured following land-transport crashes in New South Wales, Australia (July 2013-November 2016). Interviews conducted at baseline, 6- and 12-months post-injury elicited demographic, socioeconomic, and self-reported health conditions. Results: Primary analysis involved 1854 participants who were recruited at baseline as three distinct road user groups: 628 (33.9%) motorcyclists, 927 (50%) vehicle occupants and 299 (16.1%) bicyclists. At baseline, injury patterns differed significantly between road user groups; motorcyclists were more than twice as likely to sustain lower extremity injury (p < 0.001), to have more severe injury severity scores (p < 0.001) and to have longer hospital stays versus vehicle occupants and bicyclists (< 0.001) across these measures. Injured motorcyclists were predominantly male (88.1%, p < 0.001), were younger on average (38 years) than bicyclists (41.5 years), had lower income and education levels, and poorer pre-injury physical health than other road user groups. Despite these differences, at 12 months post-injury motorcyclists had better physical health (SF12-PCS 2.07 (0.77, 3.36), p = 0.002) and reported lower pain scores (- 0.51 (- 0.83, - 0.2), p < 0.001) than vehicle occupants. Motorcyclists displayed less evidence of psychological distress than vehicle occupants, but more than bicyclists across several measures used. Conclusions: Road user types differ in important characteristics, including pre-injury health status and recovery after injury. As vulnerable road users experiencing transport crashes, and considering their higher initial injury severity, the degree of recovery among motorcyclists compared with other user types is remarkable and unexplained. Health and recovery outcomes after land-transport crashes are least favourable among vehicle occupants despite their higher levels of protection in a crash. This information is valuable for targeting early intervention strategies by road user type during the post-crash care phase, to improve long-term recovery. abstract_id: PUBMED:12361520 Possibilities for load reductions using garment leg protectors for motorcyclists -- a technical, medical and biomechanical approach. This study has considered demands placed on protective clothes for motorcyclists together with observed benefits and limitations for absorbing loads and preventing injuries. In this study, 1933 accidents involving injured motorcyclists, collected by an in-depth investigation team at ARU-MUH, were analysed to find the detailed injury patterns comparing persons with and without protective clothes. The parameters influencing the force reduction are discussed comparing impact speed, elasticity coefficient and thickness of the materials used in special protectors. Additionally, post mortem human tests were carried out to find the effectiveness of such protector devices for load reductions in the lower leg. This study showed that loads, and the subsequent risk of fractures, can be reduced efficiently by foam-plate systems. Benefits were established for using a laminate of hard and soft materials for protection against soft tissue injuries both in the accident analysis and experimental tests.
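The leg-protector abstract above (PUBMED:12361520) attributes force reduction to impact speed, material elasticity and padding thickness. A very rough way to see why thickness matters is a rigid-mass, work-energy estimate: if the kinetic energy of the impacting segment is absorbed over the crush distance of the padding, the average transmitted force is that energy divided by the crush distance. This is a simplified sketch, not a model from the paper, and the numbers are illustrative only.

def average_impact_force(mass_kg, speed_ms, crush_m):
    kinetic_energy = 0.5 * mass_kg * speed_ms ** 2   # joules
    return kinetic_energy / crush_m                  # newtons, F_avg = E / d

# Illustrative values: a 4 kg effective leg-segment mass striking at 5 m/s.
for crush in (0.01, 0.02, 0.04):                     # effective crush distance in metres
    print(f"crush {crush * 1000:.0f} mm -> about {average_impact_force(4, 5, crush):.0f} N")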
abstract_id: PUBMED:12680525 Riding motorcycles: is it a lower limb hazard? Unlabelled: The morbidity and mortality among motorcyclists involved in road traffic accidents (RTA) in Singapore are high. Due to their relatively small size, they represent a vulnerable group of road-users. Many reports from studies performed overseas have shown that both lower limb and head injuries appear to be common among motorcyclists. Objectives: To study the characteristics of lower limb injuries among motorcyclists involved in RTA, who present to the Department of Emergency Medicine of an urban, tertiary, teaching hospital for treatment. Methods: The study was conducted prospectively from 1 July 2000 to 30 June 2001. Demographic data was collected together with details of the type of injuries, mechanism involved, management and disposition. SPSS (Chicago, Inc.) was utilised for data management and statistical analysis. Results: Of the 1,809 motorcyclists studied, 1,056 (58.3%) sustained lower limb injuries, 328 (18.1%) had head injuries and 256 (14.2%) sustained facial injuries. The mean age was 26.4 +/- 7.2 years and males made up the majority of the patients (1,733, 95.8%). Helmet usage was 100%. The commonest type of lower limb injury was fractures (531, 50.3%). The most common type of fracture was that of the shaft of the tibia and fibula (231, 43.5%), followed by fractures around the ankle (186, 35.0%). For those with more than one body region injured, head injury was not commonly associated with lower limb injuries. The commonest mechanism of injury was collision with another vehicle, while approaching a turn (769, 42.5%). There were 96 motorcyclists (5.3%) who had clinical evidence of alcohol consumption on their breath at presentation. There were 533 (29.5%) patients who were admitted for in-patient management and the mean duration of stay was 4.8 +/- 4.5 days. Amongst those with lower limb injuries, the admission rate was 30.5% (322 of 1,056) and the mean duration of hospitalisation was 5.3 +/- 3.9 days. Conclusion: Lower limb injuries represent the commonest form of injury among motorcyclists involved in RTA. Improved training via motorcycle rider education, better design of future motorcycles and protective footwear may help to reduce this problem. abstract_id: PUBMED:19037618 Two wheels - too dangerous? Analysis of real-world crash data and federal statistics. Background: Motorcyclists profit from improved safety measures and medical care only slightly compared with other road users. The aims of this study were to gain new insights into crash mechanisms and the resulting injuries of motorcyclists and to understand trends in road traffic accident statistics over the last 10 years. Methods: In our prospective analysis we documented data from collisions in which at least one subject sustained an injury of 2-6 on the Maximum Abbreviated Injury Scale (MAIS). Additionally, key parameters published by the German Federal Statistical Office were analysed. Results: Data were studied for 66 motorcycle crashes and 66 injured subjects. The mean ISS was 17.9, with injuries of the lower limb (44%) and head (41%) being the most frequent in motorcyclists. Psychometric questionnaires revealed that motorcyclists had a significantly higher willingness to take risks compared with drivers of other vehicles. Federal statistics revealed a dramatic increase in mortality in the first months of 2007, but in the next months, mortality decreased to levels comparable to those of the preceding year.
Conclusions: Knowledge of crash mechanisms, injury patterns and profiles of high-risk drivers can help reduce the collision rates and injury severity of motorcyclists. abstract_id: PUBMED:11503301 Crash simulation of lower limb with motorcycle basket. Lower limb injuries are the main cause of temporary and permanent disability among motorcyclists in Malaysia. They cause non-fatal but serious injuries requiring hospitalisation. Detailed studies on factors influencing lower limb injuries are justified in an attempt to reduce the occurrence of these injuries. This study presents a computer simulation of the crash behaviour of the basket of a small-engined motorcycle with the lower limb using finite element (FE) methods. The results suggest that the extensive deformation of the motorcycle basket may reduce the risk of injury to the lower limb. The behaviour of the basket during collision is analogous to the crumple zone of automobiles. abstract_id: PUBMED:3413322 Epidemiologic study of traffic accidents occurring among the inhabitants of Algeria An observation area was created in Algiers to improve both epidemiological knowledge of traffic accidents and the national information system. Carried out cooperatively by the police and the hospitals of the area, our cohort study revealed a high annual incidence (700 per 100,000) of traffic accidents among residents of Algiers. Children between 5 and 9, adolescents, young adults and people over 60 are the most vulnerable. Older persons and children are for the most part pedestrians. Fractures of lower limbs are relatively numerous and account for 10% of all registered injuries. Moreover, 92.1% of injuries, corresponding to the highest AIS in each part of the body affected, are either minor or moderate. The severity of injuries seems to be independent of age, sex and category (pedestrians, motorcyclists, etc.) of the road user. The ISS, which is correlated to the duration of hospital stay, gives much the same results. Only by linking the data from the different information sources can one perceive the various facets of the problem in an overall manner. However, even before accumulating highly refined data certain actions are manifestly urgent. One of them is an improved organization of traumatology in order to reduce lethality of injuries. abstract_id: PUBMED:8168893 Injuries associated with a fractured shaft of the femur. A knowledge of injury patterns, both from the mechanism of injury and from their associations, allows specific injuries to be suspected and actively excluded. We have reviewed fractures of the femoral shaft to highlight the patterns of their associated high morbidity and mortality. Our survey shows that accidents involving motorcyclists, pedestrians and motor cars have the highest incidence of associated injury. Common associations include chest, head, pelvis and ipsilateral leg injuries in pedestrians, and pelvic and ipsilateral leg injuries in motorcyclists. Rarer associations with femoral shaft fractures include ipsilateral tibia and talus/navicular fractures in motorcyclists, knee injuries and contralateral acetabular fractures in pedestrians and contralateral necks of femur in motor car passengers. abstract_id: PUBMED:14569759 Vulnerability of the lower limb in non-fatal motorcycle injuries. Motorcyclists form the highest group of fatalities on Malaysian roads. This is a prospective study conducted at 3 major hospitals for a period of 12 months to determine the type of motorcyclist injuries that usually required hospitalization. 
Four hundred and twelve consecutively injured motorcyclists were available for study. One hundred and eighty six (45.15%) were fatally injured and 226 (54.85%) were seriously injured and surviving. The main cause of fatalities was head injury, while lower limb injuries accounted for majority of hospitalizations. This study highlights the vulnerability of the motorcyclist to lower limb injuries. abstract_id: PUBMED:37568139 Demographic and clinical profile of an inception cohort of road trauma survivors. Background: Road trauma is a major public health concern, often resulting in reduced health-related quality of life and prolonged absenteeism from work even after so-called 'minor' injuries that do not result in hospitalization. This manuscript compares pre-injury health, sociodemographic characteristics and injury details between age, sex, and road user categories in a cohort of 1,480 road trauma survivors. Methods: This was a prospective observational inception cohort study of road trauma survivors recruited between July 2018 and March 2020 from three trauma centres in British Columbia, Canada. Participants were aged ≥ 16 years and arrived in a participating emergency department within 24 h of involvement in a motor vehicle collision. Data were collected from structured interviews and review of medical records. Results: The cohort of 1,480 road trauma survivors included 280 pedestrians, 174 cyclists, 118 motorcyclists, 683 motor vehicle drivers, and 225 passengers. Median age was 40 (IQR = [27, 57]) years; 680 (46%) were female. Males and younger patients were significantly more likely to report better pre-injury physical health. Motorcyclists and cyclists tended to report better physical health and less severe somatic symptoms, whereas pedestrians and motor vehicle drivers reported better mental health. Injury severity and hospital admission rates were higher in pedestrians and motorcyclists and lower in motorists. Upper and lower extremity injuries were most common in pedestrians, cyclists and motorcyclists, whereas neck injuries were most common in motor vehicle drivers and passengers. Conclusions: In a large cohort of road trauma survivors, overall injury severity was low. Motorcyclists and pedestrians, but not cyclists, had more severe injuries than motorists. Extremity injuries were more common in vulnerable road users. Future research will investigate one-year recovery outcomes and identify risk factors for poor recovery. Answer: Yes, a boxer engine can reduce leg injuries among motorcyclists. According to a study that analyzed injury distributions in crashes involving ABS-equipped motorcycles with boxer-twin engines compared to similar ABS-equipped motorcycles with other engine configurations, it was found that leg injuries classified as AIS 1+, AIS 2+, and PMI 1+ were reduced by approximately 50% among riders with boxer engines. These results were statistically significant, and the number of injuries to the upper body did not increase. The mean RPMI to the head and upper body were similar across the two groups, suggesting that the severity of injuries did not increase either. Although the overall mean RPMI 1+ was lower among riders with boxer engines, this result was not statistically significant. The mean values of the overall RPMI 10+ were similar. The study concluded that boxer-twin engines, which were not originally developed to improve motorcycle crashworthiness, can indeed reduce leg injuries among riders of motorcycles fitted with ABS (PUBMED:25793434).
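The boxer-engine comparison in PUBMED:25793434 rests on comparing the proportion of riders with leg injuries between two crash samples (n = 55 vs n = 127). Below is a minimal sketch of that kind of comparison using scipy's chi-square test of independence; the cell counts are invented, since the abstract reports only the approximate relative reduction and not the raw numbers.

from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows = engine type, columns = leg injury yes / no (invented counts).
table = [[10, 45],    # boxer-twin engine riders
         [46, 81]]    # other engine configurations
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")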
Instruction: Assessing surgeon stress when operating using heart rate variability and the State Trait Anxiety Inventory: will surgery be the death of us? Abstracts: abstract_id: PUBMED:25406932 Assessing surgeon stress when operating using heart rate variability and the State Trait Anxiety Inventory: will surgery be the death of us? Aim: Performance in the operating room is affected by a combination of individual, patient and environmental factors amongst others. Stress has a potential negative impact on performance with the quality of surgical practice and patient safety being affected as a result. In order to appreciate the level of stress encountered during surgical procedures both objective and subjective methods can be used. This study reports the use of a combined objective (physiological) and subjective (psychological) method for evaluating stress experienced by the operating surgeon. Method: Six consultant colorectal surgeons were evaluated performing eighteen anterior resections. Heart rate was recorded using a wireless chest strap at eight pre-determined operative steps. Heart Rate Variability indices were calculated offline using computerized software. Surgeon reported stress was collected using the State Trait Anxiety Inventory, a validated clinical stress scale. Results: A significant increase in stress was demonstrated in all surgeons whilst operating as indicated by sympathetic tone (control: 4.02 ± 2.28 vs operative: 11.42 ± 4.63; P < 0.0001). Peaks in stress according to operative step were comparable across procedures and surgeons. There was a significant positive correlation with subjective reporting of stress across procedures (r = 0.766; P = 0.0005). Conclusion: This study demonstrates a significant increase in sympathetic tone in consultant surgeons measured using heart rate variability during elective colorectal resections. A significant correlation can be demonstrated between HRV measurements and perceived stress using the State Trait Anxiety Inventory. A combined approach to assessing operative stress is required to evaluate any effect on performance and outcomes. abstract_id: PUBMED:23370764 Heart rate and heart rate variability as indirect markers of surgeons' intraoperative stress. Objective: In a difficult and demanding environment such as the operating room, the individual workload response of physicians is of particular importance. The aim of this study was to examine the specific effects of intraoperative stress on the cardiovascular system by measuring heart rate (HR) and heart rate variability (HRV). Methods: In a cross-sectional study, the effect of intraoperative stress on autonomic balance, measured by HRV, among surgeons differing with respect to their intraoperative stress levels was studied. Based on their perceived stress (State Trait Anxiety Inventory), surgeons were classified into a stressed and a non-stressed sample. Data on HR and HRV were collected by means of ambulatory ECG-recordings. Changes in autonomic nervous system activity were quantified by frequency and time domain analysis of R-R interval variability. Demographic, anthropometric, and surgery-related group differences were assessed by non-parametric Mann-Whitney U test, differences between relative changes of HR and HRV by Wilcoxon signed-ranks test. In multivariate linear analysis of covariance, group differences were adjusted for possible confounding factors. Results: There was a significant difference in intraoperative HR for stressed and non-stressed surgeons (median: 99.3 vs.
63.7; P < 0.05). During sleep, HRV indices indicated a reduced autonomic recovery in stressed participants. Conclusions: Our results reveal that higher perceived stress in the operating room is associated with increased intraoperative HR and decreased HRV at night. Non-stressed surgeons show greater relaxation during sleep compared to their stressed colleagues. abstract_id: PUBMED:19286681 Intraoperative heart rate variability of a cardiac surgeon himself in coronary artery bypass grafting surgery. The mental strain of a cardiac surgeon may differ between when he performs coronary surgery himself and when he only assists in performing coronary surgery. In 50 selected cases of on-pump heart arrested coronary artery bypass grafting (CABG), an attending-consultant surgeon performed 30 procedures of CABG (Group A) and an attending-consultant surgeon supervised the remaining 20 cases of CABG performed by two resident surgeons (Group B). Intraoperative Holter electrocardiograms of the attending-consultant surgeon were recorded and analyzed for heart rate variability (HRV). In Group A, the ratio of low frequency to high frequency was at a peak in the beginning of the operation and gradually decreased toward the end of the operation. In Group B, the ratio of low frequency to high frequency was at a peak in the phase of aortic cross-clamp, coronary anastomosis, and unclamping. When an attending-consultant surgeon performed the operation himself, the most anxious part of the operation was at the beginning, and thereafter the level of anxiety gradually declined. In contrast, when he assisted a resident, the highest level of anxiety was when the aortic cross-clamp was placed and removed and during the coronary anastomosis. abstract_id: PUBMED:36810132 Cardiac autonomic and cortisol stress responses to real operations in surgeons: relationship with individual psychobiological characteristics and experience. Background: Surgeons are exposed to high levels of intraoperative stress, which could compromise their psychological well-being in the long term. This study aimed at exploring the effects of real operations on the activity of stress response systems (i.e., cardiac autonomic function and hypothalamic-pituitary-adrenal axis) during and in the aftermath of surgery, and the moderating role of individual psychobiological characteristics and different levels of experience (senior vs expert surgeons). Methods: Heart rate, heart rate variability, and salivary cortisol measures (as indexes of cardiac autonomic and hypothalamic-pituitary-adrenal axis activity, respectively) were assessed during real operations and in the perioperative period in a sample of surgeons (n = 16). Surgeons' psychometric characteristics were collected using questionnaires. Results: Real operations triggered both cardiac autonomic and cortisol stress responses which were independent of surgeons' level of experience. Intraoperative stress responses did not affect cardiac autonomic activity during the following night but were associated with a blunted cortisol awakening response. Moreover, senior surgeons reported higher levels of negative affectivity and depressive symptoms than expert surgeons prior to the surgery. Lastly, the magnitude of heart rate responses to surgery positively correlated with scores on negative affectivity, depression, perceived stress, and trait anxiety scales.
Conclusion: This exploratory study allows us to put forward the hypotheses that in surgeons cardiac autonomic and cortisol stress responses to real operations (i) may be associated with specific individual psychological characteristics regardless of the level of experience, and (ii) may have a longer-lasting impact on hypothalamic-pituitary-adrenal axis function with potential implications for surgeons' physical and psychological well-being. abstract_id: PUBMED:38053225 Surgeon reported measures of stress and anxiety prior to and after elective gynecological surgery. Introduction: Performing surgical procedures is a recognized source of stress for surgeons. Vocational stress is an important contributor to performance, patient care, and burnout, with dispositional and environmental factors contributing. Accurately assessing surgeon stress is critical to measuring the effectiveness of stress reduction programs. The primary aim was to identify differences between surgeons' self-reported anticipated stress and anxiety prior to gynecological surgery, compared with their recollection of experienced stress and anxiety during surgery. Secondary aims assessed any differences by level of training, surgical type, and surgeon role. Material And Methods: Attending and resident gynecologists performing routine elective surgeries completed a visual analog scale (VAS) assessing perceived stress and the State-Trait Anxiety Inventory (STAI) prior to and immediately after completing 161 elective surgeries including total laparoscopic hysterectomy, laparoscopic excision of moderate-severe endometriosis, or hysteroscopic myomectomy. Results: Eight attending gynecologists and nine residents participated. Residents commenced as primary surgeon in 62/90 (69%) procedures. Stress experienced during surgery was greater than anticipated in 92/161 (57%) surgery episodes (mean VAS increase: 3.9; 95% CI: 1.1-6.8, p = 0.009). State anxiety was greater than anticipated in 99/161 (62%) episodes (mean state anxiety increase: 4.4; 95% CI: 3.0-5.8, p < 0.001). Greater preprocedural anticipatory stress and anxiety was observed in residents vs. attending gynecologists (VAS 51.9 vs. 22.8, p < 0.001; state anxiety 38.3 vs. 28.1, p < 0.001) and in primary vs. assistant surgeons (VAS 47.2 vs. 29.9, p < 0.001; state anxiety 36.9 vs. 28.3, p < 0.001). Intraoperative stress and anxiety were greater in primary surgeons (VAS 50.4 vs. 30.5, p < 0.001; anxiety 41.3 vs. 32.5, p < 0.001) and residents (VAS 43.4 vs. 31.7, p < 0.001; anxiety 53.5 vs. 33.7, p < 0.001) compared with assistants and attending gynecologists. Perceived stress and anxiety were positively correlated at both timepoints (r = 0.68, p < 0.001; r = 0.82, p < 0.001). Conclusions: When asked to reflect on stress experienced during surgery, our data show that stress during surgery is greater than anticipated for many surgical episodes. Self-reported stress symptoms commence prior to surgery and are more commonly reported by surgeons operating as primary surgeon and by those in training. Future research should focus on determinants of presurgical stress and examine when stressors become inhibitory to performance. abstract_id: PUBMED:21310451 A comparison of evaluation, time pressure, and multitasking as stressors of psychomotor operative performance. Background: There is gathering interest in determining the typical sources of stress for an operating surgeon and the effect that stressors might have on operative performance.
Much of the research in this field, however, has failed to measure stress levels and performance concurrently or has not acknowledged the differential impact of potential stressors. Our aim was to examine empirically the influence of different sources of stress on trained laparoscopic performance. Methods: A total of 30 medical students were trained to proficiency on the validated Fundamentals of Laparoscopic Surgery peg transfer task, and then were tested under 4 counterbalanced test conditions: control, evaluation threat, multitasking, and time pressure. Performance was assessed via completion time and a process measure reflecting the efficiency of movement (i.e., path length). Stress levels in each test condition were measured using a multidimensional approach that included the State-Trait Anxiety Inventory (STAI) and the subject's heart rate while performing a task. Results: The time pressure condition caused the only significant increase in stress levels but did not influence completion time or the path length of movement. Only the multitasking condition significantly increased completion time and path length, despite there being no significant increase in stress levels. Overall, the STAI and heart rate measures were not correlated strongly. Conclusion: Recommended measures of stress levels do not necessarily reflect the demands of an operative task, highlighting the need to understand better the mechanisms that influence performance in surgery. This understanding will help inform the development of training programs that encourage the complete transfer of skills from simulators to the operating room. abstract_id: PUBMED:38263368 The effect of music on the operating surgeon: a pilot randomized crossover trial (the MOSART study). Objective: The experience of stress is common among surgeons while working in the operating theatre (OT). Understanding and finding ways to mitigate this stress is important for optimizing surgical quality and maintaining clinician wellbeing. In this pilot study, we tested the feasibility and reported the outcomes of measuring the effect of background music on intra-operative surgeon stress in the clinical environment. Methods: The effect of Music on the Operating Surgeon: A pilot Randomized crossover Trial (the MOSART study) was conducted over a 9-month period at a single centre. Vascular and general surgeons acting as primary operators (POs) performing elective, general anaesthetic operations were included. The intervention was surgeon-selected music, and the control was the absence of music. Outcome measures were feasibility (recruitment rate, practicability, and completeness of data), heart rate variability (HRV) indices, the Six-Item State-Trait Anxiety Inventory (STAI-6), and the Surgical Task-load Index (SURG-TLX). Results: Five POs performed 74 eligible randomized cases. The protocol was well tolerated, and no cases were abandoned. Data were incomplete in 8% of cases. The overall mean (SD) operative SURG-TLX score was 48 (±22). Mean HR increased and RMSSD decreased significantly from baseline, suggesting reduced parasympathetic activity while operating. The presence of intra-operative music was not found to affect the psychological or physiological outcomes. Conclusions: A music interventional study of this nature is feasible in the operating theatre environment, though no difference was found between the music and non-music conditions. Follow-up research in a simulated environment with intensive physiological monitoring could be considered.
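Several of these abstracts summarize autonomic state with time-domain HRV indices such as RMSSD (used in the MOSART study above) alongside mean heart rate. As a minimal sketch, both can be computed directly from a list of R-R intervals; the interval values below are made up for illustration.

import math

rr_ms = [812, 798, 805, 790, 820, 835, 810, 800]      # illustrative R-R intervals (ms)
mean_hr = 60000 / (sum(rr_ms) / len(rr_ms))           # beats per minute
diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]     # successive differences
rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
print(f"mean HR {mean_hr:.1f} bpm, RMSSD {rmssd:.1f} ms")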
abstract_id: PUBMED:27871552 Preoperative depressed mood and perioperative heart rate variability in patients with hepatic cancer. Study Objective: To examine how perioperative heart rate variability (HRV) indices differ according to the anxiety or depressed mood of patients scheduled to undergo a major surgical procedure for cancer. Design: Prospective observational study. Setting: Operating room. Patients: Forty-one male patients between 40 and 70 years of age with hepatocellular carcinoma were included in the final analysis. Interventions: HRV was measured on the day before surgery (T1), at impending anesthesia (T2), and after anesthetic induction (T3). Preoperative anxiety and depressed mood of all patients were evaluated using the State-Trait Anxiety Inventory and Self-Rating Depression Scale (SDS). Measurements And Results: HRV was significantly different among T1, T2, and T3. At T2, high frequency (HF) (normalized units of HF [nuHF]) was decreased and low frequency (LF) (normalized units of LF) and LF/HF were increased compared with those at T1 and T3. In the subgroup analysis between high and low SDS groups, the high SDS group showed significantly decreased nuHF (P = .035), increased nuLF (P = .039), and increased LF/HF (P = .020) compared to the low SDS group at T1. However, these values at T2 and T3 were not different between the 2 groups. In the analysis within the groups, the low SDS group showed significant differences in nuHF, nuLF, and LF/HF among T1, T2, and T3 (P < .05, respectively), but no changes in these values were observed in the high SDS group among the 3 different time points. Conclusions: HRV decreased significantly immediately before anesthesia and recovered to baseline with anesthetic induction. Preoperatively more depressed patients showed increased sympathetic tone at baseline and a blunted response to impending anesthesia on the HRV measurements. abstract_id: PUBMED:20393847 The Imperial Stress Assessment Tool (ISAT): a feasible, reliable and valid approach to measuring stress in the operating room. Background: Stress can impair surgical performance and may compromise patient safety. This prospective, cross-sectional study describes the feasibility, reliability, and validity of the Imperial Stress Assessment Tool (ISAT) as an approach to measuring stress during surgery. Methods: A total of 54 procedures were observed with 11 surgeons (4 attendings, 4 senior residents and 3 junior residents) in a large university teaching hospital in London, UK. Data collection involved physiological measures of operating surgeons [heart rate (HR) and salivary cortisol] and self-report questionnaires (State Trait Anxiety Inventory, or STAI). Results: In all, 23 of 54 procedures were stressful, as identified by self-reporting. For stressful procedures compared to nonstressful ones, STAI was higher (mean +/- SD, 9.81 +/- 2.20 vs. 12.87 +/- 4.27, t(30.64) = 3.15), as was the HR (mean +/- SD, 79.94 +/- 8.55 vs. 93.17 +/- 14.94, t(32.57) = 3.81) (p < 0.05). Significant positive correlations were obtained between the measures indicating concurrent validity: Pearson's r = 0.47 (HR vs. STAI), 0.34 (cortisol vs. STAI), and 0.57 (HR vs. cortisol) (p < 0.05). Perfect correlation of subjective and objective measures was found for 70% of the procedures. HR and cortisol had specificities of 78% and 91% and sensitivities of 91% and 70% respectively for detecting stress during surgery.
Conclusion: ISAT is a nonintrusive, feasible approach that combines subjective and objective methods for measuring stress in the operating room. The ISAT may increase understanding of the effects of stress on clinical performance and outcomes, leading to improved patient care. abstract_id: PUBMED:29478845 The potential of heart rate variability for exploring dental anxiety in mandibular third molar surgery. An objective method to recognize patient psychology using heart rate variability (HRV) has recently been developed and is increasingly being used in medical practice. This study compared the potential of this new method with the use of conventional surveys measuring anxiety levels in patients undergoing impacted third molar (ITM) surgery. Patient anxiety was examined before treatment in 64 adults who required ITM surgery, using two methods: measurement of HRV and conventional questionnaire surveys (state section of the State-Trait Anxiety Inventory (STAI-S) and Dental Fear Survey (DFS)). Both methods were assessed for their respective abilities to determine the impact of personal background, the amount of information provided, and the surgical procedure on patient psychology. Questionnaires and HRV yielded the same finding: dental experience was the single background factor that correlated with patient anxiety; the other factors remain unclear. The STAI-S showed a significant relationship between the information provided to the patient and their anxiety level, while the DFS and HRV did not. In addition, HRV demonstrated its ability to assess the effects of the surgical procedure on patient psychology. HRV demonstrated great potential as an objective method for evaluating patient stress, especially for providing real-time information on the patient's status. Answer: The assessment of surgeon stress during operations using heart rate variability (HRV) and the State Trait Anxiety Inventory (STAI) suggests that the surgical profession involves significant stress, which can impact performance and patient safety. Studies have demonstrated that surgeons experience a significant increase in sympathetic tone, as indicated by HRV, during various surgical procedures, including elective colorectal resections (PUBMED:25406932) and coronary artery bypass grafting (PUBMED:19286681). Surgeons' intraoperative stress levels, as measured by HR and HRV, correlate with their perceived stress, with stressed surgeons showing increased intraoperative HR and decreased HRV during sleep, indicating reduced autonomic recovery (PUBMED:23370764). Furthermore, real operations trigger both cardiac autonomic and cortisol stress responses, which are independent of surgeons' levels of experience. The magnitude of heart rate responses to surgery positively correlates with scores on negative affectivity, depression, perceived stress, and trait anxiety scales, suggesting that individual psychological characteristics may influence stress responses (PUBMED:36810132). Surgeons' self-reported stress and anxiety prior to and after elective gynecological surgery also indicate that stress experienced during surgery is often greater than anticipated, with residents and primary surgeons reporting higher levels of stress and anxiety compared to attending surgeons and assistants (PUBMED:38053225). 
While the presence of intra-operative music did not affect psychological or physiological outcomes in one study (PUBMED:38263368), other research has shown that preoperative depressed mood can lead to increased sympathetic tone at baseline and a blunted response to impending anesthesia on HRV measurements (PUBMED:27871552). The Imperial Stress Assessment Tool (ISAT) has been proposed as a nonintrusive, feasible approach that combines subjective and objective methods for measuring stress in the operating room, potentially leading to improved patient care (PUBMED:20393847). In summary, the use of HRV and STAI to assess surgeon stress during operations reveals that surgery is a high-stress profession, with potential implications for surgeons' well-being and patient outcomes. While it is unlikely that surgery will be the "death" of surgeons, it is clear that managing stress is crucial for maintaining performance and ensuring the safety and quality of surgical practice.
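Frequency-domain HRV indices (LF, HF and the LF/HF ratio used as a sympathetic-tone proxy in several abstracts above) are usually obtained by resampling the R-R tachogram to a uniform rate and estimating its power spectrum. The following is a hedged sketch of that pipeline with synthetic R-R intervals; the 0.04-0.15 Hz (LF) and 0.15-0.40 Hz (HF) band limits are the conventional choices, and the data are not from any cited study.

import numpy as np
from scipy.signal import welch

rr_s = np.random.default_rng(0).normal(0.80, 0.03, 300)   # synthetic R-R intervals (seconds)
beat_times = np.cumsum(rr_s)
fs = 4.0                                                   # resampling frequency in Hz
t_uniform = np.arange(beat_times[0], beat_times[-1], 1 / fs)
rr_uniform = np.interp(t_uniform, beat_times, rr_s)        # evenly sampled tachogram

freqs, psd = welch(rr_uniform - rr_uniform.mean(), fs=fs, nperseg=256)
df = freqs[1] - freqs[0]
lf_power = psd[(freqs >= 0.04) & (freqs < 0.15)].sum() * df
hf_power = psd[(freqs >= 0.15) & (freqs < 0.40)].sum() * df
print(f"LF/HF ratio: {lf_power / hf_power:.2f}")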
Instruction: Meta-analysis: are 3 questions enough to detect unhealthy alcohol use? Abstracts: abstract_id: PUBMED:19075207 Meta-analysis: are 3 questions enough to detect unhealthy alcohol use? Background: Both the 10-item Alcohol Use Disorders Identification Test (AUDIT) and its abbreviated 3-item version (Alcohol Use Disorders Identification Test-Consumption [AUDIT-C]) are considered to detect unhealthy alcohol use accurately. Purpose: To examine whether the AUDIT-C is as accurate as the full AUDIT for detecting unhealthy alcohol use in adults. Data Sources: MEDLINE, EMBASE, CINAHL, Web of Science, PsycINFO, and BIOSIS Previews from 1998 to July 2008. Study Selection: Three independent reviewers selected studies that administered both the AUDIT and the AUDIT-C, applied a valid reference standard, avoided verification and incorporation bias, and reported relevant data. No language restrictions were applied. Data Extraction: Two reviewers extracted study characteristics and outcome data, which were cross-checked by a third reviewer. One reviewer assessed methodological quality with a standardized checklist. Data Synthesis: Fourteen studies were found. Most involved primary care patients in Europe and the United States. Sample sizes ranged between 112 and 13 438 patients, and sex and age distributions varied considerably. No statistically significant differences were found between the overall accuracy of the AUDIT and the AUDIT-C for detecting risky drinking, alcohol use disorders, or unhealthy alcohol use in primary care. Hierarchical summary receiver-operating characteristic curve analysis yielded pooled positive likelihood ratios of 6.62 for the AUDIT and 2.99 for the AUDIT-C, respectively, for detecting risky drinking; 4.03 and 3.82, respectively, for detecting any alcohol use disorder; and 4.82 and 3.91, respectively, for detecting risky drinking or any alcohol use disorder. Findings from a few studies on general population samples and inpatients suggested but did not prove that the AUDIT might be better than the AUDIT-C for identifying severe conditions, such as alcohol dependence. Limitation: Studies used different reference standards and had heterogeneous findings. Conclusion: Available evidence is inconclusive but suggests that the full AUDIT may be superior to the AUDIT-C for identifying unhealthy alcohol use in adults in some settings. abstract_id: PUBMED:34823726 Screening for Unhealthy Alcohol and Drug Use in General Medicine Settings. Unhealthy alcohol and drug use are among the top 10 causes of preventable death in the United States, but they are infrequently identified and addressed in medical settings. Guidelines recommend screening adult primary care patients for alcohol and drug use, and routine screening should be a component of high-quality clinical care. Brief, validated screening tools accurately detect unhealthy alcohol and drug use, and their thoughtful implementation can facilitate adoption and optimize the quality of screening results. Recommendations for implementation include patient self-administered screening tools, integration with electronic health records, and screening during routine primary care visits. abstract_id: PUBMED:25193718 Prevalence of unhealthy alcohol use in hospital outpatients. Background: Few studies have examined the prevalence of unhealthy alcohol use in the hospital outpatient setting. 
Our aim was to estimate the prevalence of unhealthy alcohol use among patients attending a broad range of outpatient clinics at a large public hospital in Australia. Methods: Adult hospital outpatients were invited to complete the Alcohol Use Disorders Identification Test Consumption questions (AUDIT-C) using an iPad as part of a randomised trial testing the efficacy of alcohol electronic screening and brief intervention. Unhealthy alcohol use was defined as an AUDIT-C score ≥5 among men and ≥4 among women. Results: Sixty percent (3616/6070) of invited hospital outpatients consented, of whom 89% (3206/3616) provided information on their alcohol consumption (either reported they had not consumed any alcohol in the last 12 months or completed the AUDIT-C). The prevalence of unhealthy alcohol use was 34.7% (95% confidence interval [CI]: 33.0-36.3%). The prevalence among men aged 18-24 years, 25-39 years, 40-59 years and 60 years and older, was 74.4% (95% CI: 68.4-80.4%), 54.3% (95% CI: 48.7-59.8%), 44.1% (95% CI: 39.9-48.3%), and 27.0% (95% CI: 23.6-30.4%), respectively (43.1% overall; 95% CI: 40.8-45.5%). The prevalence among women aged 18-24 years, 25-39 years, 40-59 years, and 60 years and older, was 48.6% (95% CI: 39.2-58.1%), 36.9% (95% CI: 31.2-42.6%), 25.2% (95% CI: 21.5-29.0%) and 14.5% (95% CI: 11.7-17.3%), respectively (24.9% overall; 95% CI: 22.7-27.1%). Conclusion: A large number of hospital outpatients who are not currently seeking treatment for their drinking could benefit from effective intervention in this setting. abstract_id: PUBMED:36169779 Characterizing Unhealthy Alcohol Use Patterns and Their Association with Alcohol Use Reduction and Alcohol Use Disorder During Follow-Up in HIV Care. Outcomes of PWH with unhealthy alcohol use, such as alcohol use reduction or progression to AUD, are not well-known and may differ by baseline patterns of unhealthy alcohol use. Among 1299 PWH screening positive for NIAAA-defined unhealthy alcohol use in Kaiser Permanente Northern California, 2013-2017, we compared 2-year probabilities of reduction to low-risk/no alcohol use and rates of new AUD diagnoses by baseline use patterns, categorized as exceeding: only daily limits (72% of included PWH), only weekly limits (17%), or both (11%), based on NIAAA recommendations. Overall, 73.2% (95% CI 70.5-75.9%) of re-screened PWH reduced to low-risk/no alcohol use over 2 years, and there were 3.1 (95% CI 2.5-3.8%) new AUD diagnoses per 100 person-years. Compared with PWH only exceeding daily limits at baseline, those only exceeding weekly limits and those exceeding both limits were less likely to reduce and likelier to be diagnosed with AUD during follow-up. PWH exceeding weekly drinking limits, with or without exceeding daily limits, may have a potential need for targeted interventions to address unhealthy alcohol use. abstract_id: PUBMED:35899221 Rethinking Unhealthy Alcohol Use in the United States: A Structured Review. Greater than moderate alcohol use spans a continuum that includes high levels of total alcohol consumed per period (heavy drinking) as well as episodes of intense drinking (binges) and can give rise to alcohol use disorder (AUD) when associated with an inability to control alcohol use despite negative consequences. 
Although moderate drinking and AUD have standard, operable definitions in the United States (US), a significant "gray area" remains in which an individual may exceed recommended drinking guidelines but does not meet the criteria for AUD (hereafter referred to as unhealthy alcohol use). To address this need, we conducted a structured literature search to evaluate how this gray area is defined and assess its burden within the US. For purposes of this review, we will refer to this gray area as "unhealthy alcohol use." Although numerous terms are used to describe various unsafe drinking practices, our review did not find any studies in which the specific prevalence and/or burden of unhealthy alcohol use was evaluated. That is, we found no studies that focus exclusively on individuals who exceed moderate drinking guidelines but do not meet AUD criteria. Furthermore, we did not discover an established framework for identifying individuals with unhealthy alcohol use. The lack of a consistent framework for identifying unhealthy alcohol users has significant implications for patient management and disease burden assessment. Therefore, we propose the following framework in which unhealthy alcohol use comprises 2 distinct subpopulations: those at risk of experiencing alcohol-related consequences and those who have subthreshold problems associated with use. The former, termed "risky drinkers," are defined by exceeding recommended guidelines for moderate drinking (⩽1 or 2 drinks per day for women and men, respectively). People with subthreshold problems associated with use, defined as exhibiting exactly 1 AUD symptom, would be classified as "problematic drinkers" within this proposed framework. These definitions would help bring the core elements of unhealthy alcohol use into focus, which in turn would help identify and provide management strategies sooner to those affected and reduce the overall burden of unhealthy alcohol use. abstract_id: PUBMED:31368860 The prevalence of unhealthy alcohol and drug use among inpatients in a general hospital. Background: Unhealthy substance use is a growing public health issue. Intersections with the health care system offer an opportunity for intervention; however, recent estimates of prevalence for unhealthy substance use among all types of hospital inpatients are unknown. Methods: Universal screening for unhealthy alcohol or drug use was implemented across a 999-bed general hospital between January 1 and December 31, 2015. Nurses completed alcohol screening using the Alcohol Use Disorders Identification Test alcohol consumption questions (AUDIT-C) with a cutoff of ≥5 for moderate risk and ≥8 for high risk and drug screening using the single-item screening question with ≥1 episode of use considered positive. Results: Out of 35,288 unique inpatients, screens were completed on 21,519. There were 3,451 positive screens (16% of all completed screens), including 1,291 (6%) moderate risk and 1,111 (5%) high risk screens for alcohol and 1,657 (8%) positive screens for drug use. Among screens that were positive for moderate- or high-risk alcohol use, 221 (17%) and 297 (27%), respectively, were concurrently positive for drug use. The majority (61%) of patients with unhealthy alcohol use was on the medical services. Men, those who were white or Hispanic, middle-aged, single, unemployed, or screened positive for drug use were more likely to screen positive for high-risk alcohol use. 
Those who were younger, single, worked less than full time, or screened high risk for alcohol were more likely to screen positive for drug use. Discordance between diagnosis coding and screening results was noted: 29% of high-risk alcohol use screens had no alcohol diagnosis coding associated with that admission, and 51% of patients with a DSM-IV (Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition) diagnosis code of alcohol dependence had AUDIT-C scores of <8. Conclusions: Across a general hospital, 16% of patients screened positive for unhealthy substance use, with the highest volume on medical floors. Nursing-led screening may offer an opportunity to identify and engage patients with unhealthy substance use during hospitalization. abstract_id: PUBMED:28577629 Screening and Counseling for Unhealthy Alcohol Use in Primary Care Settings. Unhealthy alcohol use is a leading cause of preventable death in the United States. Reducing unhealthy alcohol use should be a high priority for health care providers. Well-validated screening instruments are available, and behavioral counseling interventions delivered in primary care can reduce risky drinking. For people with alcohol use disorder, treatment programs with or without medication can reduce consumption and promote abstinence. To overcome barriers to implementation of screening for alcohol use and subsequent delivery of appropriate interventions in primary care settings, support systems, changes in staffing or roles, formal protocols, and additional provider and staff training may be required. abstract_id: PUBMED:34784953 The STUN (STop UNhealthy) Alcohol Use Now trial: study protocol for an adaptive randomized trial on dissemination and implementation of screening and management of unhealthy alcohol use in primary care. Background: Unhealthy alcohol use is a leading cause of preventable deaths in the USA and is associated with many societal and health problems. Less than a third of people who visit primary care providers in the USA are asked about or ever discuss alcohol use with a health professional. Methods/design: This study is an adaptive, randomized, controlled trial to evaluate the effect of primary care practice facilitation and telehealth services on evidence-based screening, counseling, and pharmacotherapy for unhealthy alcohol use in small-to-medium-sized primary care practices. Study participants will include primary care practices in North Carolina with 10 or fewer providers. All enrolled practices will receive a practice facilitation intervention that includes quality improvement (QI) coaching, electronic health record (EHR) support, training, and expert consultation. After 6 months, practices in the lower 50th percentile (based on performance) will be randomized to continued practice facilitation or provision of telehealth services plus ongoing facilitation for the next 6 months. Practices in the upper 50th percentile after the initial 6 months of intervention will continue to receive practice facilitation alone. The main outcome measures include the number (and %) of patients in the target population who are screened for unhealthy alcohol use, screen positive, and receive brief counseling. Additional measures include the number (and %) of patients who receive pharmacotherapy for AUD or are referred for AUD services. Sample size calculations determined that 35 practices are needed to detect a 10% increase in the main outcome (percent screened for unhealthy alcohol use) over 6 months.
Discussion: A successful intervention would significantly reduce morbidity among adults from unhealthy alcohol use by increasing counseling and other treatment opportunities. The study will produce important evidence about the effect of practice facilitation on uptake of evidence-based screening, counseling, and pharmacotherapy for unhealthy alcohol use when delivered on a large scale to small and medium-sized practices. It will also generate scientific knowledge about whether embedded telehealth services can improve the use of evidence-based screening and interventions for practices with slower uptake. The results of this rigorously conducted evaluation are expected to have a positive impact by accelerating the dissemination and implementation of evidence related to unhealthy alcohol use into primary care practices. Trial Registration: ClinicalTrials.gov NCT04317989 . Registered on March 23, 2020. abstract_id: PUBMED:28340902 Quality of care measures for the management of unhealthy alcohol use. There is a paucity of quality measures to assess the care for the range of unhealthy alcohol use, ranging from risky drinking to alcohol use disorders. Using a two-phase expert panel review process, we sought to develop an expanded set of quality of care measures for unhealthy alcohol use, focusing on outpatient care delivered in both primary care and specialty care settings. This process generated 25 candidate measures. Eight measures address screening and assessment, 11 address aspects of treatment, and six address follow-up. These quality measures represent high priority targets for future development, including creating detailed technical specifications and pilot testing them to evaluate their utility in terms of feasibility, reliability, and validity. abstract_id: PUBMED:32042600 Online interventions for problem gamblers with and without co-occurring unhealthy alcohol use: Randomized controlled trial. Background And Aims: Problem gambling and unhealthy alcohol use often co-occur. The current trial sought to establish whether adding a brief online intervention for unhealthy alcohol use to an online problem gambling intervention would lead to improvements in gambling and drinking among those with both of these concerns. Methods: Participants were recruited from across Canada using an advertisement targeting those concerned about their gambling who were interested in online help. No mention of unhealthy alcohol use was made in the advertisement. Participants meeting criteria for problem gambling were randomized to either receive just an online intervention for gambling (G-only) or to receive an online gambling intervention plus a brief personalized feedback intervention for unhealthy alcohol use (G + A). Participants were followed up at 3 and 6 months. Results: A total of 282 participants were recruited for the trial. Follow-up rates were good (80% and 84% at 3 and 6 months). There were significant reductions in gambling (p &lt; .001) across time but no significant differences (p &gt; .05) between those who received either the G-only or G + A interventions. Further, for those with unhealthy alcohol use (41% of the sample), there were no significant reductions in alcohol consumption (p &gt; .05) across time or differences between condition. Discussion And Conclusion: The addition of a brief intervention for unhealthy alcohol use to an online intervention for gambling did not appear to improve either gambling or drinking outcomes among people concerned about their gambling. 
Further research is merited to examine whether a combined intervention (with gambling and drinking components integrated) might result in improved outcomes and whether such an intervention might benefit the subgroup of participants who would specifically seek help for both gambling and alcohol concerns. Trial registration: ClinicalTrials.gov NCT03323606; Registration date: October 24, 2017. Answer: The meta-analysis conducted to determine whether the 3-item version of the Alcohol Use Disorders Identification Test (AUDIT-C) is as accurate as the full 10-item AUDIT for detecting unhealthy alcohol use in adults found that there were no statistically significant differences between the overall accuracy of the AUDIT and the AUDIT-C for detecting risky drinking, alcohol use disorders, or unhealthy alcohol use in primary care. The analysis included 14 studies, most of which involved primary care patients in Europe and the United States. The hierarchical summary receiver-operating characteristic curve analysis yielded pooled positive likelihood ratios for both the AUDIT and the AUDIT-C, with the AUDIT-C having a lower ratio for detecting risky drinking. However, the evidence was inconclusive and suggested that the full AUDIT may be superior to the AUDIT-C for identifying unhealthy alcohol use in adults in some settings, particularly for identifying severe conditions such as alcohol dependence (PUBMED:19075207). Therefore, while the 3 questions of the AUDIT-C can be effective for detecting unhealthy alcohol use, especially in primary care settings, the full AUDIT may still be preferable in certain situations, particularly when there is a need to identify more severe alcohol-related conditions.
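To make the screening logic discussed in this answer concrete, the sketch below totals the three AUDIT-C items and applies the sex-specific cutoffs used in the hospital outpatient study above (≥5 for men, ≥4 for women per PUBMED:25193718); other studies in this set use different cutoffs (e.g., ≥5 and ≥8 in PUBMED:31368860), so the values should be read as study-specific parameters, and the function name and defaults are illustrative assumptions rather than a published scoring tool.

    # Minimal sketch of AUDIT-C scoring with sex-specific cutoffs.
    def audit_c_screen(item_scores, sex, male_cutoff=5, female_cutoff=4):
        """item_scores: the three AUDIT-C consumption items, each scored 0-4."""
        if len(item_scores) != 3 or any(not 0 <= s <= 4 for s in item_scores):
            raise ValueError("AUDIT-C has exactly three items scored 0-4")
        total = sum(item_scores)
        cutoff = male_cutoff if sex == "male" else female_cutoff
        return total, total >= cutoff  # (score, screens positive?)

    print(audit_c_screen([3, 1, 1], "female"))  # -> (5, True)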
Instruction: Virulence-associated genotypes of Helicobacter pylori: do they explain the African enigma? Abstracts: abstract_id: PUBMED:12425557 Virulence-associated genotypes of Helicobacter pylori: do they explain the African enigma? Objectives: The aim of this study was to compare the distribution of virulence-associated genotypes of Helicobacter pylori in two Colombian populations with contrasting gastric cancer risk but with similar H. pylori infection prevalence. Methods: Gastric biopsies were taken from 241 subjects from the high gastric cancer risk area of Pasto and from 93 subjects from the low risk area of Tumaco. Four gastric biopsies from each patient were fixed in 10% buffered formalin for histopathologic analysis, and one was frozen immediately in liquid nitrogen and used for genotyping. CagA and vacA genotypes were determined by multiplex polymerase chain reaction and reverse hybridization on a line probe assay. Results: In patients from the population with high risk for gastric cancer, statistically significant higher relative frequencies of cagA positive and vacA s1 and m1 genotypes were found as compared to the population from the low risk area. Conclusions: Although H. pylori infection has been recognized as a cause of gastric cancer in humans, some large populations with high prevalence of infection have low gastric cancer rates. This so-called "African enigma" so far remains unexplained. Our findings suggest that virulence-associated genes of H. pylori may partially explain the African enigma. Other factors, including human genetic polymorphisms and diet, are also suspected to play a major role. Further investigations are needed to test this hypothesis. abstract_id: PUBMED:11843958 The relationship between Helicobacter pylori infection, the virulence genotypes of the infecting strain and gastric cancer in the African setting. Background: The relationship between Helicobacter pylori infection and gastric carcinoma remains controversial, especially in the African setting where infection is common, while gastric cancer is perceived to be uncommon, the basis of the so-called 'African enigma'. This discrepancy between infection and the development of disease is commonly attributed to differences in host, environment and bacterial factors. Interest in the bacterial factors has focused on heterogeneity in the so-called 'virulence genes'. Aim: The aim of this prospective, case-controlled study was to establish whether H. pylori infection is significantly associated with gastric cancer and to investigate whether gastric cancer is associated with genotypically distinct (as it relates to the candidate virulence genes) organisms in this population. Methods: Patients with histologically confirmed gastric cancer were matched with nonulcer dyspeptic controls for age (within 5 years), gender and ethnicity. Helicobacter pylori status was determined by RUT, histology, culture and serology (locally validated and used as default determinant of H. pylori status). Tumors were classified according to the Lauren classification. The 'virulence genotype' of 17 paired culture samples was determined by previously described and validated molecular techniques (cagA presence, vacA alleles, structure of the cag pathogenicity island and analysis of the iceA alleles). Categorical variables were analysed by the chi-square test. Results: Forty-eight patients (median age 59 years) could be adequately matched to controls. 39/48 (81%) cases and 43/48 (90%) controls were H. pylori positive (NS).
Significant differences in the virulence genotypes of infecting strains were noted: vacAs2 - controls 24%, cases 0%, p < .00001; vacAs1 present - cases 100%, controls 76%, p < .05; cagA-3'-length > 650 bp - cases 47%, controls 0%, p < .002; cag pathogenicity island intact - cases 82%, controls 43%, p < .04; iceA1 - cases 53%, controls 6%, p < .005. cagA was found in all subjects. Conclusion: This study indicates that, in this African population at least, there is no difference in the prevalence of H. pylori infection when comparing gastric cancer cases with matched controls. However, the findings suggest that gastric cancer may be associated with infection by organisms that are genotypically different from those not associated with disease. abstract_id: PUBMED:18396115 Natural history of Helicobacter pylori infection. This report describes the modalities of chronic gastritis induced by Helicobacter pylori infection in different populations. The full gamut of lesions representing the precancerous cascade is very prevalent in populations of low socioeconomic background experiencing very high gastric cancer risk, as seen in the Latin American Andes Mountains. In populations of high socioeconomic standards and high cancer risk, such as Japan and Korea, the precancerous cascade predominates and "early" cancers are also diagnosed frequently. Some reports describe frequent corpus atrophy, not prominent in the former group. The so-called African enigma is seen in populations of low socioeconomic standards, usually living at low altitudes, with high prevalence of infection but low frequency of cancer and precancerous lesions. In populations in transition from high to low cancer risk, duodenal ulcer and antral non-atrophic gastritis are frequently seen. In affluent societies at low risk of cancer, such as Western Europe, Australia and North America, mild non-atrophic gastritis associated with low-virulence Helicobacter pylori genotypes predominates. The varied phenotypes of gastritis may reflect secular changes in the ecology of our species. abstract_id: PUBMED:29494554 Molecular Epidemiology of Helicobacter pylori Infection in a Minor Ethnic Group of Vietnam: A Multiethnic, Population-Based Study. The Helicobacter pylori-induced burden of gastric cancer varies based on geographical regions and ethnic grouping. Vietnam is a multiethnic country with the highest incidence of gastric cancer in Southeast Asia, but previous studies focused only on the Kinh ethnic group. A population-based cross-sectional study was conducted using 494 volunteers (18-78 years old), from 13 ethnic groups in Daklak and Lao Cai provinces, Vietnam. H. pylori status was determined by multiple tests (rapid urease test, culture, histology, and serology). cagA and vacA genotypes were determined by PCR-based sequencing. The overall H. pylori infection rate was 38.1%. Multivariate analysis showed that variations in geographical region, age, and ethnicity were independent factors associated with the risk of H. pylori acquisition. Therefore, a multicenter, multiethnic, population-based study is essential to assess the H. pylori prevalence and its burden in the general population. Only the E De ethnicity carried strains with Western-type CagA (82%) and exhibited significantly lower gastric mucosal inflammation compared to other ethnic groups. However, the histological scores of Western-type CagA and East-Asian-type CagA within the E De group showed no significant differences.
Thus, in addition to bacterial virulence factors, host factors are likely to be important determinants for gastric mucosal inflammation and contribute to the Asian enigma. abstract_id: PUBMED:31333310 Infections with Helicobacter pylori and challenges encountered in Africa. Helicobacter pylori (H. pylori) is the causative agent of gastritis, peptic ulcer disease, mucosa-associated lymphoid tissue lymphoma and gastric cancer (GC). While this bacterium infects 50% of the world's population, in Africa its prevalence reaches as high as 80%, as the infection is acquired during childhood. Risk factors for H. pylori acquisition have been reported to be mainly overcrowding, having infected siblings or parents, and unsafe water sources. Despite this high H. pylori prevalence, there is still no African guideline, equivalent to the Maastricht V/Florence Consensus Report of the European Helicobacter and Microbiota Study Group, for the management of this infection. In this continent, although there is a paucity of epidemiologic data, a contrast between the high prevalence of H. pylori infection and the low incidence of GC has been reported. This phenomenon is the so-called "African Enigma" and it has been hypothesized that it could be explained by environmental, dietary and genetic factors. Heterogeneous data on both diagnosis and therapy have been published. In this context, it is evident that in several African countries the increasing rate of bacterial resistance, mainly to metronidazole and clarithromycin, requires continental guidelines to recommend the appropriate management of H. pylori. The aim of this manuscript is to review current literature on H. pylori infection in Africa, in terms of prevalence, risk factors, impact on human health, treatment and challenges encountered, so as to proffer possible solutions to reduce H. pylori transmission in this continent. abstract_id: PUBMED:33362372 Discovery of unique African Helicobacter pylori CagA-multimerization motif in the Dominican Republic. Background: Helicobacter pylori (H. pylori) colonizes the human stomach and is a major cause of peptic ulcer disease and gastric cancer. However, although the prevalence of H. pylori is high in Africa, the incidence of gastric cancer is low; this phenomenon is referred to as the African enigma. The CagA protein produced by H. pylori is the most studied virulence factor. The carcinogenic potential of CagA is associated with the Glu-Pro-Ile-Tyr-Ala (EPIYA) patterns and CagA-multimerization (CM) motifs. Aim: To better understand the EPIYA patterns and CM motifs of the cagA gene. Methods: Gastric mucosal biopsy specimens were obtained from 258 patients with dyspepsia living in the Dominican Republic, from which 120 H. pylori strains were cultured. After bacterial DNA extraction, the EPIYA pattern and CM motif genotypes were determined using polymerase chain reaction-based sequencing. The population structure of the Dominican Republic strains was analyzed using multilocus sequence typing (MLST). Peptic ulcer disease and gastric cancer were identified via endoscopy, and gastric cancer was confirmed by histopathology. Histological scores of the gastric mucosa were evaluated using the updated Sydney system. Results: All CagA-positive strains carried the Western-type CagA according to the identified EPIYA patterns. Twenty-seven kinds of CM motifs were observed.
Although the typical Western CM motif (FPLKRHDKVDDLSKVG) was observed most frequently, the typical East Asian CM motif (FPLRRSAAVNDLSKVG) was not observed. However, "FPLRRSAKVEDLSKVG", similar to the typical East Asian CM motif, was found in 21 strains. Since this type was significantly more frequent in strains classified as hpAfrica1 using MLST analysis (P = 0.034), we termed it Africa1-CM (Af1-CM). A few hpEurope strains carried the Af1-CM motif, but they had a significantly higher ancestral Africa1 component than those without the Af1-CM motif (P = 0.030). In 30 cagA-positive strains, the "GKDKGPE" motif was observed immediately upstream of the EPIYA motif in the EPIYA-A segment, and there was a significant association between strains with the hpAfrica1 population and those containing the "GKDKGPE" motif (P = 0.018). In contrast, there was no significant association between the CM motif patterns and histological scores and clinical outcomes. Conclusion: We found the unique African CM motif in Western-type CagA and termed it Africa1-CM. The lower toxicity of this motif could be one reason to explain the African enigma. abstract_id: PUBMED:20614477 Helicobacter pylori and gastric cancer in the Middle East: a new enigma? The Middle East is the home of ethnic groups from three main backgrounds: Semitic (Arabs and Jews), Indo-European (Persians and Kurdish) and Turkic (Turkish and Turkmens). Its geographic location, which has been under continuous influences from Asia, Europe and Africa, has made it an ideal site for epidemiological studies on Helicobacter pylori (H. pylori) infection and genotyping. The gastric cancer rate differs in this region from very high in Iran (26.1 per 100,000) to low in Israel (12.5 per 100,000) and very low in Egypt (3.4 per 100,000). Epidemiological studies showed that the prevalence of H. pylori is almost similar in those countries, with a high level of infection in childhood. Importantly, the frequency of vacA s1 and m1 regions and cagA+ genotypes was higher in non-Semitic populations who inhabit the North than in Semitic populations, the inhabitants of Southern parts of the Middle East. H. pylori infection prevalence, distribution pattern of virulence factors, diet and smoking could not explain the difference in cancer rate. This reflects the multifactorial aetiology of gastric cancer and suggests that H. pylori infection does not always directly correlate with the risk for gastrointestinal disease, such as gastric cancer. Further detailed investigations and international comparative studies of each risk factor need to be performed to investigate whether this represents a true enigma. abstract_id: PUBMED:16036504 Evidence-based examination of the African enigma in relation to Helicobacter pylori infection. Objective: The African enigma describes the dissociation between the prevalence of Helicobacter pylori infection and H. pylori-related diseases. The aim of this study was to use an evidence-based review of endoscopic data from African countries to test whether there are data to support the concept of an African enigma. Material And Methods: A Medline search was carried out to identify prospective endoscopic studies in African populations. Data collected included: the number of endoscopies, age range (or mean age if available), indications for endoscopy, country, years during which data were collected, male to female ratio, and specific outcome of duodenal ulcer, gastric ulcer, or gastric cancer.
Results: Forty prospective endoscopic studies from 17 African countries were identified (20,531 patients) and evaluated between 1972 and 2001. Mean ages ranged from 31 to 53.1 years and male to female ratios from 0.67:1 to 4.64:1. H. pylori-related clinical outcomes were common; duodenal ulcers in 4326 patients (21.1%), gastric ulcers in 691 patients (3.4%), and gastric cancers in 503 patients (2.4%). Conclusions: Prospective upper endoscopic trials suggest that the clinical outcomes associated with H. pylori infection in Africa are similar to those seen in industrialized countries. No dissociation between the prevalence of H. pylori infection and H. pylori-related diseases existed; the African enigma as such does not exist and the continued study of the mechanism of a non-existent phenomenon is a misuse of resources. The myth resulted from reliance on anecdotal data and selection bias in populations with extremely limited access to health care and a relatively short life expectancy. abstract_id: PUBMED:26106380 Allergies, Helicobacter pylori and the continental enigmas. Helicobacter pylori, a gastric pathogen, is known to be associated with gastric and duodenal ulcers, and is also a strong risk factor for the development of gastric cancer and lymphoma of the mucosal-associated lymphoid tissue. Ordinarily, this should make a strong case for its eradication at par with any other infectious disease. However, the unique biology of H. pylori and the complexity of its interactions with humans, its only known natural host, do not permit the recommendation of unambiguous preventive and therapeutic measures. Moreover, this organism has co-evolved with humans as a practically universal member of the natural gastric microbiota over at least 100,000 years. H. pylori persists for a lifetime in mostly asymptomatic hosts, and causes clinical disease only in a minority of infections. Therefore, its potential contribution to the maintenance of human immune homeostasis, as is the case with the better-studied members of the intestinal microbiota, is certainly worthy of serious investigation. In this paper, we summarize some interesting and often anecdotal data drawn from recent studies, and examine their significance in the context of the hygiene hypothesis. We also examine whether the lower incidence of gastric cancer over large parts of the world in spite of a high prevalence of infection (the Asian and African enigmas) may be re-interpreted in terms of the hygiene hypothesis. Finally, it is suggested that an evolutionary-ecological approach to the study of H. pylori infection may help in the formulation of strategies for the management of this infection. This may well be an infectious disease wherein medical interventions may have to be personalized to ensure optimal outcomes. abstract_id: PUBMED:33503840 Shedding Light on the African Enigma: In Vitro Testing of Homo sapiens-Helicobacter pylori Coevolution. The continuous characterization of genome-wide diversity in population and case-cohort samples, allied to the development of new algorithms, are shedding light on host ancestry impact and selection events on various infectious diseases. Especially interesting are the long-standing associations between humans and certain bacteria, such as the case of Helicobacter pylori, which could have been strong drivers of adaptation leading to coevolution. 
Some evidence from admixed gastric cancer cohorts has been interpreted as supporting Homo-Helicobacter coevolution, but reliable experimental data that control both the bacterium and the host ancestries are lacking. Here, we conducted the first in vitro coinfection assays with dual human- and bacterium-matched and -mismatched ancestries, in African and European backgrounds, to evaluate the genome-wide gene expression host response to H. pylori. Our results showed that: (1) the host response to H. pylori infection was greatly shaped by the human ancestry, with variability on innate immune system and metabolism; (2) African human ancestry showed signs of coevolution with H. pylori while European ancestry appeared to be maladapted; and (3) mismatched ancestry did not seem to be an important differentiator of gene expression at the initial stages of infection as assayed here. Answer: The "African enigma" refers to the observation that despite high prevalence of Helicobacter pylori infection in African populations, the incidence of gastric cancer is relatively low compared to other regions with similar infection rates. The role of H. pylori virulence-associated genotypes in explaining this phenomenon has been the subject of investigation. Studies have shown that there are indeed differences in the distribution of virulence-associated genotypes of H. pylori between populations with contrasting gastric cancer risks. For instance, in two Colombian populations with similar H. pylori infection prevalence but different gastric cancer risks, the high-risk area had statistically significantly higher frequencies of cagA positive and vacA s1 and m1 genotypes compared to the low-risk area (PUBMED:12425557). Similarly, in an African setting, significant differences in the virulence genotypes of infecting strains were noted between gastric cancer cases and matched controls, suggesting that gastric cancer may be associated with infection by genotypically distinct H. pylori strains (PUBMED:11843958).
Instruction: Is bacterial vaginosis a stronger risk factor for preterm birth when it is diagnosed earlier in gestation? Abstracts: abstract_id: PUBMED:15695989 Is bacterial vaginosis a stronger risk factor for preterm birth when it is diagnosed earlier in gestation? Objective: It is stated commonly that the earlier in pregnancy bacterial vaginosis is diagnosed, the greater is the increase in risk of preterm birth compared with women without bacterial vaginosis. However, this contention is based on small numbers of women. Study Design: In this analysis of 12,937 women who were screened for bacterial vaginosis as part of a previously conducted clinical trial, the odds ratio of preterm birth (&lt;7 weeks of gestation) for asymptomatic bacterial vaginosis-positive versus bacterial vaginosis-negative women was evaluated among women who were screened from 8 to 22 weeks of gestation. Results: The odds ratio of preterm birth among bacterial vaginosis-positive versus bacterial vaginosis-negative women ranged from 1.1 to 1.6 and did not vary significantly according to the gestational age at which bacterial vaginosis was screened. The odds ratio for preterm birth did not vary significantly by gestational age at diagnosis when bacterial vaginosis was subdivided into Gram stain score 7 to 8 or 9 to 10. Conclusion: Although bacterial vaginosis was associated with an increased risk of preterm birth, the gestational age at which bacterial vaginosis was screened for and diagnosed did not influence the increase. abstract_id: PUBMED:12861153 Bacterial vaginosis as a risk factor for preterm delivery: a meta-analysis. Objective: We performed a meta-analysis to evaluate bacterial vaginosis as a risk factor for preterm delivery. Study Design: Selection criteria were (1). the data appeared in original, published English-language reports of prospective studies or control groups of clinical trials that included women at &lt;37 weeks of gestation with intact amniotic membranes, (2). all the women had to have been screened for bacterial vaginosis that was diagnosed by either clinical criteria or criteria that were based on Gram stain findings, and (3). the outcomes were preterm delivery, spontaneous abortion, maternal or neonatal infection, and perinatal death. Results: Eighteen studies with results for 20,232 patients were included. Bacterial vaginosis increased the risk of preterm delivery &gt;2-fold (odds ratio, 2.19; 95% CI, 1.54-3.12). Higher risks were calculated for subgroups of studies that screened for bacterial vaginosis at &lt;16 weeks of gestation (odds ratio, 7.55; 95% CI, 1.80-31.65) or at &lt;20 weeks of gestation (odds ratio, 4.20; 95% CI, 2.11-8.39). Bacterial vaginosis also significantly increased the risk of spontaneous abortion (odds ratio, 9.91; 95% CI, 1.99-49.34) and maternal infection (odds ratio, 2.53; 95% CI, 1.26-5.08). No significant results were calculated for the outcome of neonatal infection or perinatal death. Conclusion: Bacterial vaginosis, early in pregnancy, is a strong risk factor for preterm delivery and spontaneous abortion. abstract_id: PUBMED:27167850 Risk of Vaginal Infections at Early Gestation in Patients with Diabetic Conditions during Pregnancy: A Retrospective Cohort Study. Pregnant women with gestational diabetes mellitus (GDM) are reported to be at increased risk for infections of the genital tract. 
This study aimed to compare the prevalence of asymptomatic bacterial vaginosis (BV) and Candida colonization at early gestation between pregnant women with and without diabetic conditions during pregnancy. We included data from 8,486 singleton pregnancies that underwent an antenatal infection screen-and-treat programme at our department. All women with GDM or pre-existing diabetes were retrospectively assigned to the diabetic group (DIAB), whereas non-diabetic women served as controls (CON). Prevalence for BV and Candida colonization was 9% and 14% in the DIAB group, and 9% and 13% in the CON group, respectively (n.s.). No significant difference regarding stillbirth and preterm delivery (PTD), defined as a delivery earlier than 37 + 0 (37 weeks plus 0 days) weeks of gestation, was found. We could not find an increased risk of colonization with vaginal pathogens at early gestation in pregnant women with diabetes, compared to non-diabetic women. Large prospective studies are needed to evaluate the long-term risk of colonization with vaginal pathogens during the course of pregnancy in these women. abstract_id: PUBMED:23937716 Multiple pregnancy, short cervix, part-time worker, steroid use, low educational level and male fetus are risk factors for preterm birth in Japan: a multicenter, prospective study. Aim: To examine the relationship between preterm birth and socioeconomic factors, past history, cervical length, cervical interleukin-8, bacterial vaginosis, underlying diseases, use of medication, employment status, sex of the fetus and multiple pregnancy. Methods: In a multicenter, prospective, observational study, 1810 Japanese women registering their future delivery were enrolled at 8⁺⁰ to 12⁺⁶ weeks of gestation. Data on cervical length and delivery were obtained from 1365 pregnant women. Multivariate logistic regression analysis was performed. Results: Short cervical length, steroid use, multiple pregnancy and male fetus were risk factors for preterm birth before 34 weeks of gestation. Multiple pregnancy, low educational level, short cervical length and part-time employment were risk factors for preterm birth before 37 weeks of gestation. Conclusion: Multiple pregnancy and cervical shortening at 20-24 weeks of gestation were stronger risk factors for preterm birth. Any pregnant woman who is employed part-time, has a low educational level, is carrying a male fetus, or requires steroid treatment should be watched for the development of preterm birth. abstract_id: PUBMED:11023166 Bacterial vaginosis and cervical dilation and effacement at 24-29 weeks' gestation. The purpose of this study was to investigate the association between bacterial vaginosis (BV) and cervical dilation and effacement, as measures of impending preterm delivery. The Pregnancy, Infection, and Nutrition Study collected genital tract specimens and documented cervical change from 807 eligible women between 24 and 29 weeks' gestation. BV was assessed with Nugent-scored vaginal smears, and analyzed in relation to cervical measurements. At 24-29 weeks' gestation, <7% of women had a dilated cervix, 31% had a cervix ≤2 cm, and 17.3% had BV. Unadjusted analyses found no associations between BV and cervical measurements. Adjusted logistic regression suggested an association between BV and cervical effacement among women with a sexually transmitted disease (STD) earlier in pregnancy (odds ratio = 1.9, 95% CI 0.8-4.3). Stratified analyses for BV/dilation also suggested interaction with STDs.
Overall, BV was not associated with cervical dilation or effacement at 24-29 weeks' gestation. abstract_id: PUBMED:17241817 Asymptomatic bacterial vaginosis and intermediate flora as risk factors for adverse pregnancy outcome. We updated a previously published meta-analysis to evaluate bacterial vaginosis (BV) and intermediate vaginal flora as risk factors for adverse pregnancy outcome. Selection criteria were original, published, English-language reports of cohort studies or control groups of clinical trials including women <37 weeks' gestation with intact amniotic membranes. All women had to be screened for BV, diagnosed either by clinical criteria or by criteria based on Gram-stain findings. Outcomes were preterm delivery, late miscarriages, maternal or neonatal infections, and perinatal mortality. Fourteen new studies with results for 10,286 patients were included, so that results for 30,518 patients in 32 studies were available for this meta-analysis. BV more than doubled the risk of preterm delivery in asymptomatic patients (OR: 2.16, 95% CI: 1.56-3.00) and in patients with symptoms of preterm labor (OR: 2.38, 95% CI: 1.02-5.58). BV also significantly increased the risk of late miscarriages (OR: 6.32, 95% CI: 3.65-10.94) and maternal infection (OR: 2.53, 95% CI 1.26-5.08) in asymptomatic patients. No significant results were calculated for the outcomes of neonatal infection or perinatal mortality. Also, intermediate vaginal flora was not significantly associated with any outcome included. The results of this meta-analysis confirm that BV is a risk factor for preterm delivery and maternal infectious morbidity and a strong risk factor for late miscarriage. abstract_id: PUBMED:8333449 Bacterial vaginosis and prematurity in Indonesia: association in early and late pregnancy. Objective: Our objective was to examine the association between preterm delivery and bacterial vaginosis in early and late pregnancy. Study Design: We evaluated 490 pregnant women at three hospitals in Jakarta, Indonesia, for bacterial vaginosis at 16 to 20 weeks' and 28 to 32 weeks' gestation and observed them through delivery. Results: We found significant associations between preterm delivery (gestational age < 37 weeks) and bacterial vaginosis diagnosed at 16 to 20 weeks' gestation (odds ratio 2.0, 95% confidence interval 1.0 to 3.9) but not with bacterial vaginosis diagnosed at 28 to 32 weeks' gestation (odds ratio 1.5, 95% confidence interval 0.7 to 3.0). The rates of preterm delivery were almost doubled for women who had bacterial vaginosis in early pregnancy (20.5%) as compared with women who had bacterial vaginosis only in late pregnancy (10.7%). Conclusion: Only bacterial vaginosis diagnosed early in the second trimester of pregnancy plays a major role as a risk factor for preterm delivery. abstract_id: PUBMED:16517046 Do screening-preventative interventions in asymptomatic pregnancies reduce the risk of preterm delivery--a critical appraisal of the literature. Recent research has suggested that women who experience preterm delivery (PTD) may be identified earlier in pregnancy and before onset of symptoms. Interventions commenced at this earlier asymptomatic stage may offer an opportunity to prevent PTD or lengthen gestation sufficiently to reduce adverse perinatal outcome. Our objective was to examine the evidence that supports or refutes this approach to preventing PTD. We therefore conducted a systematic search and critical appraisal of the identified literature.
We found evidence that introducing screening-preventative strategies for asymptomatic pregnancies may reduce the rate of PTD. Evidence for screening and selective treatment exists for: asymptomatic bacteriuria; bacterial vaginosis in low-risk population groups; elective cervical cerclage in high-risk pregnancies; indicated cervical cerclage in women with short cervical length on ultrasound; prophylactic progesterone supplementation in high-risk pregnancies; and smoking cessation. However, for most other strategies, such as increased antenatal attendance, or routine administration of prophylactic micronutrients, the evidence is inconsistent and conflicting. Information on neonatal outcomes apart from PTD (such as serious neonatal morbidity and mortality) was found to be lacking in most studies. It was therefore not possible to establish whether preventing PTD or prolonging gestation would correlate with improved perinatal outcome, and this lessened the potential clinical usefulness of any proposed preventative strategy. No studies were found that evaluated the effectiveness of combining screening-preventative strategies. The review concludes with a suggested antenatal management plan designed to prevent PTD based on current practice and the evidence presented in this article. abstract_id: PUBMED:12530102 Socioeconomic and environmental risk factors of bacterial vaginosis in early pregnancy. The main aim of this prospective study was to determine the socioeconomic, demographic and environmental factors that may be associated with the occurrence of bacterial vaginosis at early pregnancy in an indigent population from Central Poland. A group of 196 pregnant women was selected randomly from the patients of 10 district maternity units in the Lodz region, Central Poland. Only singleton pregnancies between 8 and 16 weeks of gestation were qualified for inclusion in the survey. A standard questionnaire covering medical, socioeconomic, demographic, constitutional and environmental items was administered to every subject and was verified with medical records. Cervico-vaginal swabs were collected from the women under study and tested for bacterial vaginosis (BV) according to Spiegel's criteria. Based on the results of Gram stain, BV was diagnosed in 51 women (28.5%), grade I microflora among 66 (36.9%) and grade II among 62 women (34.6%). In the univariate analysis, only single marital status proved to be an important risk factor associated with BV during pregnancy; this was confirmed in the multivariate analysis. Pregnant women who present risk factors for abnormal cervico-vaginal microflora should be covered by comprehensive prenatal surveillance, which enables early detection and treatment of this pathology. Research that identifies the causal pathways and mechanisms through which social disadvantage leads to a higher risk of preterm birth may help to reduce current socioeconomic and demographic disparities and improve pregnancy outcome. abstract_id: PUBMED:10942489 Vaginal fetal fibronectin measurements from 8 to 22 weeks' gestation and subsequent spontaneous preterm birth. Objective: We sought to determine the range of fetal fibronectin values in the vagina from 8 to 22 weeks' gestation, the factors associated with both low and high values, and whether high values are associated with gestational age at birth.
Study Design: Vaginal fetal fibronectin was quantitatively determined in a prospective cohort study of 13,360 women being evaluated for participation in the National Institute of Child Health and Human Development Maternal-Fetal Medicine Unit treatment trials for bacterial vaginosis and Trichomonas vaginalis. Fetal fibronectin values were correlated with gestational age at screening, race, the presence of bacterial vaginosis and Trichomonas vaginalis, and gestational age at delivery. Results: Vaginal fetal fibronectin values at each gestational age ranged from unmeasurable to >1000 ng/mL, with median values always being <10 ng/mL. Fetal fibronectin values declined progressively with increasing gestational age at sampling. Bacterial vaginosis and black race were associated with higher values, whereas nulliparity was associated with lower values. High values after 13 weeks' gestation were associated with a 2- to 3-fold increased risk of subsequent spontaneous preterm birth overall and a 4-fold increased risk of very early preterm birth. Conclusion: Elevated vaginal fetal fibronectin levels from 13 to 22 weeks' gestation are associated with a significantly increased risk of spontaneous preterm birth. Answer: The relationship between the timing of bacterial vaginosis (BV) diagnosis and the risk of preterm birth is a subject of investigation in the provided abstracts. According to a meta-analysis, bacterial vaginosis diagnosed early in pregnancy is a strong risk factor for preterm delivery and spontaneous abortion, with higher risks calculated for subgroups of studies that screened for BV at less than 16 weeks of gestation (odds ratio, 7.55; 95% CI, 1.80-31.65) or at less than 20 weeks of gestation (odds ratio, 4.20; 95% CI, 2.11-8.39) (PUBMED:12861153). This suggests that BV is indeed a stronger risk factor for preterm birth when diagnosed earlier in gestation. However, another study found that while BV was associated with an increased risk of preterm birth, the gestational age at which BV was screened for and diagnosed did not influence the increase in risk (PUBMED:15695989). This study analyzed 12,937 women and found that the odds ratio of preterm birth among BV-positive versus BV-negative women did not vary significantly according to the gestational age at which BV was screened. In contrast, a study conducted in Indonesia found significant associations between preterm delivery and BV diagnosed at 16 to 20 weeks' gestation but not with BV diagnosed at 28 to 32 weeks' gestation (PUBMED:8333449). This supports the notion that BV diagnosed early in the second trimester of pregnancy plays a major role as a risk factor for preterm delivery. In summary, the evidence from the meta-analysis (PUBMED:12861153) and the study in Indonesia (PUBMED:8333449) suggests that BV is a stronger risk factor for preterm birth when diagnosed earlier in gestation. However, the study with a large sample size (PUBMED:15695989) did not find a significant variation in risk with the timing of diagnosis, indicating that further research may be needed to fully understand the relationship between the timing of BV diagnosis and the risk of preterm birth.
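For readers who want to see how the odds ratios quoted in this answer are obtained, the sketch below computes an unadjusted odds ratio and its 95% confidence interval from a 2x2 table using the standard log-odds approximation; the counts are hypothetical and are not drawn from any of the cited studies.

    # Minimal sketch: odds ratio with a 95% CI from a 2x2 table.
    import math

    def odds_ratio_ci(a, b, c, d, z=1.96):
        # a = exposed cases, b = exposed non-cases,
        # c = unexposed cases, d = unexposed non-cases (all counts must be > 0).
        or_ = (a * d) / (b * c)
        se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
        lo = math.exp(math.log(or_) - z * se_log)
        hi = math.exp(math.log(or_) + z * se_log)
        return or_, lo, hi

    # Hypothetical counts: 40/200 BV-positive and 60/600 BV-negative women
    # delivered preterm.
    print(odds_ratio_ci(40, 160, 60, 540))  # roughly (2.25, 1.45, 3.48)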
Instruction: Does enhanced information at cancer genetic counseling improve counselees' knowledge, risk perception, satisfaction and negotiation of information to at-risk relatives? Abstracts: abstract_id: PUBMED:19636983 Does enhanced information at cancer genetic counseling improve counselees' knowledge, risk perception, satisfaction and negotiation of information to at-risk relatives?--a randomized study. Purpose: The aim of the present randomized intervention study was to investigate the effect of receiving extended cancer genetic information on counselees' knowledge, risk perception, information sharing and satisfaction with the service. Methods: In total, 147 counselees, affected by cancer and/or a family history of cancer, were randomized to extended or standard information. The levels of counselees' knowledge and personal risk estimations were measured at four time points. In addition, counselees' satisfaction with the counseling and sharing of the information to at-risk relatives was assessed. The intervention included meeting a specialist nurse, learning the breaking bad news method, receiving written material and video-taped counseling sessions. Results: A significant increase in the level of knowledge in participants in the "breast cancer group" regardless of the randomization was observed over time. The correct estimation of personal risk increased significantly in both groups after two weeks, but declined at the eight month follow-up. Most of the participants had informed at-risk relatives about their visit at the cancer genetic clinic. The majority of respondents in both groups were highly satisfied with the counseling. The only observed effects of the intervention were that counselees in the intervention group were significantly more satisfied with the content of the given information and with the way of informing relatives. Conclusion: Apparently, the current genetic counseling is managed properly and extended information does not seem necessary in all cases. However, some counselees need additional sessions. abstract_id: PUBMED:20577820 Disclosing cancer genetic information within families: perspectives of counselees and their at-risk relatives. Purpose: The aim of the present descriptive study was to investigate the experience of sharing genetic information among cancer genetic counselees and their at-risk relatives. Methods: In total, 147 cancer genetic counselees and 81 of their at-risk relatives answered to a study specific questionnaire and/or were interviewed. Counselees' communication of genetic information to at-risk relatives was assessed with regard to who they informed, how they felt, and how they perceived their relatives' reactions. In addition, at-risk relatives' experiences of receiving genetic information were studied. Results: Most of the counselees had shared the genetic information received at the counseling session personally with their at-risk relatives. The majority of the counselees (68%) reported positive or neutral feelings about sharing the genetic information with their relatives while 9% stated negative feelings. Counselees mostly interpreted the relatives' reactions to the information as positive or neutral (62% of responses), and in few cases as negative (14% of responses). About half of relatives reported positive or neutral reactions (54%) to the received information, while about one-fifth reported negative reactions (22%). 
Nevertheless, most relatives were satisfied with the received information and half of the relatives intended to seek genetic counseling themselves. Conclusion: Sharing genetic information to at-risk relatives appears to be accomplished without any major difficulties or negative feelings. However, more assistance may be needed to optimize the communication of the genetic information within at-risk families. abstract_id: PUBMED:29789176 How to support cancer genetics counselees in informing at-risk relatives? Lessons from a randomized controlled trial. Objective: In hereditary and familial cancer, counselees are requested to inform their at-risk relatives. We developed an intervention to support counselees in this task. Methods: A randomized controlled trial was conducted aimed at improving cancer genetic counselees' i) knowledge, ii) motivation to disclose information, and ii) self-efficacy in this regard. Eligible participants were randomized to telephonic counseling (n = 148), or standard care (n = 157) and assessed at baseline, 1 week post-intervention, and 4 months after study enrolment. Results: No between-group differences were found in participants' knowledge, motivation, and self-efficacy. Knowledge concerning which second-degree relatives to inform was lower compared to first-degree relatives. About 60% of the participants was of the opinion that they needed to inform more relatives than stated in their summary letter and only about 50% were correctly aware of which information to disclose. Of note, at baseline, almost 80% of the participants had already correctly informed their at-risk relatives. Conclusions: Since, unexpectedly, counselees already informed most of their relatives before the intervention was offered, efficacy of the intervention could not convincingly be determined. Counselees' knowledge about whom to inform about what is suboptimal. Practice Implications: Future interventions should target a more homogeneous sample and address counselees' understanding and recall. abstract_id: PUBMED:17108760 Risk communication in completed series of breast cancer genetic counseling visits. Purpose: There is no consensus on how best to communicate risk in breast cancer genetic counseling. We studied risk communication in completed series of counseling visits and assessed associations with counselees' postcounseling risk perception and satisfaction. Methods: Pre- and postcounseling questionnaires and videorecordings of all visits were available for 51 affected and unaffected women from families with no known BRCA1/2 mutation, who fulfilled criteria for DNA testing. We developed a checklist for assessing risk communication and counselors' behaviors. Results: General risks were mainly communicated in initial visits, while counselee-specific risks were discussed mainly in concluding visits. The risks discussed most often were conveyed only numerically or qualitatively, and most were only stated positively or negatively. Counselors regularly helped counselees to understand the information, but seldom built on counselees' pre-existing perspective. Counselees' breast cancer risk perception after counseling was unrelated to whether this risk had been explicitly stated. The number of general risks discussed was negatively associated with counselees' satisfaction about counseling. Conclusion: Findings suggest that counselors' authority prevails over mutuality with individual counselees, in their communication about risks. 
abstract_id: PUBMED:26245632 Breast Cancer Risk Perceptions among Relatives of Women with Uninformative Negative BRCA1/2 Test Results: The Moderating Effect of the Amount of Shared Information. The most common result of BRCA1/2 mutation testing when performed in a family without a previously identified mutation is an uninformative negative test result. Women in these families may have an increased risk for breast cancer because of mutations in non-BRCA breast cancer predisposition genes, including moderate- or low-risk genes, or shared environmental factors. Genetic counselors often encourage counselees to share information with family members, however it is unclear how much information counselees share and the impact that shared information may have on accuracy of risk perception in family members. We evaluated 85 sisters and daughters of women who received uninformative negative BRCA1/2 results. We measured accuracy of risk perception using a latent variable model where accuracy was represented as the correlation between perceived risk (indicators = verbal and quantitative measures) and calculated risk (indicators = Claus and BRCAPRO). Participants who reported more information was shared with them by their sister or mother about her genetic counseling session had greater accuracy of risk perception (0.707, p = 0.000) than those who reported little information was shared (0.326, p = 0.003). However, counselees shared very little information; nearly 20 % of family members reported their sister or mother shared nothing with them about her genetic counseling. Family members were generally not aware of the existence of a genetic counseling summary letter. Our findings underscore the need for effective strategies that facilitate counselees to share information about their genetic counseling sessions. Such communication may help their relatives better understand their cancer risks and enhance risk appropriate cancer prevention. abstract_id: PUBMED:21885922 A counselee-oriented perspective on risk communication in genetic counseling: explaining the inaccuracy of the counselees' risk perception shortly after BRCA1/2 test result disclosure. Purpose: Genetic counseling may help counselees understand their genetic risk of developing breast/ovarian cancer. However, many studies have shown that their perception of their risks is inaccurate. Information-oriented variables often predicted the level of accuracy, focusing on specific processes of receiving and processing risks. We examined counselee-oriented predictors about how counselees embed cancer risks in their lives. These predictors reflect the personal meaning of genetic risks and are expected to explain/mediate the impact of genetic counseling on risk-perception-accuracy. Method: We analyzed 248 questionnaires of a prospective study, filled in by probands with breast/ovarian cancer and pathogenic mutations, unclassified variants, or uninformative results (n = 30, 16, and 202, respectively). Mediation regression analyses were performed to examine whether counselee predictors mediated/explained the influence of information predictors on the accuracy. Information-oriented predictors regarded presentation format, communicated information, question format, education, pedigree information, cancer experience, and cognitive processes/heuristics. Counselee-oriented predictors regarded their self/personality, life/existence, and need for certainty about DNA test result, heredity, and cancer. 
Results: Both information-oriented and counselee-oriented variables significantly predicted the accuracy of the counselees' risk perception, with moderate to large effect sizes. Counselee-oriented variables completely mediated/explained the effects of information-oriented variables on the accuracy. Discussion: Counselees seemed to transform objective cancer risks into personally relevant information. Only through this personal meaning of genetic information, information-oriented processes seemed to cause inaccurate perceptions. Genetic counselors are suggested to focus communication on these personal processes. abstract_id: PUBMED:34455229 Information needs on breast cancer genetic and non-genetic risk factors in relatives of women with a BRCA1/2 or PALB2 pathogenic variant. Objectives: Comprehensive breast cancer (BC) risk models integrating effects of genetic (GRF) and non-genetic risk factors (NGRF) may refine BC prevention recommendations. We explored the perceived information received on BC risk factors, and related characteristics, in female relatives of women with a BRCA1/2 or PALB2 pathogenic variant, undergoing BC risk assessment using the CanRisk© prediction tool. Methods: Of 200 consecutive cancer-free women approached after the initial genetic consultation, 161 (80.5%) filled in questionnaires on their perception of the information received and wished further information on BC risk factors (e.g., being a carrier of a moderate risk altered gene, personal genetic profile, lifestyles). Multilevel multivariate linear models were performed accounting for the clinician who met the counselee and exploring the effect of counselees' socio-demographic, familial and psychological characteristics on the perceived extent of information received. Results: Perceived no/little information received and wish for further information were more frequent for NGRF (>50%) than for GRF, especially high-risk genes (<20%). Perceived amount of information received and desire for further information were inversely correlated (p<0.0001). Higher education level related to lower perceived levels of information received on GRF. Younger counselees' age (β = 0.13, p = 0.02) and less frequent engagement coping (e.g., inclination to solicit information) (β = 0.24, p = 0.02) related to lower perceived information received about NGRF. Other assessed counselees' features were not found to be associated to GRF and NGRF information perception. Conclusions: Awareness of counselees' perceived lack of information on BC risk factors indicates a need to enhance evidence-based information on BC NGRF especially. abstract_id: PUBMED:26321750 Counselees' Expressed Level of Understanding of the Risk Estimate and Surveillance Recommendation are Not Associated with Breast Cancer Surveillance Adherence. We studied counselees' expressed understanding of the risk estimate and surveillance recommendation in the final consultation for breast cancer genetic counseling in relation with their risk perception, worry and cancer surveillance adherence 1 year post-counseling. Consecutive counselees were included from 2008 to 2010. Counselees with an indication for diagnostic DNA-testing for themselves or a breast cancer affected relative were requested to complete online questionnaires before and after counseling and one year after counseling (N = 152-124). Self-reported surveillance was compared to surveillance recommendations. Consultations were videotaped. Counselees' reactions to the risks and recommendations were coded.
Statements about the risk perception and surveillance intentions of breast cancer unaffected counselees were transcribed. Associations with outcomes were explored. Almost all breast cancer unaffected counselees (>90 %) reacted to their risk estimate with an utterance indicating understanding and this reaction was not significantly associated with their post-visit risk perception alignment. Over one-third (38.6 %) overestimated their risk post-counseling. Few counselees (5.8 %) expressed surveillance intentions. One year after counseling, about three-quarters (74.0 %) of the breast cancer unaffected counselees had adhered to the surveillance recommendation. Almost one-quarter (23.3 %) had performed more mammograms/MRI scans than recommended, which was associated with prior mammography uptake (n = 47; X² = 5.2; p = .02). Counselees' post-counseling overestimation of their risk, high levels of worry and high surveillance uptake were not reflected in their reactions to the counselor's information during the final visit. abstract_id: PUBMED:24649895 Informing relatives about their hereditary or familial cancer risk: study protocol for a randomized controlled trial. Background: Genetic counseling for hereditary breast or colon cancer has implications for both counselees and their relatives. Although counselees are encouraged by genetic counselors to disclose genetic cancer risk information, they do not always share this information with their at-risk relatives. Reasons for not informing relatives may be generally categorized as a lack of knowledge, motivation and/or self-efficacy. Presented here is the protocol of a randomized controlled trial that aims to establish the effectiveness of an intervention focused on supporting counselees in their disclosure of genetic cancer information to their relatives. Methods/design: A multicenter randomized controlled trial with parallel group design will be used to compare the effects of an additional telephone counseling session performed by psychosocial workers to enhance the disclosure of genetic cancer information to at-risk relatives (intervention group) with a control group of standard care. Consecutive index patients with relatives at risk for hereditary or familial breast and/or ovarian cancer or colon cancer, are randomly assigned (block size: 8; 1:1 allocation ratio) to the intervention (n = 132) or control group (n = 132, standard care). Primary outcomes are counselees' knowledge, motivation and self-efficacy regarding informing their relatives. Discussion: This intervention may prove important in supporting counselees to disclose hereditary and/or familial cancer risk information to at-risk relatives and may enable more at-risk relatives to make a well-informed decision regarding genetic services and/or screening. Trial Registration: This trial is registered in the Netherlands National Trial Register (NTR) with trial ID number NTR3745. abstract_id: PUBMED:27103421 Design and Feasibility of an Intervention to Support Cancer Genetic Counselees in Informing their At-Risk Relatives. Cancer genetic counselees receive individualized information regarding heightened risks and medical recommendations which is also relevant for their at-risk relatives. Unfortunately, counselees often insufficiently inform these relatives. We designed an intervention aimed at improving counselees' knowledge regarding which at-risk relatives to inform and what information to disclose, their motivation to disclose, and their self-efficacy.
The intervention, offered by telephone by trained psychosocial workers, is based on the principles of Motivational Interviewing. Phase 1 of the intervention covers agenda setting, exploration, and evaluation, and phase 2 includes information provision, enhancing motivation and self-efficacy, and brainstorming for solutions to disseminate information within the family. Fidelity and acceptability of the intervention were assessed using recordings of intervention sessions and by counselee self-report. A total of 144 counselees participated. Psychosocial workers (n = 5) delivered the intervention largely as intended. Counselees highly appreciated the content of the intervention and the psychosocial workers who delivered the intervention. In the sessions, psychosocial workers provided additional and/or corrective information, and brainstorming for solutions was performed in 70 %. These results indicate that this intervention is feasible and warrants testing in clinical practice. For this, a randomized controlled trial is currently in progress to test the intervention's efficacy. Answer: Enhanced information at cancer genetic counseling has been studied to determine its impact on counselees' knowledge, risk perception, satisfaction, and the negotiation of information to at-risk relatives. The results from various studies suggest that while there are some benefits, the overall necessity and impact of extended information may vary. A randomized study found that providing extended cancer genetic information did not significantly change the level of knowledge or the correct estimation of personal risk over time, although there was a temporary increase in both groups after two weeks. Most participants informed their at-risk relatives about their visit to the cancer genetic clinic, and the majority were highly satisfied with the counseling. The intervention group was more satisfied with the content of the information and the way of informing relatives, indicating some benefit to enhanced information (PUBMED:19636983). Another study showed that most counselees shared genetic information with their at-risk relatives, with the majority reporting positive or neutral feelings about sharing this information. While some at-risk relatives reported negative reactions, most were satisfied with the information received and intended to seek genetic counseling themselves. This suggests that sharing genetic information is generally accomplished without major difficulties, but more assistance may be needed to optimize communication within families (PUBMED:20577820). A randomized controlled trial aimed at improving cancer genetic counselees' knowledge, motivation to disclose information, and self-efficacy found no significant differences between the intervention and control groups. This was partly because many counselees had already informed their relatives before the intervention was offered. The study highlighted that counselees' knowledge about whom to inform was suboptimal, suggesting a need for interventions targeting understanding and recall (PUBMED:29789176). In summary, while enhanced information at cancer genetic counseling can improve satisfaction with the information provided and may assist in the way information is communicated to relatives, its impact on knowledge and risk perception is less clear. The current genetic counseling appears to be managed properly, and extended information may not be necessary in all cases, although some counselees could benefit from additional sessions (PUBMED:19636983).
Instruction: Can pleural adenosine deaminase (ADA) levels in pleural tuberculosis predict the presence of pulmonary tuberculosis? Abstracts: abstract_id: PUBMED:27017482 Can pleural adenosine deaminase (ADA) levels in pleural tuberculosis predict the presence of pulmonary tuberculosis? A CT analysis. Aim: To assess the relationship between imaging features of pulmonary tuberculosis at computed tomography (CT) and adenosine deaminase (ADA) values via pleural fluid analysis in patients with pleural tuberculosis. Materials And Methods: This retrospective study enrolled 60 patients who underwent fluid analysis for ADA and chest CT and were diagnosed with tuberculosis by culture or polymerase chain reaction of pleural fluid and sputum. The presence of centrilobular nodules, consolidation, cavitation, and mediastinal lymphadenopathy at CT were evaluated. The relationship between ADA values and the pattern of pulmonary involvement of tuberculosis was analysed. Results: Pulmonary involvement was seen in 42 of the 60 patients. A centrilobular nodular pattern was seen in 37 and consolidation in 22. In 17 patients, both findings were identified. A centrilobular nodular pattern was more common than consolidation or cavitary lesions. When ADA values were high, pulmonary involvement was more frequent (p=0.002). Comparing low and high ADA groups using an obtained cut-off value of 80 IU/l, the high group had more frequent pulmonary involvement (p<0.001). Conclusion: Patients with tuberculous pleurisy who had high ADA values had a higher probability of manifesting pulmonary tuberculosis. High ADA values may help predict contagious pleuroparenchymal tuberculosis. The most common pulmonary involvement of tuberculous pleurisy showed a centrilobular nodular pattern. abstract_id: PUBMED:16669825 Diagnostic value of leptin in tuberculous pleural effusions. It is suggested that leptin may be involved in inflammation. Although relation between leptin levels and active pulmonary tuberculosis has been studied, there is no information about relation between leptin levels and tuberculous pleural effusions (TPE). We evaluated the diagnostic value of pleural fluid and serum leptin levels in TPE and compared them with adenosine deaminase (ADA). Forty-five patients, 17 tuberculous effusion and 28 nontuberculous effusion, with exudative pleural effusions were included. Leptin and ADA levels were measured from serum and pleural fluid in all patients. There were no statistically significant differences between tuberculous and nontuberculous groups with respect to the serum ADA activity and pleural fluid/serum leptin ratio. On the contrary, pleural fluid leptin level, pleural fluid ADA activity, serum leptin level and pleural fluid/serum ADA activity ratio were statistically different between tuberculous and nontuberculous groups. When leptin levels were corrected for body mass index, serum leptin levels did not reach statistical significance. Cut-off points to predict tuberculosis were calculated as 9.85 ng/ml and 35.55 U/l for pleural fluid leptin level and pleural fluid ADA activity, respectively. Sensitivity, specificity and area under the curve ± standard error were 82.4%, 82.1%, 0.83 ± 0.07 for pleural fluid leptin levels and 100%, 100%, 1.00 ± 0.00 for pleural fluid ADA activity, respectively; the difference between these curves was significant (p = 0.01). Pleural fluid leptin levels were lower in tuberculous effusions than in other exudates.
Pleural fluid leptin has a diagnostic value for TPE but not as good as that of ADA. abstract_id: PUBMED:27135844 Pleural tuberculosis in a hospital in Northeast Argentina Pleural tuberculosis ranks first in extrapulmonary sites. The aim of this study was to estimate the proportion pleural TB among TB cases, and characterize the presentation, diagnostic methods and outcomes of patients hospitalized in the ángela I. de Llano hospital, Corrientes, Argentina, between January 1, 2011 and June 30, 2014. We performed a descriptive and observational study. Ten patients were diagnosed with TB pleural effusion. The mean age was 48.5 ± 16.9 (16-63) years. The average evolution time before the consultation was 21.3 ± 11.6 (7-45) days. All were unilateral, exudates, with cell count of 2152 ± 687 (84-7000) cells; 8 cases had lymphocyte predominance. The average value of adenosine deaminase determination (ADA) was 92.7 ± 27.0 (60-150) IU/l. The pleural effusion smear was positive in 4 cases; development of Mycobacterium tuberculosis was obtained in 3 cases; histologically, three presented caseating granulomas. One death was recorded. Although pleural TB usually occurs in male patients, middle-aged, as a unilateral exudative pleural effusion with lymphocytic predominance, with less than a month's evolution, diagnostic certainty has its limitations, thus clinical suspicion, epidemiology, imaging, pathology and laboratory tests, and determination of adenosine deaminase levels, represent a valuable contribution to diagnosis. abstract_id: PUBMED:20345583 Update on tuberculous pleural effusion. The possibility of tuberculous pleuritis should be considered in every patient with an undiagnosed pleural effusion, for if this diagnosis is not made the patient will recover only to have a high likelihood of subsequently developing pulmonary or extrapulmonary tuberculosis Between 3% and 25% of patients with tuberculosis will have tuberculous pleuritis. The incidence of pleural tuberculosis is higher in patients who are HIV positive. Tuberculous pleuritis usually presents as an acute illness with fever, cough and pleuritic chest pain. The pleural fluid is an exudate that usually has predominantly lymphocytes. Pleural fluid cultures are positive for Mycobacterium tuberculosis in less than 40% and smears are virtually always negative. The easiest way to establish the diagnosis of tuberculous pleuritis in a patient with a lymphocytic pleural effusion is to generally demonstrate a pleural fluid adenosine deaminase level above 40 U/L. Lymphocytic exudates not due to tuberculosis almost always have adenosine deaminase levels below 40 U/L. Elevated pleural fluid levels of gamma-interferon also are virtually diagnostic of tuberculous pleuritis in patients with lymphocytic exudates. In questionable cases the diagnosis can be established by demonstrating granulomas or organisms on tissue specimens obtained via needle biopsy of the pleura or thoracoscopy. The chemotherapy for tuberculous pleuritis is the same as that for pulmonary tuberculosis. abstract_id: PUBMED:14982595 The management of pleural space infections. Pleural infection is responsible for significant morbidity and mortality worldwide, and its clinical management is challenging. The diagnosis of empyema and tuberculous pleurisy may be difficult, and these conditions may be confused with other causes of exudative pleural effusions. 
Complicated parapneumonic effusion or empyema may present with 'atypical' clinical features; delays in diagnosis are common and may contribute to the high mortality of these infections. Pleural aspiration is the key diagnostic step; pleural fluid that is purulent or that has a pH < 7.2, or organisms on Gram stain or culture, is an indication for formal intercostal drainage. In order to achieve a definitive diagnosis of tuberculous pleurisy, Mycobacterium tuberculosis must be isolated in the culture of pleural fluid, pleural tissue or sputum; demonstration of granulomas in pleural tissue is also suggestive of tuberculosis. The use of pleural fluid biochemical markers, such as adenosine deaminase, in the diagnosis of tuberculous pleurisy varies among clinicians; the diagnostic value of such markers is affected by the background prevalence of tuberculosis and the likelihood of an alternative diagnosis. Uncertainties also remain regarding the treatment of pleural infection. Treatment of complicated parapneumonic effusion and empyema involves prolonged courses of antibiotics and attention to the patient's nutritional state. The role of intrapleural fibrinolytics and the optimal timing of surgical intervention are unknown. The lack of clear predictors of clinical outcome in empyema contributes to the difficulty in treating this condition. The pharmacological treatment of tuberculous pleurisy is the same as for pulmonary tuberculosis; the precise role of steroids in the treatment of tuberculous pleurisy remains uncertain. abstract_id: PUBMED:29660258 Pleural tuberculosis: A concise clinical review. Tuberculosis (TB) is the leading infectious cause of death worldwide, and the commonest cause of death in people living with HIV. Globally, pleural TB remains one of the most frequent causes of pleural exudates, particularly in TB-endemic areas and in the HIV positive population. Most TB pleural effusions are exudates with high adenosine deaminase (ADA), lymphocyte-rich, straw-coloured and free flowing, with a low yield on mycobacterial culture. TB pleurisy can also present as loculated neutrophil-predominant effusions which mimic parapneumonic effusions. Rarely, they can present as frank TB empyema, containing an abundance of mycobacteria. Up to 80% of patients have parenchymal involvement on chest imaging. The diagnosis is simple if M. tuberculosis is detected in sputum, pleural fluid or biopsy specimens, and the recent advent of liquid medium culture techniques has increased the microbiological yield dramatically. Where the prevalence of TB is high the presence of a lymphocyte-predominant exudate with a high ADA has a positive predictive value of 98%. In low prevalence areas, the absence of an elevated ADA and lymphocyte predominance makes TB very unlikely, and pleural biopsy should be performed to confirm the diagnosis. Pleural biopsy for liquid culture and susceptibility testing must also be considered where the prevalence of drug resistant TB is high. Treatment regimens are identical to those administered for pulmonary TB. Initial pleural drainage may have a role in symptom relief and in hastening the resolution of the effusion. Surgical intervention may be required in loculated effusions and empyemas. abstract_id: PUBMED:29021481 Cryptococcal Pleuritis Presenting with Lymphocyte-predominant and High Levels of Adenosine Deaminase in Pleural Effusions Coincident with Pulmonary Tuberculosis. Co-infection with cryptococcus and tuberculosis has rarely been reported.
We herein report a case of an 80-year-old man with cryptococcal pleuritis concurrent with pulmonary tuberculosis. He was admitted for progression of left pleural effusion and consolidation in the left upper lobe. Culture for Mycobacterium tuberculosis was positive in sputum, and analyses of pleural effusion revealed lymphocyte-predominant high levels of adenosine deaminase (ADA). Medical thoracoscopy revealed massive infiltration of Cryptococcus neoformans in pleura without granuloma. This is the first case report of cryptococcal pleuritis coincident with pulmonary tuberculosis. Cryptococcal pleuritis should be ruled out when the adenosine deaminase levels are elevated in pleural effusion. abstract_id: PUBMED:31680227 The role of genexpert in the diagnosis of tubercular pleural effusion in India. Introduction: Tubercular pleural effusion is the second most common extrapulmonary form of tuberculosis in India. Developing nations like India face several health challenges and with limited resources, appropriate planning and channelization of the same is the need of the hour. Material And Methods: The objective of the study was to determine the role of cartridge-based nucleic acid amplification test (CBNAAT) in the diagnosis of tubercular pleural effusion (TPE) and also to study if any association exists between CBNAAT and pleural fluid adenosine deaminase (ADA) and lymphocyte counts. Clinically suspected TPE, lymphocyte predominant (≥ 70%) exudates (according to the Light's criteria) with ADA ≥ 40 U/L and microbiologically confirmed pulmonary tuberculosis patients with a co-existent pleural effusion were included. Pleural fluid CBNAAT was performed on all the samples. Results: Out of a total of 75 patients, 57 were males and 18 were females. A lymphocyte predominance of ≥ 70% was seen in 73 subjects (97%). Mean ADA was 61.7 U/L ± 16.2 (SD). Pleural fluid CBNAAT was positive for Mycobacterium tuberculosis (MTB) in 24 patients (32%). Out of these patients, rifampicin resistance was detected in 2 individuals (8.3%). Sputum smear for acid fast bacilli (AFB) was positive in 3 (4%) patients, whereas in sputum CBNAAT MTB was detected in 8 (10.6%) persons. Association between pleural fluid ADA, lymphocyte count and CBNAAT positivity was evaluated by Student T-test. There was a significant association between higher ADA levels and CBNAAT (p value = 0.001). Conclusions: Pleural fluid CBNAAT, owing to its low sensitivity, should not be included in the diagnostic protocol of TPE in high prevalence areas. A high ADA ≥ 40 U/L in combination with Light's criteria to define exudates, with lymphocyte predominance is sufficient evidence to diagnose TPE and initiate anti-tubercular therapy, thereby deferring the need to perform an invasive pleural biopsy. abstract_id: PUBMED:21172672 Pleural tuberculosis in children. Pleural tuberculosis effusion (PTE) in children is a diagnosis which must be considered in isolated pleural effusions in non-toxemic children. It is more common in children over 5 years of age. A history of close contact with an adult with pulmonary tuberculosis reinforces the suspicion for its diagnosis. Pleural effusion without any parenchymal lesion is the characteristic finding on the chest x-ray. However, in 20% to 40% of patients, intrathoracic disease may also occur. Adenosine deaminase, interferon-gamma, analysis of pleural fluid and pleural biopsy are the main tools for diagnostic confirmation. Tuberculin skin test may provide supporting evidence of tuberculous infection.
PTE has a good prognosis in children and no long term sequelae are expected. abstract_id: PUBMED:12608433 Diagnostic value of adenosine deaminase in nontuberculous lymphocytic pleural effusions. Adenosine deaminase (ADA) can aid in the diagnosis of tuberculous pleural effusions, but false-positive findings from lymphocytic effusions have been reported. The purpose of this study is to assess the ADA levels in nontuberculous lymphocytic pleural effusions (lymphocyte count > 50%) of different aetiologies. Altogether, 410 nontuberculous lymphocytic pleural fluid samples were consecutively selected. These included malignant effusions (n = 221), idiopathic effusions (n = 76), parapneumonic effusions (n = 35), postcoronary artery bypass graft surgery effusions (n = 6), miscellaneous exudative effusions (n = 21) and transudative effusions (n = 51). The ADA level reached the diagnostic cut-off for tuberculosis (40 U/L) in seven of the 410 cases (1.71%). The negative predictive value of ADA for the diagnosis of pleural tuberculosis was 99% (403 of 407 cases) in the group of lymphocytic pleural effusions. In five of these seven patients ADA1 and ADA2 were measured, and in all these cases (100%) ADA1/ADA(p) correctly classified these lymphocytic effusions as nontuberculous (ratio < 0.42). This prospective study provides additional evidence that adenosine deaminase levels in nontuberculous lymphocytic pleural effusions seldom exceed the cut-off set for tuberculous effusions. The pleural fluid adenosine deaminase levels were significantly higher in different types of exudative effusions than in transudates. An adenosine deaminase level < 40 IU/L virtually excluded a diagnosis of tuberculosis in lymphocytic pleural effusions. Adenosine deaminase1/adenosine deaminase(p) correctly classified all nontuberculous lymphocytic pleural effusions with high adenosine deaminase levels. Answer: Yes, pleural adenosine deaminase (ADA) levels in pleural tuberculosis can predict the presence of pulmonary tuberculosis. A study that assessed the relationship between imaging features of pulmonary tuberculosis at computed tomography (CT) and ADA values via pleural fluid analysis in patients with pleural tuberculosis found that patients with tuberculous pleurisy who had high ADA values had a higher probability of manifesting pulmonary tuberculosis. High ADA values may help predict contagious pleuroparenchymal tuberculosis, and the most common pulmonary involvement of tuberculous pleurisy showed a centrilobular nodular pattern (PUBMED:27017482). Furthermore, another study that aimed to determine the role of cartridge-based nucleic acid amplification test (CBNAAT) in the diagnosis of tubercular pleural effusion (TPE) in India also found a significant association between higher ADA levels and CBNAAT positivity, suggesting that high ADA levels in pleural fluid are indicative of TPE and can be used to diagnose and initiate anti-tubercular therapy without the need for an invasive pleural biopsy (PUBMED:31680227). These findings are supported by the fact that pleural tuberculosis is often associated with pulmonary tuberculosis, and ADA is a useful diagnostic marker for tuberculous pleurisy, with levels above 40 U/L generally indicating the condition (PUBMED:20345583). Additionally, pleural tuberculosis usually presents as a lymphocytic exudate with high ADA levels, and the diagnosis is simpler if M. tuberculosis is detected in sputum, pleural fluid, or biopsy specimens (PUBMED:29660258).
In summary, high pleural ADA levels are associated with a higher likelihood of pulmonary tuberculosis in patients with pleural tuberculosis, and ADA is a valuable diagnostic marker for this condition.
Instruction: Assessing website pharmacy drug quality: safer than you think? Abstracts: abstract_id: PUBMED:20730049 Assessing website pharmacy drug quality: safer than you think? Background: Internet-sourced drugs are often considered suspect. The World Health Organization reports that drugs from websites that conceal their physical address are counterfeit in over 50 percent of cases; the U.S. Food and Drug Administration (FDA) works with the National Association of Boards of Pharmacy (NABP) to regularly update a list of websites likely to sell drugs that are illegal or of questionable quality. Methods And Findings: This study examines drug purchasing over the Internet, by comparing the sales of five popular drugs from a selection of websites stratified by NABP or other ratings. The drugs were assessed for price, conditions of purchase, and basic quality. Prices and conditions of purchase varied widely. Some websites advertised single pills while others only permitted the purchase of large quantities. Not all websites delivered the exact drugs ordered, some delivered no drugs at all; many websites shipped from multiple international locations, and from locations that were different from those advertised on the websites. All drug samples were tested against approved U.S. brand formulations using Raman spectrometry. Many (17) websites substituted drugs, often in different formulations from the brands requested. These drugs, some of which were probably generics or perhaps non-bioequivalent copy versions, could not be assessed accurately. Of those drugs that could be assessed, none failed from "approved", "legally compliant" or "not recommended" websites (0 out of 86), whereas 8.6% (3 out of 35) failed from "highly not recommended" and unidentifiable websites. Conclusions: Of those drugs that could be assessed, all except Viagra(R) passed spectrometry testing. Of those that failed, few could be identified either by a country of manufacture listed on the packaging, or by the physical location of the website pharmacy. If confirmed by future studies on other drug samples, then U.S. consumers should be able to reduce their risk by relying on credentialing agencies recommended lists and by using common sense when examining packaging and pills. abstract_id: PUBMED:34569940 Development of and Experiences With an Informational Website on Early Labor: Qualitative User Involvement Study. Background: The period of regular contractions before 4 cm of cervical dilatation is often referred to as the latent phase or early labor. Women find it challenging to prepare for and cope with this phase of labor, and easily accessed web-based information from reliable sources may be useful in this preparation. Objective: The aim of this study is to describe the development of a Norwegian website, Latens.no, for people seeking information on early labor and to explore users' experiences with the website to increase its user-friendliness. Methods: We developed a website using an iterative process involving a multidisciplinary research team, health personnel, users, a graphic designer, and an expert in software development. We explored the website's user-friendliness using semistructured individual interviews and the think-aloud method. All interviews were audio recorded and transcribed. We then analyzed the participants' feedback on the website. Results: Participants included women who had recently given birth to their first baby (n=2), women who were pregnant with their first baby (n=4), and their partners (n=2). 
Results from participants' experiences completing tasks included positive feedback related to the content of Latens.no, positive feedback related to the website's design, and suggestions for improvement. Participants wanted to find information on early labor on the internet. Moreover, they found the information on the website relevant, trustworthy, and easy to read, and the design was attractive and easy to use. Overall, the participants performed the tasks easily, with few clicks and minimal effort. Conclusions: The think-aloud method, while performing tasks, allowed for detailed feedback. The participants confirmed the user-friendliness of the website but at the same time provided information enabling improvement. We expect that changes made based on this user-centered design study will further increase the usability and acceptability of Latens.no. abstract_id: PUBMED:23983491 Attrition of Canadian Internet pharmacy websites: what are the implications? Background: The unavailability of Internet pharmacy websites may impact a consumer's drug purchases and health care. Objective: To address the issue of attrition, a defined set of Canadian Internet pharmacy websites was examined at three separate time intervals. Methods: In February to March 2006, 117 distinct, fully functional "Canadian Internet pharmacy" websites were located using the advanced search options of Google and the uniform resource locator (URL) for each website was recorded. To determine website attrition, each of the 117 websites obtained and recorded from the previous study was revisited at two later periods of time within a 4-year period. Results: After approximately 4 years and 5 months, only 59 (50.4%) sites were found in the original state. Thirty-four sites (29.1%) had moved to a new URL address and were not functioning as the original Internet pharmacy. For 24 sites (20.5%) the viewer was redirected to another Canadian Internet pharmacy site. Conclusion: Of concern for patients if Internet pharmacy sites were suddenly inaccessible would be the disruption of continuity of care. abstract_id: PUBMED:24860689 Using Think Aloud Protocols to Assess E-Prescribing in Community Pharmacies. Introduction: Think aloud protocol has rarely been used as a method of data collection in community pharmacies. Purpose: The aim of the report is to describe how think aloud protocols were used to identify issues that arise when using e-prescribing technology in pharmacies. In this paper, we report on the benefits and challenges of using think aloud protocols in pharmacies to examine the use of e-prescribing systems. Methods: Sixteen pharmacists and pharmacy technicians were recruited from seven community pharmacies in Wisconsin. Data were collected using direct observation alongside think aloud protocol. Direct observations and think aloud protocols took place between January-February, 2011. Participants were asked to verbalize their thoughts as they process electronic prescriptions. Results: Participants identify weaknesses in e-prescribing that they had previously not conceived. This created heightened awareness for vigilance when processing e-prescriptions. The main challenge with using think aloud protocols were interruptions in the pharmacy. Some participants found it difficult to remember to continue verbalizing during think aloud sessions. Conclusion: The use of think aloud protocols as method of data collection is a new way for understanding the issues related to technology use in community pharmacy practice. 
Think aloud protocol was beneficial in providing objective information on e-prescribing not based on pharmacist's or technician's opinion of the technology. This method provided detailed information and also a wide variety of real time challenges with e-prescribing technology in community pharmacies. Using this data collection method can help identify potential patient safety issues when using e-prescribing and suggestions for redesign. abstract_id: PUBMED:26706404 Are patient's simply unaware? A cross-sectional study of website promotion of community pharmacy adherence-related services in Tennessee. Background: Medication adherence rates can impact patient outcomes, quality of life, and health care costs. Community pharmacists who offer adherence-related services are well positioned to bolster medication adherence rates. One means for patients to learn more about these services is through the pharmacy's website. However, the prevalence of pharmacies' promotion of adherence services through the internet is unknown. Objectives: The present study aimed to quantify the online promotion of pharmacies' adherence-related services. Methods: This was a cross-sectional, observational study of websites representing licensed community pharmacies. One-hundred and sixty-nine community pharmacy websites, representing 1161 of community pharmacies in Tennessee (U.S. State), were included in the observational analysis. Results: The most commonly promoted adherence-related service was online refills (81.1%). Auto refill, medication synchronization, and packaging systems were promoted on <20% pharmacy websites. Types of promoted adherence services differed between chain and independent pharmacy websites. Conclusions: Despite 67% of pharmacies offering adherence-related services, only a fraction of them promote these services online. Patient awareness represents a hidden and often unaddressed variable in increasing adherence. Future studies should investigate whether increasing website promotion of these services would increase service use, and potentially improve adherence rates. abstract_id: PUBMED:35272985 Cognitive and Metacognitive Processes Demonstrated by Pharmacy Students When Making Therapeutic Decisions. Objective. To characterize the types of cognitive and metacognitive processes demonstrated by third-year pharmacy students during a therapeutic reasoning activity. Methods. A qualitative, descriptive study following a think-aloud protocol was used to analyze the cognitive (analytical) and metacognitive processes observed by third-year pharmacy students as they completed a 25-minute therapeutic reasoning activity. Using a deductive codebook developed from literature about reasoning, two independent coders characterized processes from students' audio-recorded, transcribed think-aloud episodes while making therapeutic decisions about simulated clinical cases. Results. A total of 40 think-aloud episodes were transcribed among the cohort. Categorization of the think-aloud transcriptions revealed a series of cognitive analytical and metacognitive processes demonstrated by students during the therapeutic decision-making activity. A total of 1792 codes were categorized as analytical processes, falling into six major themes: 69% gathering information (1232/1792), 13% processing information (227/1792), 7% making assessments (133/1792), 1% synthesizing information (19/1792), 7% articulating evidence (117/1792), and 4% making a recommendation (64/1792).
In comparison to gathering information, a much lower frequency of processing and assessment was observed for students, particularly for those that were unable to resolve the case. Students' movement between major analytical processes co-occurred commonly with metacognitive processes. Of the 918 codes categorized as metacognitive processes, two major themes arose: 28% monitoring for knowledge or emotions (257/918) and 72% controlling the planning of next steps or verification of correct information (661/918). Sequencing the codes and co-occurrences of processes allowed us to propose an integrated cognitive/metacognitive model of therapeutic reasoning for students. Conclusion. This study categorizes the cognitive (analytical) and metacognitive processes engaged during pharmacy students' therapeutic reasoning process. The findings can inform current instructional practices and further research into educational activities that can strengthen pharmacy students' therapeutic reasoning skills. abstract_id: PUBMED:27102265 How pharmacists check the appropriateness of drug therapy? Observations in community pharmacy. Background: In Alberta (Canadian province), the Chat Check Chart (CCC) model was developed to help pharmacists understand how patient care standards fit into routine practice. The CCC model outlines a process that covers the following: 1) Chat-asking three prime questions to gather information; 2) Check-perform pharmacotherapy workup by assessing prescription's indication, efficacy, safety and manageability; and 3) Chart-document findings of the pharmacotherapy workup. Objectives: To characterize how pharmacists collect patient information and apply the pharmacotherapy workup when evaluating routine prescriptions in community pharmacy settings. Methods: An observational study with a mixed methods analysis was employed. Participants were surveyed and audio-recorded talking with patients, as well as thinking aloud while evaluating medication therapy. Quantitative analysis was conducted to describe the proportion of time allotted to clinical or technical duties when using the 3 prime questions and pharmacotherapy workup in routine practice. A generic qualitative approach was conducted to describe how the pharmacists evaluated prescriptions and counseled patients. Results: Nine pharmacists from five different pharmacy stores participated in this study. Fifteen recordings of consults and 14 think-alouds were eligible for analysis. Pharmacists allotted 16% of their think-aloud on clinical related issues for new and chronic medications, and the remainder on technical dispensing activities. All pharmacists checked if the medication was safe, but fewer than half checked if the prescription was indicated, effective, or useable (i.e., adherence). Pharmacists covered more content areas when checking the appropriateness for new prescriptions in comparison to chronic refills. Four overarching themes arose from the qualitative analysis: missed opportunities, absence of personalized assessments, reliance on routine pharmacist activities, and non-specific questions. Conclusions: Pharmacists gathered insufficient clinical information to assess their patients' medication therapy, as they overly relied on the patients' profiles, asked patients non-specific questions, and missed patient cues. Routine activities appeared to shape practice including a bias against refill prescriptions and a focus on technical dispensing activities.
abstract_id: PUBMED:36011198 Usefulness of a Drug Information Resources Website (SAGASU-DI) Created Based on Inquiries to Clinical Pharmacists. The use of drug information is increasing as the role of pharmacists expands. However, pharmacists spend a huge amount of time collecting drug information, and there has not been any drug information resource website that aims to improve the efficiency of community pharmacists with regard to drug information operations. The purpose of this study was to evaluate the usefulness of a drug information resources website (SAGASU-DI). SAGASU-DI articles were created based on inquiries to clinical pharmacists. The usage statistics of the SAGASU-DI were monitored using Google Analytics between May and July 2021. In addition, a pop-up questionnaire was used to evaluate the usefulness of the SAGASU-DI in answering the questions of viewers. Statistics showed that our website had 25,447 users and 33,000 page views, with a browsing time of 29 s on average. Visitors accessed the website through desktop (51.9%) and mobile (44.3%) systems. Community pharmacists accounted for 40.2% of the visitors. The pop-up questionnaire showed that 23.2% of the viewers were satisfied with our website, and 1.5% of the viewers were not. Although 65.1% of the viewers refused to answer the questionnaire, the average percentage of the viewers who were satisfied with the site was 94.1%. The SAGASU-DI was found to be a valuable resource website for drug information services, mainly for community pharmacists. abstract_id: PUBMED:31466883 Connecting one's own illness story to the illness experiences of others on a website-An evaluation study using the think aloud method. Objective: It is well described how telling one's illness story can help to cope with illness, but little is known about the processes of reception of other people's stories. This study aimed to analyse patients' reception processes of other patients' experiences while using a website (www.krankheitserfahrungen.de - DIPEx Germany). Methods: A text analysis of think aloud transcripts was conducted, using data from a usability study of the website krankheitserfahrungen.de. Twenty patients with the same conditions as presented on the website (chronic pain, diabetes type 2, inflammatory bowel disease, epilepsy) were assigned to the study, asked to use the website and concurrently to think aloud. The sessions were audio recorded, transcribed and analysed using grounded theory methodology. Results: Study participants started to talk about their own illness experiences while using the website. They constantly compared their experiences with those they read about. Participants' verbalised experiences were categorised according to three underlying themes: significant emotions, unresolved problems, and inevitability and acceptance. Conclusion: Reception of a variety of illness experiences presented online led patients to consider their own challenges in coping with their illness. Practice Implications: Providing other patients' experiences in information and education materials helps patients to deal with their own illness. abstract_id: PUBMED:28181321 Availability of drug information on community pharmacy websites in Tennessee. Objective: This study reports the availability and type of drug information resources on community pharmacy websites in Tennessee. Methods: A list of pharmacies was obtained from the Tennessee Board of Pharmacy. All community pharmacies located in Tennessee with a website were evaluated. 
Community pharmacy websites that listed a drug information resource were further analysed into pharmacy type and drug information provider. Key Findings: One hundred and sixty-nine websites were evaluated. Thirty-six websites listed a drug information resource. Fifty per cent of those sites used the drug information provider RxWiki. Other providers included drugs.com, First Data Bank, A.D.A.M., RxList, Gold Standard Multimedia and Express Scripts. Conclusion: Community pharmacy websites are underutilized as a provider for credible drug information for the public. Answer: The study described in abstract PUBMED:20730049 investigated the quality of drugs purchased from various internet pharmacy websites, which were stratified by ratings from the National Association of Boards of Pharmacy (NABP) or other ratings. The study found that while prices and conditions of purchase varied widely across websites, and not all websites delivered the exact drugs ordered, the quality of the drugs that could be assessed was generally high. None of the drugs from "approved", "legally compliant" or "not recommended" websites failed Raman spectrometry testing (0 out of 86 samples). However, 8.6% (3 out of 35) of the drugs from "highly not recommended" and unidentifiable websites failed the test. The study concluded that U.S. consumers could reduce their risk by relying on lists from credentialing agencies and using common sense when examining packaging and pills. This suggests that while there are risks associated with purchasing drugs from internet pharmacies, especially those that are "highly not recommended" or unidentifiable, consumers can find safer options by using websites that have been vetted by credentialing agencies. The study's findings indicate that not all internet-sourced drugs are of poor quality, and with careful selection of the source, consumers can access medications that pass quality assessments. Therefore, assessing website pharmacy drug quality might be safer than commonly perceived, provided that consumers are discerning in their choices and utilize recommended resources to guide their purchases.
Instruction: Cardiac arrest leadership: in need of resuscitation? Abstracts: abstract_id: PUBMED:34093075 Education for resuscitation These European Resuscitation Council education guidelines are based on the 2020 International Consensus on Cardiopulmonary Resuscitation Science with Treatment Recommendations. This section provides guidance to citizens and healthcare professionals with regard to teaching and learning the knowledge, skills and attitudes of resuscitation with the ultimate aim of improving patient survival after cardiac arrest. abstract_id: PUBMED:34984492 Guidelines of the European Resuscitation Council (ERC) on cardiopulmonary resuscitation 2021: update and comments The European guidelines on cardiopulmonary resuscitation, which are divided into 12 chapters, have recently been published. In addition to the already known chapters, the topics "epidemiology" and "life-saving systems" have been integrated for the first time. For each chapter five practical key statements were formulated. In the present article the revised recommendations on basic measures and advanced resuscitation measures in adults as well as on postresuscitation treatment are summarized and commented on. abstract_id: PUBMED:36424475 Ethical aspects of mechanical resuscitation in a child : Results of an expert workshop within the BMBF project CLAIRE-Children's Automated Intelligent Resuscitation Background: While the use of mechanical resuscitation devices can be considered for adult resuscitation, the European Resuscitation Council guidelines do not yet mention their use for pediatric resuscitation. Only one device has been partially approved for use in children; further pediatric appliances are currently being used off-label. Ethical considerations arising from the use of mechanical resuscitation devices have not yet been presented in a structured way. Objective: To elaborate ethical considerations in the development phase of mechanical resuscitation devices for children. Methods: Based on several fictitious case reports, an interdisciplinary expert focus group discussion was conducted. This was followed by a moderated discussion, summarizing the results. Guiding principles and research desiderata were formulated using these results as well as existing literature. Results: According to the group of experts, ethical considerations regarding mechanical resuscitation devices in pediatrics predominantly concern the subject of indication and discontinuation criteria. Ethical aspects concerning psychosocial impacts on affected families and intervention teams cannot be generalized and need to be analyzed on a case-by-case basis. Conclusion: The considerations presented regarding the use of mechanical resuscitation devices in the pediatric context, which is still in its developmental stage, could also have practical implications for adult out-of-hospital resuscitation decisions. Concerning ethical aspects of out-of-hospital resuscitation decisions, especially using mechanical resuscitation devices, the need for accompanying empirical research is substantial. abstract_id: PUBMED:38261877 The Young European Resuscitation Council Resuscitation Science Masterclass - Concept and implementation. Nurturing the development of the next generation of resuscitation scientists is essential for creating a vibrant and enabled community equipped with the necessary knowledge, attitudes, and skills to transform resuscitation practice and improve outcomes. 
In this concept paper we will describe the development and implementation of the first Young European Resuscitation Council Resuscitation Science Masterclass. The masterclass aims to connect, inspire, and support the growth of the next generation of resuscitation scientists through education, networking, and joint scientific work. The masterclass provides 20 international, multi-professional early career resuscitation scientists with the opportunity to expand their knowledge and network as well as conduct joint scientific work over the course of one year. This is achieved by interactive webinars, innovative online workshops, engaging online journal clubs as well as an educational in-person event to conclude the masterclass. The Young European Resuscitation Council Resuscitation Science Masterclass aims to strengthen the global resuscitation community and next generation of resuscitation scientists by facilitating opportunities of broader international initiatives and collaboration for early career resuscitation scientists, potentially leading to accelerated recruitment of future resuscitation leaders. Ultimately, this masterclass may enable early career researchers to produce high impact research that can shape the future of resuscitation science and improve cardiac arrest patient care globally. abstract_id: PUBMED:9918442 Decisions to terminate resuscitation. Resuscitation Committee. To gain more insight into decision making around the termination of resuscitation (CPR), we studied factors which influenced the time before discontinuing resuscitation, and the criteria on which those decisions were based. These criteria were compared with those of the European Resuscitation Council (ERC) and the American Heart Association (AHA). For this study, we reviewed the audiotapes of resuscitation attempts in a hospital. A total of 36 attempts were studied, involving 27 men and nine women, mean (S.D.) age 64 (18) years. A total of 19 patients received resuscitation on general wards, and 17 in the emergency room after an out-of-hospital circulatory arrest. The median interval time (range) from start to termination was 33 min (8-81 min). Results from multiple linear regression showed that a delay greater than 5 min in first advanced life support measures, drawing a sample for biochemical analysis, and the patient's response shown by return of spontaneous circulation were independently associated with the time of terminating resuscitation. The team used a number of criteria which can be found in the guidelines of the ERC and the AHA, but also used additional criteria. The ERC and the AHA criteria were not sufficient to cover all termination decisions. We conclude that the point in time to terminate resuscitation is not always rationally chosen. Updating of the current guidelines for terminating resuscitation and training resuscitation teams to use these guidelines is recommended. abstract_id: PUBMED:28145206 An overview of adult cardiopulmonary resuscitation equipment. Nurses have many roles and responsibilities in relation to cardiopulmonary resuscitation (CPR), including: recognising that a patient is deteriorating; recognising cardiac arrest and commencing CPR while waiting for the resuscitation team to arrive; ensuring the contents of the resuscitation trolley are present, in date and in full working order; and completing documentation for the National Cardiac Arrest Audit in participating healthcare organisations. 
Many patient safety incidents involving resuscitation trolley equipment and resuscitation have been reported, and predominantly relate to a lack of equipment, missing equipment and inadequately stocked trolleys. This article provides an overview of the contents of the standard resuscitation trolley and a rationale for the use of each item. It discusses the importance of checking and restocking the resuscitation trolley, as well as the documentation of CPR efforts. abstract_id: PUBMED:20873123 Cardiopulmonary resuscitation in pregnancy--European Resuscitation Council guidelines Cardio-pulmonary resuscitation is a life-saving technique that should be familiar to all people, even those without medical education. There are two basic life-saving levels: BLS (Basic Life Support) and ALS (Advanced Life Support). ALS is a medical procedure that is restricted to medical practitioners. Cessation of circulation may happen to anyone. Cardiac arrest in case of pregnant women is a very specific state. The cause of life-threatening states during pregnancy can be connected with new infections, exacerbation of chronic diseases, as well as changes connected with the pregnancy itself. In those situations, due to physiological and anatomical changes which occur during pregnancy, some modifications in the procedure of resuscitation are necessary. abstract_id: PUBMED:35723698 The new 2021 resuscitation guidelines and the importance of lay resuscitation Lay resuscitation is one of the most important measures to increase the survival rate of patients after out-of-hospital cardiac arrest. While European countries, and especially Scandinavian countries, achieve lay resuscitation rates of over 80%, the rate in Germany is only around 40%. The 2021 Resuscitation Guidelines updated by the European Resuscitation Council give special weight to Systems Saving Lives and focus on resuscitation by laypersons. The Systems Saving Lives emphasize the interplay between all actors involved in the chain of survival and thereby specify the link between the emergency service and the general population. Based on the BIG FIVE survival strategies after cardiac arrest, five key strategies are outlined that can achieve the greatest improvement in survival. These are (1) increasing lay resuscitation rates through campaigns and KIDS SAVE LIVES school-based resuscitation training, (2) implementing telephone resuscitation in dispatch centers, (3) first responder systems, (4) advanced life support, and (5) specialized cardiac arrest centers. abstract_id: PUBMED:35077856 Resuscitation highlights in 2021. Background: This review is the latest in a series of regular annual reviews undertaken by the editors and aims to highlight some of the key papers published in Resuscitation during 2021. Methods: Hand-searching by the editors of all papers published in Resuscitation during 2021. Papers were selected based on their general interest and novelty and were categorised into themes. Results: 98 papers were selected for brief mention. Conclusions: Resuscitation science continues to evolve and incorporates all links in the chain of survival. abstract_id: PUBMED:28109621 The latest in paediatric resuscitation recommendations Cardiac arrest has a high mortality in children. To improve the performance of cardiopulmonary resuscitation, it is essential to disseminate the international recommendations and the training of health professionals and the general population in resuscitation.
This article summarises the 2015 European Paediatric Cardiopulmonary Resuscitation recommendations, which are based on a review of the advances in cardiopulmonary resuscitation and consensus in the science and treatment by the International Council on Resuscitation. The Spanish Paediatric Cardiopulmonary Resuscitation recommendations, developed by the Spanish Group of Paediatric and Neonatal Resuscitation, are an adaptation of the European recommendations, and will be used for training health professionals and the general population in resuscitation. This article highlights the main changes from the previous 2010 recommendations on prevention of cardiac arrest, the diagnosis of cardiac arrest, basic life support, advanced life support and post-resuscitation care, as well as reviewing the algorithms of treatment of basic life support, obstruction of the airway and advanced life support. Answer: The abstracts provided do not directly address the question of whether cardiac arrest leadership is in need of resuscitation. However, they do offer insights into various aspects of resuscitation practices, guidelines, and education, which could be indirectly related to the quality of leadership in cardiac arrest situations. For instance, the European Resuscitation Council (ERC) guidelines are updated regularly to reflect the latest consensus on cardiopulmonary resuscitation science and treatment recommendations, which could influence leadership practices in resuscitation (PUBMED:34093075, PUBMED:34984492). The guidelines cover a range of topics, including basic and advanced resuscitation measures, post-resuscitation treatment, and the importance of systems that save lives, such as lay resuscitation (PUBMED:34984492, PUBMED:35723698). Ethical considerations, particularly in the context of pediatric resuscitation and the use of mechanical resuscitation devices, also play a role in decision-making and leadership during resuscitation efforts (PUBMED:36424475). Decisions to terminate resuscitation are complex and may not always align with existing guidelines, suggesting a potential area for improvement in leadership and decision-making protocols (PUBMED:9918442). The development of resuscitation scientists through initiatives like the Young European Resuscitation Council Resuscitation Science Masterclass indicates an investment in the future leadership of resuscitation science, which could ultimately impact cardiac arrest outcomes (PUBMED:38261877). Furthermore, the importance of proper equipment and training for nurses, who are often on the front lines of recognizing and responding to cardiac arrest, highlights the need for strong leadership in ensuring readiness and adherence to best practices (PUBMED:28145206). Lastly, the emphasis on disseminating international recommendations and training for both health professionals and the general population in pediatric resuscitation suggests an ongoing effort to improve leadership and outcomes in this area (PUBMED:28109621). In summary, while the abstracts do not directly address the state of cardiac arrest leadership, they do suggest that leadership is a critical component of effective resuscitation practices, and there are ongoing efforts to improve resuscitation outcomes through updated guidelines, ethical considerations, training, and scientific development.
Instruction: Better survival in female patients with hepatocellular carcinoma: oral contraceptive pills related? Abstracts: abstract_id: PUBMED:15942442 Better survival in female patients with hepatocellular carcinoma: oral contraceptive pills related? Background: Hepatocellular carcinoma (HCC) has an indisputable male predominance. "Gender" as an independent prognostic factor for survival is, however, controversial. Goals: Determine the influence of gender on survival in HCC patients, and identify factors that may account for the difference. Methods: A retrospective analysis on a prospectively collected database in a 15-year period, from 1989 to 2003. Results: A total of 3,171 HCC patients were managed in our institution (946 with curative treatment, 1,388 with palliative treatment, and 837 with supportive treatment) and studied. Female patients (n = 520) were 4.3 years older (P = 0.000), had a lower proportion of smokers and drinkers (P = 0.000), and were less likely to be hepatitis B carriers (P = 0.000). There was no difference in Child-Pugh status, tumor size, and the use of different treatments between genders. The overall median survival was 25.7 months longer in females after curative treatment (73.6 vs. 47.9 months; P = 0.012). The survival benefit in female patients was observed in early-stage diseases and persisted when only hepatitis B surface antigen-positive patients were analyzed (96.4 vs. 47.9 months; P = 0.044). With multivariate analysis, gender, indocyanine green test value at 15 minutes, number of tumor nodules, size of tumor, major vascular invasion, invasion of adjacent organs, and tumor rupture were the independent variables for survival. More importantly, in female patients, history of using oral contraceptive was an independent factor with survival benefit (P = 0.004). Conclusion: Gender is an independent variable for survival after curative treatment of HCC. A survival benefit was observed in females. History of using oral contraceptive is associated with a better long-term survival in female patients. abstract_id: PUBMED:28579181 Estrogen Replacement Reduces Risk and Increases Survival Times of Women With Hepatocellular Carcinoma. Background & Aims: Environmental factors have been identified that affect risk of hepatocellular carcinoma (HCC), but little is known about the effects of sex hormones on liver cancer development or outcome. The authors investigated whether menopause hormone therapy (MHT) affects risk, age at onset, or outcome of HCC. Methods: We performed a case-control study of 234 female patients treated for HCC at a tertiary medical center and with 282 healthy women (controls) from January 1, 2004 through May 31, 2015. We collected detailed information on environmental exposures, ages of menarche and menopause, hysterectomies, and uses of birth control and MHT. We performed multivariable logistic and Cox regression analyses to determine the independent effects of factors associated with women on risk and clinical outcome in HCC. The primary outcomes were effect of MHT on HCC risk, the relationship between MHT with hepatitis virus infection on HCC development, and effect of MHT on age at HCC onset or survival after diagnosis of HCC. Results: The estimated adjusted odds ratio (AOR) for HCC in women who ever used estrogen was 0.53 (95% confidence interval [CI], 0.32-0.88). 
This association was supported by the older age of HCC onset among estrogen users (mean, 64.5 ± 0.9 years) vs nonusers (mean 59.2 ± 1.1 years; P = .001) and the reduced risk of HCC among long-term users (more than 5 years) (AOR, 0.36; 95% CI, 0.20-0.63). Users of estrogen also had a reduced risk for hepatitis-associated HCC: AOR for users, 4.37 (95% CI, 1.67-11.44) vs AOR for nonusers, 17.60 (95% CI, 3.88-79.83). Estrogen use reduced risk of death from HCC (hazard ratio, 0.55; 95% CI, 0.40-0.77; P = .01). Median overall survival times were 33.5 months for estrogen users (95% CI, 25.7-41.3 months) and 24.1 months for nonusers (95% CI, 19.02-29.30 months; P = .008). Conclusion: In a case-control study of women with HCC vs female control subjects at a single center, we associated use of estrogen MHT with reduced risk of HCC and increased overall survival times of patients with HCC. Further studies are needed to determine the benefits of estrogen therapy for women and patients with HCC, and effects of tumor expression of estrogen receptor. abstract_id: PUBMED:8382515 Female hormone utilisation and risk of hepatocellular carcinoma. The relationship between female hormone use and primary liver cancer was analysed using data from a case-control study conducted between 1984 and 1992 in Milan on 82 female incident cases with histologically or serologically confirmed hepatocellular carcinoma and 368 controls admitted to hospital for acute non-neoplastic, non-hormone-related diseases. An elevated relative risk (RR) of primary liver cancer was observed in oral contraceptive (OC) users (RR 2.6, for ever versus never users, 95% confidence interval, CI 1.0-7.0). The RR was directly related to duration of use (RR 1.5 for ≤5 years and 3.9 for >5 years) and persisted for longer than 10 years after stopping use (RR 4.3, 95% CI 1.0-18.2). The RRs were below unity, although not significantly, for women ever using oestrogen replacement therapy (RR 0.2, 95% CI 0.03-1.5) and female hormones for indications other than contraception and menopausal therapy (RR 0.4, 95% CI 0.1-1.5). The long-lasting association between risk of hepatocellular carcinoma and OC use has potential implications on a public health scale, since primary liver cancer is a relatively rare disease among young women, but much more common at older ages. This study provides limited but reassuring evidence on the possible relationship between oestrogen replacement treatment and subsequent risk of hepatocellular carcinoma. abstract_id: PUBMED:6207557 Hepatocellular carcinoma in the non-cirrhotic liver: a comparison with that complicating cirrhosis. The clinicopathological features of 50 patients with hepatocellular carcinoma arising in a non-cirrhotic liver are described and compared with those of 100 patients in whom the tumour arose as a complication of cirrhosis. The non-cirrhotic patients were significantly younger, more often female and had a less strong association with serum markers of hepatitis B virus infection. Liver function tests and serum AFP were less often abnormal and survival was significantly better than in the cirrhotic group. The different aetiological factors, clinical features and prognosis of hepatocellular carcinoma arising in the non-cirrhotic liver compared to the more common form of hepatocellular carcinoma which complicates cirrhosis justify detailed investigation by liver biopsy and other techniques. abstract_id: PUBMED:9363673 Etiology and clinical profile of hepatocellular carcinoma in Bangladesh.
Sixty-four consecutive subjects with hepatocellular carcinoma were prospectively studied in the department of Hepatology, IPGMR, Dhaka. Their mean age was 50.11 years. Fifty-two were male and 12 female. Cirrhosis was present in 12 (18.75%) subjects. Thirty subjects (46.88%) had HBsAg in their sera. Seven (58.33% of females) patients gave a history of use of oral contraceptives. Cirrhosis, HBV infection, male sex, middle age, and probably the use of oral contraceptives in females appeared to be important risk factors for development of HCC in Bangladesh. The majority of patients presented with upper abdominal pain, weight loss and anorexia. Hepatomegaly was invariably present. Alpha fetoprotein was significantly higher in cirrhotic HCC patients than in non-cirrhotic ones. Median survival was two months. None of the clinical or laboratory features predicted the prognosis. abstract_id: PUBMED:3021108 Spontaneous rupture of an hepatic adenoma in pregnancy with survival of mother and fetus. A case of spontaneous rupture of an hepatic adenoma during pregnancy is reported with survival of both mother and baby. The mother had previously taken the contraceptive pill for 9 years. Attention is drawn to the relationship between the contraceptive pill and the previously rare condition of benign liver adenoma, the need for prompt recognition, and for an intensive team effort in the management of rupture of this neoplasm. abstract_id: PUBMED:26058379 Hepatocellular adenoma with malignant transformation in male patients with non-cirrhotic livers. Introduction: Hepatocellular adenomas (HCAs), with a risk of malignant transformation into hepatocellular carcinoma (HCC), classically develop in young women who are taking oral contraceptives. It is now clear that HCAs may also occur in men. However, it is rarely reported that HCAs with malignant transformation occur in male patients with non-cirrhotic livers. This study aimed to characterize the malignancy of HCAs occurring in male patients. Methods: All patients with HCAs with malignant transformation who underwent hepatectomy at the Cancer Institute and Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College between January 1, 1999 and December 31, 2011 were enrolled in the study. The clinical characteristics as well as radiologic and pathologic data were reviewed. Results: HCAs with malignant transformation were observed in 5 male patients with non-cirrhotic livers, but not in female patients. The alpha-fetoprotein (AFP) levels were higher in patients with HCAs with malignant transformation than in patients with HCAs without malignant transformation. The diameters of the tumors with malignant transformation were larger than 5 cm in 3 cases and smaller than 5 cm in 2 cases. The 5 patients were all alive without recurrence by the end of the study period. The disease-free survival times of the 5 patients were 26, 48, 69, 69, and 92 months. Conclusion: Our results indicate that resection would be advised even if the presumptive diagnosis is adenoma smaller than 5 cm in diameter, especially in male patients. abstract_id: PUBMED:2456977 Fibrolamellar carcinoma of the liver: an immunohistochemical study of nineteen cases and a review of the literature. Hepatocellular carcinoma (HCC) is a rapidly fatal neoplasm of high worldwide prevalence. Fibrolamellar carcinoma (FLC), a variant of HCC, lacks the dismal prognosis of "ordinary" HCC (O-HCC) and is characterized by a diagnostic histologic appearance.
The current study analyzes the clinical characteristics, immunohistochemistry, and treatment of nineteen cases of FLC. These data, together with a detailed review of the literature, further characterize this unique variant. FLC affects younger patients and lacks the male predominance of O-HCC. Also, FLC lacks specific association with cirrhosis, hepatitis B virus infection, use of oral contraceptives, and alcohol abuse, all of which are implicated in other hepatic tumors. This, along with differences in serum tumor marker prevalence (AFP, B12 binding protein), suggests that its pathogenesis differs from that of O-HCC. Despite these differences, FLC shares a common differentiation with O-HCC. The increased amounts in FLC of stainable alpha-1-antitrypsin, fibrinogen, and C-reactive protein, all of which are acute phase reactants and normal hepatocyte products, imply better differentiation of FLC cells. Finally, the better prognosis of FLC is supported by this study, since only two of the 19 patients died because of tumor. This contrasts with the reported survival of patients with O-HCC, usually measured in weeks. Hepatic transplantation may hold promise for future patients with "surgically unresectable" FLC as procedure-related complications are overcome. abstract_id: PUBMED:3020682 Estrogen and progesterone receptors in human liver: does their study contribute to a better understanding of benign contraceptive-associated liver tumors? The estrogen and progesterone receptor content of liver cytosol was measured in female patients with focal nodular hyperplasia associated with oral contraceptive use and compared with the receptor content of non-tumorous liver and of primary hepatocellular carcinomas. Receptors were found in very low concentrations or were not measurable at all. In one case of focal nodular hyperplasia the estrogen receptor content of the tumor was higher than that in the adjacent normal liver. Malignant liver tumors and the male liver were characterized by a low or non-measurable receptor content. The study of nuclear receptors combined with the use of monoclonal antibodies may be more helpful in elucidating the complex relationship between oral contraceptive use, benign liver tumors and hepatic steroid receptors. abstract_id: PUBMED:212185 Role of exogenous female hormones in altering the risk of benign and malignant neoplasms in humans. The epidemiological and clinical evidence for various forms of exogenous estrogens altering the risk of neoplasms of the female genital system, breast, and liver is reviewed and evaluated. It is virtually certain that in utero exposure to diethylstilbestrol can cause clear cell adenocarcinomas of the vagina and cervix. There is strong evidence that various estrogens given for treatment of menopausal symptoms can cause endometrial carcinoma and that sequential oral contraceptives probably also do so. Oral contraceptives very probably reduce the risk of both cystic disease and fibroadenoma of the breast and increase the risk of liver cell adenomas. Studies to date do not provide consistent and convincing evidence that any form of exogenous estrogen alters the risk of cancers of the breast or ovary or that oral contraceptives alter the risk of cervical neoplasia or focal nodular hyperplasia of the liver, although recent reports suggest that continued vigilance is warranted. Specific topics requiring further epidemiological investigation are suggested.
Answer: The relationship between oral contraceptive pills (OCPs) and survival in female patients with hepatocellular carcinoma (HCC) is complex and has been explored in various studies with differing conclusions. Some studies suggest that there may be a positive association between OCP use and better survival outcomes in female HCC patients. For instance, a retrospective analysis on a prospectively collected database over a 15-year period found that female patients with HCC had a longer overall median survival after curative treatment compared to males, and that history of using oral contraceptive pills was an independent factor associated with better long-term survival in female patients (PUBMED:15942442). This suggests that gender is an independent variable for survival after curative treatment of HCC, and that OCP use may contribute to the observed survival benefit in females. Another study, a case-control study of 234 female patients treated for HCC, found that use of estrogen menopause hormone therapy (MHT) was associated with reduced risk of HCC and increased overall survival times of patients with HCC (PUBMED:28579181). This supports the idea that sex hormones, such as those found in OCPs and MHT, may influence liver cancer development and patient outcomes. However, it is important to note that other studies have reported conflicting results. A case-control study conducted in Milan reported an elevated relative risk of primary liver cancer in oral contraceptive users, with the risk being directly related to the duration of use (PUBMED:8382515). This indicates that OCP use may actually increase the risk of developing HCC. The evidence is not entirely consistent across all studies, and the relationship between OCP use and HCC survival may be influenced by various factors, including the presence of other risk factors such as hepatitis virus infection, cirrhosis, and other environmental exposures. Further research is needed to fully understand the benefits and risks of estrogen therapy for women and patients with HCC, as well as the effects of tumor expression of estrogen receptor (PUBMED:28579181). In conclusion, while some studies suggest that OCP use may be related to better survival in female patients with HCC, the evidence is not unequivocal, and the potential implications of OCP use on HCC risk and survival require further investigation.
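For readers less familiar with the effect measures quoted throughout this item (for example, the adjusted odds ratio of 0.53 with 95% CI 0.32-0.88 for estrogen users, or the relative risk of 2.6 with 95% CI 1.0-7.0 for oral contraceptive users), the sketch below shows how a crude odds ratio and its Wald-type 95% confidence interval are derived from a 2x2 exposure-outcome table. The counts are hypothetical and are not taken from any of the cited studies; the adjusted estimates in those papers come from multivariable regression rather than from this unadjusted calculation.

    import math

    # Hypothetical 2x2 case-control table (NOT data from the cited studies)
    a, b = 30, 170   # exposed cases, exposed controls
    c, d = 60, 140   # unexposed cases, unexposed controls

    odds_ratio = (a * d) / (b * c)

    # Wald 95% CI, computed on the log odds ratio scale
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
    upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

    print(f"OR = {odds_ratio:.2f}, 95% CI {lower:.2f}-{upper:.2f}")

An odds ratio below 1, as in this toy table, points in the same protective direction as the adjusted estimates reported for estrogen users, whereas the Milan study's estimates above 1 point toward increased risk; confounder adjustment in the original studies additionally requires logistic or Cox regression rather than this crude 2x2 arithmetic.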
Instruction: Foreign body impaction in the esophagus: are there underlying motor disorders? Abstracts: abstract_id: PUBMED:9595234 Foreign body impaction in the esophagus: are there underlying motor disorders? Unlabelled: We observed in our practice several cases of impaction with meat boluses without bony edges, in patients with patent esophageal lumen. The aim of this study was to search for eventual underlying motor disorders which could be responsible for this impaction. We included 19 patients who attended the endoscopy service for meat bolus impaction without organic esophageal stenosis. This group was compared with 18 control volunteers. Both groups underwent UGI series, UGI endoscopy and low-compliance perfusion standard esophageal manometry. Results: Compared with the control group, the impacted subjects presented marked reduction in amplitude and duration of esophageal contraction in the proximal esophagus. Conclusions: These motor disorders could be responsible for the foreign body impaction in the esophagus. However, we believe this patient group should be further studied by 24-hour esophageal manometry to reach a more accurate diagnosis by studying each patient's entire circadian cycle. abstract_id: PUBMED:29040492 Foreign body impaction in the esophagus: are there underlying motor disorders? Results: Compared with the control group, the impacted subjects presented marked reduction in amplitude and duration of esophageal contraction in the proximal esophagus. Conclusions: These motor disorders could be responsible for the foreign body impaction in the esophagus. However, we believe this patient group should be further studied by 24-hour esophageal manometry to reach a more accurate diagnosis by studying each patient's entire circadian cycle. abstract_id: PUBMED:10866085 Esophageal rings and strictures. Manometric characteristics in patients with food impaction. Purpose: The purpose was to investigate the manometric characteristics in patients with lower esophageal rings or strictures with special reference to food impaction. Material And Methods: The material comprised 344 patients (158 female and 186 male). Lower esophageal rings or strictures were diagnosed radiologically by the full column technique. Manometry was performed with triple lumen catheters connected to a hydraulic capillary system and external transducers. Results: Forty patients had rings, and 21 patients strictures. The reference group comprised 283 patients. Dysphagia was seen most frequently in patients with rings. Food impaction was seen only in patients with rings or strictures, whereas chest pain and heartburn appeared with the same incidence in all 3 groups. Non-specific motor disorders were seen most frequently in patients with strictures, but as delayed esophageal emptying with no influence on the tendency to food impaction. Only ring diameter, but not stricture diameter, was of any significance with a higher incidence of food impaction in patients with narrow rings. Conclusion: Radiology rather than manometry should be the first diagnostic step in patients with benign dysphagia suffering from food impaction. abstract_id: PUBMED:35445288 The distorted body: The perception of the relative proportions of the body is preserved in Parkinson's disease. Given humans' ubiquitous visual experience of their own body, one reasonable assumption is that one's perceptions of the lengths of their body parts should be accurate. 
However, recent research has shown that large systematic distortions of the length of body parts are present in healthy younger adults. These distortions appear to be linked to tactile sensitivity such that individuals overestimate the length of body parts of low tactile sensitivity to a greater extent than body parts of high tactile sensitivity. There are certain conditions featuring reduced tactile sensitivity, such as Parkinson's disease (PD) and healthy older ageing. However, the effect of these circumstances on individuals' perceptions of the lengths of their body parts remains unknown. In this study, participants visually estimated the length of their body parts using their hand as a metric. We show that despite the reductions in tactile sensitivity, and potential alterations in the cortical representation of body parts that may occur in PD and healthy older ageing, individuals with mild-moderate PD and older adults of comparable age experience body size distortions comparable to healthy younger controls. These findings demonstrate that the ability to perceive the length of one's body parts is well preserved in mild-moderate PD. abstract_id: PUBMED:18942900 Eosinophilic esophagitis in the adult--clinical, endoscopic, pH-metric, and manometric findings Objectives: Eosinophilic esophagitis (EE) is a condition characterized by dysphagia and frequent food impaction in young adults. The aim of our study was to evaluate the clinical aspects, endoscopic features, pH-metric and motility disorders in EE. Patients And Methods: Adult patients with EE were prospectively included. Endoscopy with biopsy, stationary esophageal manometry, and 24-hour pH-metry were performed. We analyzed the duration of disease, allergies, blood peripheral eosinophilia, prevalence of dysphagia, number of food impaction episodes, and complications during the endoscopic procedure. Results: Eleven male patients with a mean age of 35 years were followed. Endoscopy showed esophageal disorders in all cases: 5 esophageal felinizations, mucosal abnormalities in 4 cases, distal rings in 3 cases, and 2 esophageal stenoses. In two cases mucosal tearing during the endoscopic procedure was described. In 6 patients the manometric study showed motor disorders affecting the esophageal body, 5 of them displaying hypomotility. Two patients showed pathological gastroesophageal reflux during pH-monitoring. Blood peripheral eosinophilia was detected in 3 patients. Conclusion: Although endoscopic abnormalities are frequently found, they do not usually explain dysphagia and food impaction episodes in EE. Ineffective esophageal peristalsis is the most prevalent manometric disorder associated with this entity, although it is not clearly related to symptom worsening either. abstract_id: PUBMED:3568685 Esophageal dysfunction in a weanling thoroughbred. A 6-month-old Thoroughbred colt was examined because of persistent dysphagia noted since birth. Moderately severe regurgitation occurred when the colt ate semi-solid food or drank. Complete esophageal impaction developed when the colt ate solid material. Endoscopic examination revealed ulceration, dilatation and lack of peristalsis in the area of the previous impaction. Barium esophagram demonstrated the dilatation in the area of the previous impaction. Esophageal manometry revealed prolonged simultaneous contractions throughout the esophagus suggesting the presence of a motor abnormality.
The colt was maintained on a slurry of complete pelleted feed, but the esophageal dysfunction persisted until euthanasia at 17 months of age. Gross post mortem and histologic examinations showed no abnormalities in the muscularis mucosa, myenteric plexus, vagus nerve, or brain stem. The history, signs, and manometric findings suggested esophageal dysfunction in this colt. Motor disorders of the esophagus should be considered in horses with persistent dysphagia or recurrent episodes of choke and esophageal manometry can help characterize these disorders. abstract_id: PUBMED:35426475 Hypothalamic Atrophy, Expanded CAG Repeat, and Low Body Mass Index in Spinocerebellar Ataxia Type 3. Background: Spinocerebellar ataxia type 3 (SCA3) is an inherited motor disorder that is characterized by low body mass index (BMI). Considering the role of the hypothalamus in regulating appetitive behaviors and metabolism, low BMI may result from hypothalamic degeneration. Objectives: To examine hypothalamic volume changes in SCA3 by comparing patients and matched healthy controls and to identify potential mediating effects of hypothalamic pathology on CAG repeats for BMI. Methods: Magnetic resonance imaging datasets of hypothalamic volumes from 41 SCA3 patients and 49 matched controls were analyzed. Relationships among CAG repeat number, hypothalamic volumes, and BMI were assessed using correlation and mediation analyses. Results: SCA3 patients exhibited significant hypothalamic atrophy. Tubular hypothalamic volume was significantly associated with BMI. Mediation analysis revealed an indirect effect of CAG repeat number on BMI via tubular hypothalamic atrophy. Conclusions: Low BMI in SCA3 is related to neurodegeneration within the tubular hypothalamus, providing a potential target for energy-based treatment. abstract_id: PUBMED:19517247 Efficacy and safety of memantine in Lewy body dementia. Lewy body dementia (LBD) is a progressive brain disease manifest as dementia and parkinsonism, along with psychotic and autonomic disorders. Although studies in recent years have demonstrated the positive effects of cholinesterase inhibitors in LBD, the search for therapeutic agents with other mechanisms of action remains relevant. An open, controlled, 16-week study was performed with the aim of evaluating the efficacy and safety of memantine in patients with clinically diagnosed LBD (criteria of McKeith et al., 1999). The study included 23 patients (mean age 69.2 +/- 5.9 years), who were divided into two groups: 14 patients received memantine at a dose of 20 mg/day and nine patients constituted the control group. Efficacy was evaluated using a battery of quantitative neuropsychological tests, clinical scales for assessment of fluctuations in mental states, scales for assessment of behavioral and psychotic disorders, and the general clinical impression scale. The results demonstrated that memantine had positive effects on the patients' general status and cognitive functions (increases on the mini mental state examination by 1.5 points), mainly because of improvements in attention and control functions. There were also reductions in the severity of fluctuations in mental state, aggressivity, lack of spontaneity, and disinhibition. The severity of psychotic and motor disorders did not change significantly. Tolerance of the agent was good, only two patients withdrawing from the study because of episodes of confusion during the dose titration period.
abstract_id: PUBMED:18061217 Huntington's disease impairs recognition of angry and instrumental body language. Patients with Huntington's disease (HD) exhibit motor impairments as well as cognitive and emotional deficits. So far impairments in the ability to recognize emotional stimuli have mostly been investigated by using facial expressions and emotional voices. Other important emotional signals are provided by the whole body. To investigate the impact of motor deficits on body recognition and the relation between motor disorders and emotion perception deficits, we tested recognition of emotional body language (instrumental, angry, fearful and sad) in 19 HD patients and their matched controls with a nonverbal whole body expression matching task. Results indicate that HD patients are impaired in recognizing both instrumental and angry whole body postures. Furthermore, the body language perception deficits are correlated with measures of motor deficit. Taken together the results suggest a close relationship between emotion recognition (specifically anger) and motor abilities. abstract_id: PUBMED:29912447 Body Structure, Function, Activity, and Participation in 3- to 6-Year-Old Children Born Very Preterm: An ICF-Based Systematic Review and Meta-Analysis. Background: The World Health Organization's International Classification of Functioning, Disability, and Health framework, Children and Youth Version (ICF-CY), provides a valuable method of conceptualizing the multidomain difficulties experienced by children born very preterm (VP). Reviews investigating motor outcomes at preschool age across ICF-CY domains are lacking. Purpose: The purpose of this review is to identify and compare motor outcomes of 3- to 6-year-old children born VP and children born full-term (FT) within the ICF-CY framework. Data Sources: Four electronic databases and reference lists of included and key articles were searched. Study Selection: Studies comparing motor outcomes of 3- to 6-year-old children born VP (<32 weeks' gestation or birth weight <1500 g) with peers born FT were included. Data Extraction: Two independent authors extracted data and completed quality assessments. Data Synthesis: Thirty-six studies were included. Activity motor performance of children born VP was consistently poorer compared with peers born FT: standardized mean difference (SMD) was -0.71 (95% CI = -0.80 to -0.61; 14 studies, 2056 participants). Furthermore, children born VP had higher relative risk (RR) of motor impairment (RR = 3.39; 95% CI = 2.68 to 4.27; 9 studies, 3466 participants). Body structure and function outcomes were largely unable to be pooled because assessment tools varied too widely. However, children born VP had higher RR of any neurological dysfunction (Touwen Neurological Examination) (RR = 4.55; 95% CI = 1.20 to 17.17; 3 studies, 1363 participants). There were no participation outcome data. Limitations: Limitations include the lack of consistent assessment tools used in VP follow-up at preschool age and the quality of the evidence. Conclusions: Children born VP experience significant motor impairment across ICF-CY activity and body structure and function domains at preschool age compared with peers born FT. Evidence investigating participation in VP preschool-age populations relative to children born at term is sparse, requiring further research. Answer: Yes, there are indications that underlying motor disorders may be responsible for foreign body impaction in the esophagus.
Studies have found that patients who experienced meat bolus impaction without organic esophageal stenosis presented with a marked reduction in amplitude and duration of esophageal contraction in the proximal esophagus when compared to control groups. These motor disorders could potentially be responsible for the impaction of foreign bodies in the esophagus (PUBMED:9595234, PUBMED:29040492). Additionally, manometric studies in patients with lower esophageal rings or strictures, which are conditions associated with food impaction, have shown that non-specific motor disorders were most frequently seen in patients with strictures. However, these motor disorders were characterized as delayed esophageal emptying and did not influence the tendency to food impaction. It was noted that only the diameter of the rings, not the strictures, was significant in the incidence of food impaction (PUBMED:10866085). In the context of eosinophilic esophagitis (EE), a condition characterized by dysphagia and frequent food impaction, manometric studies have shown that ineffective esophageal peristalsis is the most prevalent motor disorder associated with this entity. However, the relationship between these motor disorders and symptom worsening is not clear (PUBMED:18942900). These findings suggest that while motor disorders are associated with foreign body impaction in the esophagus, the exact mechanisms and their impact on the risk of impaction may vary depending on the specific condition and individual patient characteristics. Further studies, such as 24-hour esophageal manometry, are recommended to reach a more accurate diagnosis by studying each patient's entire circadian cycle (PUBMED:9595234, PUBMED:29040492).
Instruction: Can large relative mortality differences between socio-economic groups among Swedish men be explained by risk indicator-associated social mobility? Abstracts: abstract_id: PUBMED:16051656 Can large relative mortality differences between socio-economic groups among Swedish men be explained by risk indicator-associated social mobility? Background: The Nordic countries, profiled as welfare states, are shown to have comparatively large relative socio-economic differences in mortality and comparatively high intergenerational mobility. The aim of this study was to analyse the role of risk indicator-associated social mobility (from childhood through to adulthood) in socio-economic mortality differences among Swedish men aged 35-50 years. Methods: We used data on risk indicators for adult mortality (risk use of alcohol, smoking, low emotional control, psychiatric diagnosis, medication for nervous problems, contact with police and child care, experience of unemployment, low body height, low education) collected at compulsory conscription for military training among Swedish men at ages 18-20, fathers' socio-economic status at subjects' ages 9-11 years, data on subjects' socio-economic status at ages 34-36 years, and follow-up data on mortality during 1986-1999 (at ages 35-50 years). Results: Persons in manual occupations in 1985 showed an elevated relative risk (RR) of mortality compared with stable non-manual employees regardless of the social position of their father (RR 1.75 among stable manual workers, and RR 1.74 among the downwardly mobile). In multivariate analyses, taking into account the risk indicators first operating in late adolescence, the increased mortality risk among stable manual workers and also among the downwardly mobile diminished considerably (RR 1.32 and 1.39, respectively). Conclusions: These results suggest that a substantial part of the socio-economic differences in mortality among middle-aged men had their origin in childhood circumstances. Risk indicator-associated social mobility was found to contribute substantially to an increase in the relative difference in mortality between male manual workers and non-manual employees. abstract_id: PUBMED:24068547 Socio-economic inequalities in all-cause mortality in Europe: an exploration of the role of heightened social mobility. The larger than expected socio-economic inequalities in health in more egalitarian countries might be explained by a heightened social mobility in these countries. Therefore, the aim of this explorative study was to examine the associations between country-level social mobility, income inequality and socio-economic differences in all-cause mortality, using country-level secondary data from 12 European countries. Both income equality and social mobility were found to be associated with larger socio-economic differences in mortality, particularly in women. These findings suggest that social mobility and income equality, besides their shiny side of improving population health, might have a shady side of increasing socio-economic health inequalities. abstract_id: PUBMED:22444320 Social and health-related correlates of intergenerational and intragenerational social mobility among Swedish men and women. Objective: To explore the pattern and determinants of inter- and intragenerational occupational mobility among Swedish men and women. Study Design: A Swedish 14-year prospective longitudinal study (response rate 96.5%).
Methods: Detailed information on 546 men and 495 women regarding their occupation, health status, health-related behaviour, psychosocial environment at home and school, material resources and ethnicity prior to mobility was available at 16, 21 and 30 years of age. Odds ratios and 99% confidence intervals were calculated using logistic regression to determine social mobility. Results: The results indicated that being popular at school predicted upward mobility, and being less popular at school predicted downward mobility. Additionally, material deprivation, economic deprivation, shorter height (women) and poor health behavioural factors predicted downward mobility. Among this cohort, being less popular at school was more common among subjects whose parents had low socio-economic status. Occupational mobility was not influenced by ethnic background. Conclusions: Apart from height (women), health status was not associated with mobility for men or women either inter- or intragenerationally. Unfavourable school environment was a consistent predictor of mobility for both genders. The results indicate that schools should be used as a setting for interventions aimed at reducing socio-economic health inequities. Targeted school interventions that are designed to assist higher educational attainment of socio-economically disadvantaged youth would help to break the social chain of risk experienced during this time, and thereby alter their life course in ways that would reduce subsequent social inequities in health and well-being. abstract_id: PUBMED:33421109 Family socio-economic status and children's executive function: The moderating effects of parental subjective socio-economic status and children's subjective social mobility. Family socio-economic status (SES) is significantly related to disparities in children's executive function. Children from economically disadvantaged backgrounds perform worse on executive function tasks than their peers from high-SES families. The protective factors in the relationship between SES and executive function have not been sufficiently investigated, especially from the perspective of parents' and children's perceptions and expectations regarding SES. Therefore, this study aimed to examine whether parental subjective SES and children's subjective social mobility separately moderated the relationship between family SES and children's executive function among 885 participants aged 9-13 years. The results showed that family SES was positively related to the three components of executive function (cognitive flexibility, inhibitory control, and working memory). Moreover, the relationship between SES and cognitive flexibility was weak among the children with a high level of subjective social mobility or those whose parents had high levels of subjective SES. Among children from families with economic hardship, subjective social mobility is a potential protective factor mitigating the negative effects of low family SES on their cognitive flexibility. abstract_id: PUBMED:25947541 Women's relative immunity to the socio-economic health gradient: artifact or real? Background: Individual and area socio-economic status (SES) are significant predictors of morbidity and mortality in developed and developing countries. However, the span in health from poorest to richest, that is, the socio-economic gradient, appears steeper for men than women. Objective: Our aim is to understand women's apparent immunity to the health harms of the SES gradient.
Design: Findings from a non-systematic search of Medline for population-based, SES gradient studies reporting results for both men and women and with health outcomes of morbidity, mortality or self-rated health (SRH) were reflectively analyzed. Results: The 36 papers reviewed generally showed women to be relatively immune to the SES gradient for all but cardiovascular health outcomes. However, addressing the interconnected nature of socio-economic circumstances, exploring whether some measures of SES had ambiguous meanings for either women or men, including modifiers of SES such as household circumstances, social capital or area gender equity, or using indicators of area SES that were contextual rather than aggregates of individual, compositional measures increased the SES gradient for women. Outcome measures that combined mental and physical health, accounted for gender differences in SRH and adjusted for sex-specific differences in causes of mortality also explained some of the observed amelioration of the SES gradient among women. Conclusions: Socio-economic circumstances have a real and sustained impact on individual health. The SES gradient appears stronger for men than for women for all health outcomes other than heart disease. However, some of the observed variability between men and women may be an artifact of biased methodology. Considering webs of causation rather than individual markers of SES along with other sources of gender bias can explain much of women's blunted socio-economic gradient and deepen understanding of the pathways from SES to morbidity and mortality overall. abstract_id: PUBMED:9868752 Socioeconomic differences in health indices, social networks and mortality among Swedish men. A study of men born in 1933. Background: In a previous survey we found large socioeconomic differences in mortality among urban Swedish men which remained unexplained after controlling for smoking and standard coronary risk factors. The present analysis was undertaken in order to investigate a broader set of possible explanatory factors in another cohort of Swedish men. Study Population And Methods: Occupation was coded into five occupational classes for 717 of 776 participant men from a random population sample of 1016 men who were born in 1933. All were living in Göteborg and were 50 years old at the baseline examination. After 12 years' follow-up, 68 of the 717 men had died (9.5%). Results: Low occupational class was associated with a higher prevalence of smoking at baseline, but no association was found with systolic blood pressure, body mass index, waist to hip ratio, serum triglycerides or serum cholesterol. Subjects from higher socioeconomic strata were taller, had higher maximum peak respiratory flow, lower plasma fibrinogen and lower body temperature. Low occupational class was associated with low social integration, low home activity levels, low levels of activity outside home and low social activity levels (p = 0.001 for all) and with low emotional support (p = 0.018). There were also associations between low occupational class and poor self-perceived health, as well as with several cardiovascular symptoms. During 12 years' follow-up, there was a graded and inverse relationship between occupational class and mortality from all causes. The highest mortality was found among the men who could not be classified (23 per 1,000 person years). Of the men in the lowest occupational class, 12 per 1,000 died, compared to 3 per 1,000 in the highest class (relative risk 3.7 (1.4-9.8)).
After controlling for smoking, the relative risk decreased to 3.2 (1.2-8.6) and after further adjustment for emotional support, self-perceived health, activity level at home, and peak expiratory flow, the relative risk was still twofold but not significantly so (RR 2.1 (0.8-5.8)). Conclusion: We were able to confirm earlier results as to the wide mortality differentials in urban middle-aged men in Sweden. There were also large differences in several other factors, including constitutional factors, health variables, lifestyle and social support indices, which explained important parts of the social mortality gradient, the most prominent being smoking, respiratory function, social network factors and subjective health. abstract_id: PUBMED:29368946 Socio-economic predictors of depressive symptoms around old age retirement in Swedish women and men. Objectives: To estimate trajectories of depression around old age retirement in Swedish women and men and examine if socio-economic status predicted the trajectories. Methods: The analytic sample comprised 907 women and 806 men from the Swedish Longitudinal Occupational Survey of Health. B-spline smoothers and group-based trajectory modelling were used to identify groups of individuals with similar trajectories of depressive symptoms around retirement. Multinomial regression analyses were conducted to investigate if socio-economic factors were associated with odds of belonging to trajectory groups with higher depression scores. Results: Four depressive symptoms trajectories were identified in both genders, all showing similar symptom levels across the retirement transition. Low levels of depressive symptoms were observed in the three largest groups. In the last trajectory group among women (2.5%) depression scores were moderate to severe and among men (3.3%) depression scores were persistently moderate. Higher educational level and lower subjectively rated social status were associated with higher odds of belonging to trajectory groups with higher levels of depressive symptoms in both genders. Conclusion: Retirement transition was not associated with symptoms of depression. Higher educational level and lower subjective social status may predict higher depressive symptom levels in the years around old age retirement. abstract_id: PUBMED:29349235 Social pathways to health: On the mediating role of the social network in the relation between socio-economic position and health. Good health is one of the key qualities of life, but opportunities to be and remain healthy are unequally distributed across socio-economic groups. The beneficial health effects of the social network are well known. However, research on the social network as potential mediator in the pathway from socio-economic position (SEP) to health is scarce, while there are good reasons to expect a socio-economical patterning of networks. We aim to contribute to our understanding of socio-economic inequalities in health by examining the mediating role of structural and functional characteristics of the social network in the SEP-health relationship. Data were from the second wave of the Norwegian Life Course, Ageing and Generation study (NorLAG) and comprised 4534 men and 4690 women aged between 40 and 81. We applied multiple mediation models to evaluate the relative importance of each network characteristic, and multiple group analysis to examine differences between middle-aged and older men and women.
Our results indicated a clear socio-economical patterning of the social network for men and women. People with higher SEP had social networks that better protect against loneliness, which in turn lead to better health outcomes. The explained variance in health in older people by the social network and SEP was only half of the explained variance observed in middle-aged people, suggesting that other factors than SEP were more important for health when people age. We conclude that it is the function of the network, rather than the structure, that counts for health. abstract_id: PUBMED:33730966 The gendered widowhood effect and social mortality gap. With few exceptions, greater disparities in mortality risk by socio-economic status (SES) have been found among men than among women. Most research has also shown that the higher mortality risk after widowhood (the widowhood effect) is greater for men. However, a different picture appears when examining these associations jointly. Based on Swedish register data, this study shows that widowhood weakens, or even reverses, the sex differences in socio-economic disparities in mortality. The overall findings also indicate that higher SES elevates the widowhood effect for men but diminishes it for women, and that the widowhood effect is greater for women than men in the lowest SES categories. These results imply that men with higher SES are more vulnerable after widowhood, perhaps because of their previous relatively privileged situation. The disadvantage of widows in lower SES categories may reflect exposure to financial strains after spousal loss and inequalities in the healthcare system. abstract_id: PUBMED:27280020 The geographic pattern of Belgian mortality: can socio-economic characteristics explain area differences? Background: Country averages for health outcomes hide important within-country variations. This paper probes into the geographic Belgian pattern of all-cause mortality and wishes to investigate the contribution of individual and area socio-economic characteristics to geographic mortality differences in men aged 45-64 during the period 2001-2011. Methods: Data originate from a linkage between the Belgian census of 2001 and register data on mortality and emigration during the period 2001-2011. Mortality rate ratios (MRRs) are estimated for districts and sub-districts compared to the Belgian average mortality level using Poisson regression modelling. Individual socio-economic position (SEP) indicators are added to examine the impact of these characteristics on the observed geographic pattern. In order to scrutinize the contribution of area-level socio-economic characteristics, random intercepts Poisson modelling is performed with predictors at the individual and the sub-district level. Random intercepts and slopes models are fitted to explore variability of individual-level SEP effects. Results: All-cause MRRs for middle-aged Belgian men are higher in the geographic areas of the Walloon region and the Brussels-Capital Region (BCR) compared to those in the Flemish region. The highest MRRs are observed in the inner city of the BCR and in several Walloon cities. Their disadvantage can partially be explained by the lower individual SEP of men living in these areas. Similarly, the relatively low MRRs observed in the districts of Halle-Vilvoorde, Arlon and Virton can be related to the higher individual SEP. 
Among the area-level characteristics, both the percentage of men employed and the percentage of labourers in a sub-district have a protective effect on the individual MRR, regardless of individual SEP. Variability in individual-level SEP effects is limited. Conclusions: Individual SEP partly explains the observed mortality gap in Belgium for some areas. The percentage of men employed and the percentage of labourers in a sub-district have an additional effect on the individual MRR aside from that of individual SEP. However, these socio-economic factors cannot explain all of the observed differences. Other mechanisms such as public health policy, cultural habits and environmental influences contribute to the observed geographic pattern in all-cause mortality among middle-aged men. Answer: The study by Hallqvist et al. (PUBMED:16051656) suggests that a substantial part of the socio-economic differences in mortality among middle-aged Swedish men can be attributed to risk indicator-associated social mobility, which has its origins in childhood circumstances. The study found that manual workers and those who were downwardly mobile had an elevated relative risk of mortality compared to stable non-manual employees. However, when taking into account risk indicators from late adolescence, the increased mortality risk among these groups diminished considerably, indicating that social mobility associated with these risk indicators contributes significantly to the relative difference in mortality between socio-economic groups. This finding is consistent with the broader European context, where socio-economic inequalities in health are associated with heightened social mobility, as explored by Mackenbach et al. (PUBMED:24068547). They found that both income equality and social mobility were associated with larger socio-economic differences in mortality, particularly among women, suggesting that while social mobility and income equality can improve population health, they may also increase socio-economic health inequalities. Furthermore, the study by Almquist et al. (PUBMED:22444320) indicates that factors such as popularity at school and material deprivation can predict social mobility, which in turn may influence socio-economic health disparities. The study suggests that interventions aimed at reducing socio-economic health inequities should target the school environment to assist socio-economically disadvantaged youth. In summary, the evidence suggests that risk indicator-associated social mobility plays a significant role in explaining the large relative mortality differences between socio-economic groups among Swedish men. Interventions targeting early life circumstances and school environments may be effective in reducing these disparities (PUBMED:16051656; PUBMED:24068547; PUBMED:22444320).
Instruction: Is the follow-up of small renal angiomyolipomas a necessary precaution? Abstracts: abstract_id: PUBMED:24837696 Is the follow-up of small renal angiomyolipomas a necessary precaution? Aim: To investigate the natural history and rationalize follow-up of renal angiomyolipomas (AMLs). Materials And Methods: A prospectively gathered radiology database was scrutinized to identify patients with renal AMLs over a 3 year period (January 2006 to December 2008). Radiological investigations were examined to identify those AMLs exhibiting change during surveillance. Results: A total of 135 patients were identified. Mean age at first detection was 49.6 years and patients were followed up for a median 21.8 months (6-85.3 months). Small AMLs (≤20 mm) were less likely to grow than their larger counterparts [odds ratio 13.3, confidence interval (95% CI) 1.4-123.9, p = 0.02] and exhibited a slower growth rate (0.7 versus 9.2 mm/year). Patients with AMLs that increased in size were significantly younger (median age 43 versus 52 years, p < 0.001). Multiple AMLs or those associated with genetic conditions grew at a significantly greater rate (3 versus 0.1 mm/year, p < 0.001). AMLs with a large extra-renal component are less reliably measured on ultrasound (median error 7 versus 1 mm, p < 0.001). Conclusion: This is the first study with the primary purpose to investigate growth of small AMLs (≤20 mm). Small, solitary AMLs (≤20 mm) do not require follow-up due to their low probability of growth. Patients with multiple AMLs and younger patients require closer monitoring due to their comparatively greater AML growth rate. Ultrasound-detected AMLs with an extra-renal component may require computed tomography (CT) to confirm their size. abstract_id: PUBMED:6603773 Small angiomyolipoma of the kidney: sonographic-CT evaluation. Small fat-containing intraparenchymal renal lesions, 0.5 to 1.3 cm in diameter, were detected by sonography and computed tomography in six patients. In five of the six cases, the lesions were found serendipitously. None of the patients underwent surgery. Follow-up studies, performed over a period of 6-24 months, showed no change in the size of the lesions. These small fat-containing nodules, hitherto unrecognized by imaging studies, are not an uncommon finding at routine autopsy. Since these lesions do not show growth in the adult population, surgical intervention does not seem to be indicated. abstract_id: PUBMED:7660504 Small echogenic renal masses: how often is computed tomography used to confirm the sonographic suspicion of angiomyolipoma? Objectives: Although renal angiomyolipoma (AML) has a typical ultrasound appearance, many authorities suggest that a computed tomography (CT) scan be obtained to confirm the diagnosis because small echogenic renal cell carcinomas can simulate AML. Our study evaluates the actual follow-up in such patients and factors that may affect whether CT confirmation is recommended or obtained. Methods: From 1986 through 1992, 36 patients had an ultrasound diagnosis of probable renal AML (well-circumscribed, homogenously echogenic mass). In each case the patient's age, symptoms, ultrasound results and recommendations, and imaging follow-up were recorded. Results: CT confirmation of the sonographic diagnosis was recommended in only 11 of 36 (31%) patients. Only 7 of these 11 patients actually underwent CT, 5 of whom had the diagnosis confirmed by CT detection of intratumoral fat.
Ten of 23 patients (43%) over 50 years of age had CT recommended, whereas only 1 of 13 (8%) patients under age 50 years did (P < 0.05). CT confirmation was recommended for 5 of 13 (38%) lesions greater than 10 mm and for 6 of 23 (26%) smaller masses. None of the 9 patients under age 50 years with small masses (less than 10 mm) had CT recommended. Conclusions: Although many authorities recommend CT to confirm the sonographic diagnosis of renal AML, this algorithm is rarely followed in everyday clinical practice, especially in patients under age 50 years with masses less than 10 mm. abstract_id: PUBMED:23392196 Patterns of renal angiomyolipoma regression post embolisation on medium- to long-term follow-up. Objective: To assess the patterns of regression of renal angiomyolipoma (AML) post embolisation and report the outcomes related to the use of different embolic materials. Methods: A retrospective review of all patients who underwent embolisation for renal AML at our institution between January 2004 and April 2012. Results: 13 patients underwent 16 episodes of embolisation. Coils were used as the primary embolisation material in 10 episodes and microspheres in 6 episodes. The size reduction rate highly correlated on CT follow-up between the two groups, with 25.6% vs 22.7% reduction at 12 months, 27.5% vs 25.1% at 24 months, 35.0% vs 33.0% at 36 months and 35.0% vs 36.8% at 48 months. During follow-up, all tumours reduced in size with one patient requiring subsequent embolisation whose tumour reduced by only 6.5% after 1 year and subsequently exhibited regrowth after 4 years. Two patients presented with rebleeding and underwent repeat embolisation. Our overall retreatment rate (23%) is well within the literature range (up to 37%). None of the patients underwent surgery. Conclusion: The majority of AML shrinkage occurs within the first year following embolisation and appears to plateau after 3 years, which could have an impact on follow-up strategy. The percentage reduction at 1 year may reflect the long-term effect of embolisation with tumours demonstrating minor size reduction more likely to relapse at long-term follow-up. Embolisation of renal AML produces durable long-term results regardless of the choice of embolic agent. Advances In Knowledge: These findings provide information to guide CT follow-up of renal AML post embolisation. abstract_id: PUBMED:3759077 Angiomyolipoma with regional lymph node involvement and long-term follow-up study. The first long-term follow-up study of a patient who had renal angiomyolipoma with regional lymph node involvement is reported. The absence of recurrence after 15 years favors the view that regional lymph node involvement represents a multicentric hamartomatous change and is not a metastasis from a malignant renal tumor. abstract_id: PUBMED:9205224 Renal angiomyolipoma: selective arterial embolization--effectiveness and changes in angiomyogenic components in long-term follow-up. Purpose: To evaluate the efficacy of selective arterial embolization in symptomatic renal angiomyolipoma (AML) and the change in angiomyogenic components during long-term follow-up after embolization. Materials And Methods: Fourteen adult patients with symptomatic AMLs underwent 16 selective arterial embolizations. The embolic materials used were absolute alcohol with (n = 5) or without (n = 3) iodized oil, Gianturco coils (n = 4), and polyvinyl alcohol foam powder with gelatin sponge (n = 2).
Follow-up ultrasonography and computed tomography (CT) were performed in six and 14 patients, respectively. The effectiveness of selective arterial embolization was evaluated on the basis of the area of the angiomyogenic components in the AML on initial and follow-up images and clinical improvement. Results: All patients showed devascularization of the tumor on the postembolization angiograms. In 13 patients, clinical symptoms disappeared. The follow-up period was 7-72 months (mean, 33 months). One patient underwent nephrectomy at 7 months after embolization because of a large cystic lesion found at 1 month. In long-term CT follow-up (≥12 months) in 12 patients, nearly all angiomyogenic components disappeared, but fatty components partially shrank with liquefactive necrosis in tumors. Conclusion: Selective arterial embolization is an effective and safe treatment of AML. The angiomyomatous components crucial for the prevention of bleeding were very sensitive to the embolization. abstract_id: PUBMED:31618658 Selective Arterial Embolization for Large or Symptomatic Renal Angiomyolipoma: 10 Years of Follow-up. Objective: To assess long-term outcome after selective arterial embolization (SAE) as first-line treatment for large or symptomatic AML. Design, Setting, And Participants: Data from a prospectively maintained database on 71 patients who underwent SAE for large or symptomatic AML were reviewed. Patients with sporadic and tuberous-sclerosis-complex (TSC) were included. Outcome Measurements: The main endpoints were re-embolization rates, occurrence of clinical events related to AML, size of AML, and renal function. Results: Thirteen (19.1%) patients reported at least 1 major clinical event. Major complications affected 2 patients (2.9%), both ending in complete loss of renal unit function. Four renal units (5.9%) were eventually treated surgically. The re-embolization rate was 41.1%, with an average time from the initial to a repeat SAE of 2.18 years (range 0.31-10.65 years). The size of the tumor prior to SAE and after 5 and 10 years of follow-up were 8.9 cm (7-12), 6.5 cm (4-7.5), 7 cm (4-7.8), respectively [median (IQR)]. These results are translated to a size reduction of 27% in 10 years follow-up. Patients with TSC had larger tumors on long-term follow-up (77.8 vs 41.3 mm, P = .045). The long-term follow-up estimated average glomerular filtration rate was 81.97 (range 26-196). No patient needed renal replacement therapy, and disease-specific survival was 100%. Conclusions: SAE is a safe treatment option for patients with symptomatic or large AML. It represents a minimally invasive intervention with good long-term outcome. SAE may be offered as first-line treatment in most cases, though, it is associated with high retreatment rates. abstract_id: PUBMED:10478660 Renal angiomyolipoma: a clinicopathologic, immunohistochemical, and follow-up study of 46 cases. We reviewed 46 cases of renal angiomyolipoma covering the period from 1977 to 1997. Eight cases were associated with tuberous sclerosis and one with lymphangiomyomatosis. Histologically, the lesions were most often classic with the three usual components, i.e., mature adipose tissue, thick-walled blood vessels, and smooth muscle.
Seven cases were particularly misleading: three cases were entirely adipose mimicking liposarcoma: two cases had an exclusively smooth-muscle component, one mimicking lymphangiomyomatosis and one with epithelioid cells; another case had a monophasic epithelioid pleomorphic component ("REON": renal epithelioid oxyphilic neoplasm) and proved to be fatal; and another case was associated with collecting duct carcinoma. The immunohistochemical profile showed the coexpression of alpha-smooth-muscle actin and HMB45. Our study is the first to show positivity of estrogen and progesterone receptors or both in more than 25% of cases. Of 35 cases with follow-up information, only one patient died of malignant spread of angiomyolipoma. abstract_id: PUBMED:29439672 Renal manifestations of tuberous sclerosis complex: patients' and parents' knowledge and routines for renal follow-up - a questionnaire study. Background: Renal angiomyolipomas (AMLs) are a major clinical feature in patients with tuberous sclerosis complex (TSC). Spontaneous bleeding can be life threatening, and appropriate information and proper surveillance and management are important to limit morbidity and mortality. Because TSC is a rare disease, patients are at risk of suboptimal medical management. Our aim was to investigate patients' and parents' knowledge about renal angiomyolipomas (AMLs) in Tuberous Sclerosis Complex (TSC) and to identify current routines for renal follow-up. Methods: A questionnaire survey was initiated by the French Reference Centre on TSC. It was distributed in France through university hospitals and the patients' association (2009-2011), and to patients registered by the Norwegian National Centre for Rare Epilepsy-Related Disorders (2013-2014). Contingency tables with Chi-Square test for independence (with Yates Continuity Correction) and Pearson-Chi-Square value were used for correlation statistics. Results: We included 357 patients (France, n=257; Norway n=100). Most participants knew that TSC is associated with AMLs. However, 42 % did not know about the risk of AML-related bleeding, and 37 % had been informed about the risk of bleeding only after the age of 15 years. Furthermore, 14 % did not know whether they themselves or their child had AMLs. Patients had less knowledge than parents. Medical consultations and patient associations were the main sources of information. Among 30 % of patients, renal imaging was not received at all, or not conducted every 1-3 years, as recommended by current guidelines. Regular imaging was more frequent in patients with AMLs < 15 years, than in patients with AMLs ≥ 15 years. Ultrasound was the most frequently used imaging modality. Conclusions: Knowledge of renal AML in TSC patients and their parents was lower than expected, and follow-up by renal imaging was suboptimal for a substantial proportion of patients. Patients and parents should be informed about the risk and symptoms of renal bleeding, at the latest when the patient is 15 years. Monitoring the growth of AMLs should be standardized to comply with guidelines. Transition between adolescence and adulthood is a high-risk period and ensuring appropriate follow-up at this time is particularly important. abstract_id: PUBMED:2271813 Angiomyolipoma of the kidney--value of imaging procedures The angiomyolipoma of the kidney is composed of blood vessels, smooth muscle tissue, and fat. It is important to differentiate this benign tumor from a malignant one.
Two cases demonstrate the diagnostic value of the different imaging procedures. In 'typical' cases computed tomography shows the density of fat; histological examination is not necessary and follow-up control with ultrasonography is sufficient. In other cases with a low fat content, histological examination cannot be omitted. Angiography is not helpful for the differentiation from a malignant tumor. Answer: The follow-up of small renal angiomyolipomas (AMLs) is a topic of some debate. According to the study by Flum et al. (PUBMED:24837696), small, solitary AMLs (≤20 mm) may not require follow-up due to their low probability of growth. This study found that small AMLs were less likely to grow and exhibited a slower growth rate compared to larger AMLs. It also noted that patients with multiple AMLs and younger patients required closer monitoring due to a comparatively greater AML growth rate. Additionally, the study suggested that ultrasound-detected AMLs with an extra-renal component may require computed tomography (CT) to confirm their size. Supporting this view, another study (PUBMED:6603773) found that small fat-containing renal lesions detected serendipitously showed no change in size over a follow-up period of 6-24 months, suggesting that surgical intervention may not be indicated for these small lesions. However, the study by Silverman et al. (PUBMED:7660504) indicated that while many authorities recommend CT to confirm the sonographic diagnosis of renal AML, this practice is rarely followed in clinical practice, especially in patients under age 50 years with masses less than 10 mm. In cases where embolization is performed for AML, a study (PUBMED:23392196) found that the majority of AML shrinkage occurs within the first year following embolization and appears to plateau after 3 years, which could impact the follow-up strategy. Overall, the necessity of follow-up for small renal AMLs seems to depend on various factors, including the size of the AML, the presence of multiple AMLs, patient age, and whether the AML is associated with genetic conditions. While some studies suggest that small, solitary AMLs may not require rigorous follow-up, other factors may necessitate closer monitoring. It is important for clinicians to consider individual patient circumstances when determining the appropriate follow-up strategy for renal AMLs.
Instruction: Is cyclophotocoagulation an option in the management of glaucoma secondary to Fuchs' uveitis syndrome? Abstracts: abstract_id: PUBMED:24390401 Is cyclophotocoagulation an option in the management of glaucoma secondary to Fuchs' uveitis syndrome? Background: Glaucoma is one of the sight-threatening complications of Fuchs' uveitis syndrome (FUS) and the most difficult to manage. The goal of this study was to assess the efficacy and safety of cyclophotocoagulation (CPC) in the management of glaucoma secondary to FUS. Methods: In a retrospective analysis, the charts of all patients with FUS referred to our clinic from January 2002 to December 2012 were reviewed. In patients with glaucoma or ocular hypertension, controlled eye pressure was defined using two alternative upper limits of 6 ≤ IOP ≤ 21 mmHg and 6 ≤ IOP ≤ 18 mmHg at 1 year follow-up. Results: One hundred and seventy-six patients with FUS were included in this study. Of those, 28 had ocular hypertension (OHT) or glaucoma. Mean maximal intraocular pressure (IOP) of patients with glaucoma/OHT was 40.8 mmHg. Twenty-three patients (82.1 %) had maximal IOP levels of 35 mmHg or higher. Sixteen patients with glaucoma/OHT underwent CPC alone (ten patients) or in combination with other surgical procedures (six patients). After 1 year, control of IOP for both upper limits (6 ≤ IOP ≤ 18 mmHg) and (6 ≤ IOP ≤ 21 mmHg) was achieved in six of ten patients (60 %) who received CPC alone, and in five of six patients (83.3 %) who required additional surgery after CPC. The mean number of cycloablative procedures was 1.3 (range 1-2) in the CPC alone group and 1.2 (range 1-2) in patients for whom CPC was used as adjunct therapy. There was no exacerbation of intraocular inflammation, no postoperative hypotony and no phthisis bulbi in the 16 patients who underwent CPC. Conclusions: CPC is a safe and effective procedure that should be considered if medical treatment has failed to control glaucoma in FUS. abstract_id: PUBMED:31622627 A literature review on Fuchs uveitis syndrome: An update. Fuchs uveitis syndrome is a chronic, typically unilateral, ocular condition characterized by an asymptomatic mild inflammatory syndrome that can result in cataract and secondary glaucoma. Diagnosis is largely clinical because the etiology remains obscure. Fuchs uveitis syndrome is most likely the result of different insults or pathogenic mechanisms, including infections, autoimmune diseases, hereditary and sympathetic dysfunction. According to the infectious theory, the infection of herpes simplex virus, ocular toxoplasmosis, cytomegalovirus, rubella virus, and other viruses have been implicated in the pathogenesis of the disease. There is no indication for corticosteroid therapy in Fuchs patients, and treatment should be mostly reserved to cataract and glaucoma. Phacoemulsification with intraocular lens implantation is a safe procedure with good visual outcomes in patients with Fuchs uveitis syndrome. Glaucoma is often unresponsive to treatment and should actively be monitored both preoperatively and postoperatively in these patients. We discuss the current understanding of Fuchs uveitis syndrome from its clinical presentations through recommended management and provide a comprehensive description and the updated knowledge of its pathophysiology. abstract_id: PUBMED:31268809 Alteration of MCP-1 and MMP-9 in Aqueous Humor Is Associated with Secondary Glaucoma in Fuchs Uveitis Syndrome. 
Purpose: To analyze changes in concentrations of pro- or anti-inflammatory cytokines, chemokines, or metalloproteinases (MMP) in the aqueous humor (AH) in Fuchs uveitis syndrome (FUS) patients with (FUSwG) or without (FUSwoG) secondary glaucoma. Methods: AH samples were collected from 43 eyes of Caucasian subjects (FUSwoG: n = 11; FUSwG: n = 8; control eyes: n = 24). Concentrations of IL-8, MCP-1, MMP-1,-2,-3,-9, SAA, TGFβ-1,-2,-3, and TNF-α were measured by multiplex bead assay analysis. Results: Compared with the control group, levels of IL-8, MCP-1, MMP-3, and MMP-9 in the AH were significantly increased in FUSwG and FUSwoG patients. In contrast to FUSwoG patients, MCP-1 and MMP-9 levels were lower in FUSwG, while expression of MMP-2, MMP-3, and TGFβ-1 was increased. Conclusion: In our experiments, glaucoma in FUS patients was associated with low levels of MCP-1 and MMP-9 in the AH, while expression of MMP-2, MMP-3, and TGFβ-1 increased. The alteration in these molecular patterns may contribute to the development of glaucoma in FUS. abstract_id: PUBMED:27191797 Outcomes of Ahmed Glaucoma Valve Implantation for Glaucoma Secondary to Fuchs Uveitis Syndrome. Purpose: To present the outcomes of Ahmed glaucoma valve implantation (AGV) in glaucoma secondary to Fuchs uveitis syndrome (FUS). Methods: In this retrospective chart review, two definitions of success were used: 6 mmHg ≤ intraocular pressure (IOP) ≤ 21 mmHg (success 1), and 6 mmHg ≤ IOP ≤ 21 mmHg and at least 25% reduction from baseline (success 2). Depending on the need of postoperative antiglaucoma medication, success was defined as either complete or qualified. Results: In total, 17 eyes of 17 patients were included. Complete success rates (both definitions) were 23.5% (n = 17) after 1 year and 23% (n = 13) after 3 years. Qualified success rates (both definitions) were 58.3% (n = 17) after 1 and 38.4% (n = 13) after 3 years. Encapsulated bleb formation was the most common complication (47% of eyes). Conclusions: AGV was moderately successful in the management of glaucoma secondary to FUS. Success rates are improved by medications, needling, and cycloablative procedures. abstract_id: PUBMED:16264346 Update on Fuchs' uveitis syndrome. Purpose Of Review: This update aims to summarize the current knowledge on Fuchs' uveitis syndrome and review publications in the last 10 years. Theories on the pathogenesis and etiology of Fuchs' uveitis syndrome are revisited and the management of cataract and glaucoma is updated to reflect recent surgical change. Recent Findings: Retrospective reviews have highlighted that patients with Fuchs' uveitis syndrome are often initially misdiagnosed. Studies comparing local inflammatory mediators and cell types have found differences in Fuchs' uveitis syndrome, but the differences in steroid response and degree of inflammation remain poorly understood. Local production of antibodies to rubella has been recently reported in the aqueous of all patients with Fuchs' uveitis syndrome and no controls. Excellent visual outcomes from phacoemulsification have been reported with reduced complications compared with extracapsular cataract extraction. Summary: Although a single etiological agent and a sensitive laboratory test for the diagnosis of Fuchs' uveitis syndrome is alluring, the diagnosis of Fuchs' uveitis syndrome remains clinical, at least for now. Phacoemulsification has increased the safety of cataract extraction, and the use of intraocular lens is generally safe.
The ideal lens material and design are not yet known, but silicone lenses may be best avoided. Glaucoma is often resistant to treatment and should actively be screened for in patients with Fuchs' uveitis syndrome. Medical and surgical treatment for reducing intraocular pressure should be especially aggressive in these patients. Vitrectomy appears to be safe in patients with visually significant vitreous opacification. abstract_id: PUBMED:30899842 Surgical management of glaucoma in Fuchs uveitis syndrome: Trabeculectomy or Ahmed glaucoma valve. Purpose: To evaluate the outcome of trabeculectomy versus Ahmed glaucoma valve (AGV) surgery in patients with Fuchs uveitis Syndrome (FUS). Methods: Twenty-eight eyes with uncontrolled glaucoma and at least 6 months of follow-up were enrolled. In 16 eyes trabeculectomy and in 12 eyes AGV implant were performed. The primary outcome measure was surgical success defined as 5 < intraocular pressure (IOP) ≤ 21 mmHg (criterion A) and 5 < IOP ≤ 16 mmHg (criterion B), with at least 20% reduction in IOP, either with no medication (complete success) or with no more than preoperative medications (qualified success). The sum of complete and qualified success was defined as cumulative success. Results: The mean age of the patients in the trabeculectomy group and the AGV group was 44.92 ± 9.02 and 45.76 ± 7.10 years, respectively (P = 0.79). The mean duration of follow-up was 23.06 ± 12.03 months in the trabeculectomy group and 22.83 ± 13.63 months in the AGV group (P = 0.96). The baseline mean IOP in trabeculectomy was 26.81 ± 6.69 mmHg which decreased to 11.61 ± 4.15 mmHg at last visit (P < 0.001). In the AGV group, mean IOP was 31.41 ± 6.76 at baseline that changed to 22.41 ± 5.09 at last visit (P = 0.005). According to criterion A, cumulative success rates were 100% and 91% at 6 months and 76% and 9% at 36 months in the trabeculectomy and the AGV group, respectively. Cumulative success rates at 6 months were 93% and 58% and 65% and 7% at 36 months according to criterion B in the trabeculectomy and the AGV group, respectively. Kaplan-Meier survival analysis revealed a significant association between surgical method and cumulative success rate over 36 months (based on criteria A: P = 0.02, and based on criteria B: P = 0.007). Conclusion: The success rate of trabeculectomy was higher than AGV in the surgical management of glaucoma in FUS during a medium-term follow-up. abstract_id: PUBMED:15947963 Fuchs uveitis syndrome--heterochromia is no "conditio sine qua non" Fuchs uveitis syndrome (FUS) is typically a unilateral, chronic, low-grade inflammation of the anterior segment which manifests in young adulthood. It is underdiagnosed because of its variable clinical spectrum. Although it can mimic various forms of anterior uveitis, it is important to make the correct diagnosis, based on clinical grounds, because both the management and prognosis differ from those of other uveitides. While its etiology remains unknown, it is possible that the disease has multiple causes that lead through different pathogenic mechanisms to the same clinical entity. Although many patients do not require treatment, it is not a benign condition, as often perceived. The high incidence of glaucoma makes it mandatory that all patients with FUS should be screened at regular intervals, even if they are not being actively treated and are relatively asymptomatic. abstract_id: PUBMED:3283429 Heterochromia complicata Fuchs We are indebted to I.
Loewenfeld and her ophthalmologist colleague H. S. Thompson for having called for an explanation of the clinical picture of Fuchs's heterochromic cyclitis in their "Critical Review." In the present author's view, the reason why so many different opinions are held concerning the clinical picture and the etiology is that only a few authors (Franceschetti, François, Georgiades, Hollwich, Huber, Kimura, Hogan and Thygeson, Perkins, Sugar and others) have been able to follow up collectives of their own, of up to 50 patients and more, for many years. None of the investigators deny the presence of typical precipitates, observed both under direct light and retroillumination; typical because they are only found in Fuchs's heterochromia and its variation Posner-Schlossman syndrome. The course is inflammation-free, since, in contrast to all other forms of diseases of the anterior uvea, neither external signs of inflammation nor posterior synechiae occur. Etiologically, according to the behavior of the pupil (François 1949, 1954), there is sympathicoparalysis, while according to Amsler and Huber as well as Verrey, Franceschetti and Herrmann there is a corresponding pathologically increased fluorescein permeability of the vessels in the anterior segment and an extreme tendency to bleeding, as shown by the filiform bleeding when the anterior chamber is opened. The sympathicoparalysis also explains the inflammation-free vascular fragility, with escape of cell elements, primarily protein (albumins) and lymphocytes (Verrey, Matteucci, Franceschetti and Hermann, and François) into the aqueous and vitreous. Therefore, the syndrome should no longer be termed "heterochromic cyclitis" but rather "heterochromia complicata" as proposed by E. Fuchs. In view of the somatic features, amounting to a status dysraphicus, the condition is probably connected with a congenital developmental anomaly of the sympathetic nerve (François); these features have been described by Franceschetti, Hollwich, Passow, Perkins, Sugar, Huber and many others. There may also be immunologic factors (Loewenfeld and Thompson); however, research into these is still only at an early stage. abstract_id: PUBMED:36250082 Factors associated with low prevalence of Fuchs' uveitis syndrome in Japan. Aim: To investigate the causes of low prevalence of Fuchs' uveitis syndrome (FUS) in Japan. Methods: Medical records of 160 patients diagnosed with FUS at 14 uveitis specialty facilities in Japan were reviewed retrospectively. Results: In 160 FUS patients, mean follow-up period before referral to our uveitis facilities was 31.6 ± 50.9 months. The most common reason for referral was idiopathic uveitis (61.9%), followed by cataract (25.0%), high intraocular pressure (IOP) including glaucoma (16.3%), and FUS (14.4%). Unilateral involvement was 96.9%. The most frequent ocular finding of FUS was anterior inflammation (91.9%), followed by stellate-shaped keratic precipitates (88.1%), cataract/pseudophakia (88.1%), diffuse iris atrophy (84.4%), vitreous opacity (62.5%), heterochromia (53.1%) and high IOP including glaucoma (36.3%). As treatments of these ocular findings, cataract surgery was performed in 52.5%, glaucoma surgery in 10.6%, and vitrectomy in 13.8%. Mean logMAR VA was 0.28 ± 0.59 at the initial visit, and decreased significantly to 0.04 ± 0.32 at the last visit. Proportions of FUS patients with BCVA <0.1 and 0.1 to <0.5 decreased, while that of ≥0.5 increased at the last visit compared with the initial visit.
Conclusions: Ocular findings of FUS in Japanese FUS patients were consistent with the characteristic features. The low prevalence of FUS in Japan may be a result of being overlooked and misdiagnosed as mild idiopathic uveitis, cataract, and/or glaucoma. abstract_id: PUBMED:35918942 Efficacy and safety of subliminal transscleral cyclophotocoagulation in uncontrolled glaucoma. Purpose: To determine the efficacy of subliminal cyclophotocoagulation in reducing intraocular pressure (IOP) and to assess the safety profile of the procedure. Methods: We reviewed the charts of all patients who underwent subliminal cyclophotocoagulation between August 2019 and August 2020. The IOP, number of antiglaucoma medications, and visual acuity were compared at baseline and at 6 months. The post-laser complications were noted. Paired t-test, Wilcoxon rank-sum test, and McNemar test were used for analysis. Results: This study included 40 eyes of 40 patients. The most frequent diagnosis was neovascular glaucoma (55%), followed by primary open-angle glaucoma (17.5%). The mean IOP reduced from 32 to 21 mmHg (mean IOP reduction: 32%, 95% confidence interval [CI]: 27%-37%, P < 0.001). Mean number of antiglaucoma medications declined from 3.2 to 1.9 (P < 0.001). Use of oral acetazolamide decreased from 62% to 5% (P < 0.001). The success rate of the procedure at 6 months was 55%. One patient had unexplained visual acuity decline after the procedure. Conclusion: Subliminal sub-cyclophotocoagulation is effective in treating refractory glaucoma in the Indian population. It decreases the medication burden. However, subliminal lasers can also cause vision-threatening complications such as hypotony, uveitis, and optic nerve hypoperfusion. So, clinicians should be cautious when using sub-cyclophotocoagulation in patients with good vision. Answer: Yes, cyclophotocoagulation (CPC) is an option in the management of glaucoma secondary to Fuchs' uveitis syndrome (FUS). A study assessing the efficacy and safety of CPC in managing glaucoma secondary to FUS found that CPC is a safe and effective procedure that should be considered if medical treatment has failed to control glaucoma in FUS. In this retrospective analysis, control of intraocular pressure (IOP) was achieved in 60% of patients who received CPC alone and in 83.3% of patients who required additional surgery after CPC. There was no exacerbation of intraocular inflammation, no postoperative hypotony, and no phthisis bulbi in the patients who underwent CPC (PUBMED:24390401). This finding is consistent with the understanding that glaucoma is often unresponsive to treatment in FUS patients and should be actively monitored. Surgical interventions, including CPC, are considered when medical management is insufficient (PUBMED:31622627). Moreover, the study on Ahmed Glaucoma Valve (AGV) implantation for glaucoma secondary to FUS also suggests that surgical interventions, including cycloablative procedures like CPC, can be part of the management strategy to control IOP in these patients (PUBMED:27191797). In summary, CPC is a viable surgical option for managing glaucoma in patients with Fuchs' uveitis syndrome when medical treatments are not adequate in controlling the disease.
Instruction: Autonomic markers and cardiovascular and arrhythmic events in heart failure patients: still a place in prognostication? Abstracts: abstract_id: PUBMED:22851700 Autonomic markers and cardiovascular and arrhythmic events in heart failure patients: still a place in prognostication? Data from the GISSI-HF trial. Aims: To investigate the prognostic value of autonomic variables in patients with symptomatic chronic heart failure (HF) treated according to current recommendations. Methods And Results: We analysed 24 h time-domain [standard deviation of all normal-to-normal RR intervals (SDNN)], frequency-domain [very low frequency and low frequency power (VLFP and LFP)], and non-linear [detrended fluctuation analysis (DFA)] heart rate variability, deceleration capacity (DC), and heart rate turbulence (HRT) in 388 sinus rhythm HF patients enrolled in the GISSI-HF Holter substudy [82% males, age 65 ± 10 years, New York Heart Association (NYHA) functional class III-IV 20%, left ventricular ejection fraction (LVEF) 33 ± 8%]. Cardiovascular (CV) mortality and combined sudden death + implantable cardioverter defibrillator (ICD) discharge were assessed as a function of continuous variables in the entire population and in patients with LVEF >30% in univariate and multivariable Cox proportional hazards models. After a median of 47 months, 57 patients died of CV causes and 47 experienced the arrhythmic endpoint. For CV mortality, VLFP, LFP, and turbulence slope (TS) improved predictive discrimination (c-index) and risk classification [integrated discrimination improvement (IDI)] when added to clinical variables [age ≥70 years, LVEF, non-sustained ventricular tachycardia (NSVT), serum creatinine], while for arrhythmic mortality although the c-index increased in all three autonomic markers, the results of the IDI were statistically significant only for TS when added to NSVT, serum creatinine, and ischaemic aetiology. In 194 patients with LVEF >30% (20 arrhythmic events), the hazard ratio of an impaired TS (<2.5 ms per RR interval) was 3.81 (95% confidence interval 1.35-10.7, P = 0.012) after adjustment for serum creatinine. Conclusions: Autonomic indexes still have independent predictive value on long-term outcome in HF patients. HRT may help in identifying patients with LVEF >30% at increased arrhythmic risk. Trial registration NCT00336336. abstract_id: PUBMED:34973286 The renin-angiotensin system and cardiovascular autonomic control in aging. Aging is the greatest independent risk factor for developing hypertension and cardiovascular-related diseases including systolic hypertension, vascular disease, ischemic events, arrhythmias, and heart failure. Age-related cardiovascular risk is associated with dysfunction of peripheral organ systems, such as the heart and vasculature, as well as an imbalance in the autonomic nervous system characterized by increased sympathetic and decreased parasympathetic neurotransmission. Given the increasing prevalence of aged individuals worldwide, it is critical to better understand mechanisms contributing to impaired cardiovascular autonomic control in this population. In this regard, the renin-angiotensin system has emerged as an important hormonal modulator of cardiovascular function in aging, in part through modulation of autonomic pathways controlling sympathetic and parasympathetic outflow to cardiovascular end organs.
This review will summarize the role of the RAS in cardiovascular autonomic control during aging, with a focus on current knowledge of angiotensin II versus angiotensin-(1-7) pathways in both rodent models and humans, pharmacological treatment strategies targeting the renin-angiotensin system, and unanswered questions for future research. abstract_id: PUBMED:17505630 Cardiovascular impact of the autonomic neuropathy of diabetes mellitus The neuropathic complications related to Diabetes may affect the somatic, sympathetic and parasympathetic nervous system. As a result, there are several clinical manifestations of diabetic neuropathy. They can be related to nervous system lesions of the genital, urinary, gastro-intestinal, skin and cardiovascular tissues. The results of these alterations are loss in the quality of life as well as increase of mortality indexes related to sudden death with cardiac arrhythmias and other causes. The cardiovascular autonomic neuropathy probably contributes to the bad prognosis of the coronary heart disease and of the heart failure in type 1 and type 2 diabetic patients. For diabetologists, the nervous complications of diabetes are the result of an increased influx of glucose to the neuronal and endothelial cells. Evidences show that, with the aim of preventing these complications, the diabetic patients should receive a precocious diagnosis and be instructed for having a good metabolic and blood pressure control. Use of angiotensin converting enzyme inhibitors and beta adrenergic blockers are probably of impact in the prevention of the cardiac autonomic complications of diabetes. abstract_id: PUBMED:33395034 Cardiovascular autonomic nervous system dysfunction in chronic kidney disease and end-stage kidney disease: disruption of the complementary forces. Purpose Of Review: Several nontraditional risk factors have been the focus of research in an attempt to understand the disproportionately high cardiovascular morbidity and mortality in chronic kidney disease (CKD) and end-stage kidney disease (ESKD) populations. One such category of risk factors is cardiovascular autonomic dysfunction. Its true prevalence in the CKD/ESKD population is unknown but existing evidence suggests it is common. Due to lack of standardized diagnostic and treatment options, this condition remains undiagnosed and untreated in many patients. In this review, we discuss current evidence pointing toward the role of autonomic nervous system (ANS) dysfunction in CKD, building off of crucial historical evidence and thereby highlighting the areas in need for future research interest. Recent Findings: There are several key mediators and pathways leading to cardiovascular autonomic dysfunction in CKD and ESKD. We review studies exploring the mechanisms involved and discuss the current measurement tools and indices to evaluate the ANS and their pitfalls. There is a strong line of evidence establishing the temporal sequence of worsening autonomic function and kidney function and vice versa. Evidence linking ANS dysfunction and arrhythmia, sudden cardiac death, intradialytic hypotension, heart failure and hypertension are discussed. Summary: There is a need for early recognition and referral of CKD and ESKD patients suspected of cardiovascular ANS dysfunction to prevent the downstream effects described in this review. There are many unknowns in this area and a clear need for further research.
abstract_id: PUBMED:37166736 Autonomic control of ventricular function in health and disease: current state of the art. Purpose: Cardiac autonomic dysfunction is one of the main pillars of cardiovascular pathophysiology. The purpose of this review is to provide an overview of the current state of the art on the pathological remodeling that occurs within the autonomic nervous system with cardiac injury and available neuromodulatory therapies for autonomic dysfunction in heart failure. Methods: Data from peer-reviewed publications on autonomic function in health and after cardiac injury are reviewed. The role of and evidence behind various neuromodulatory therapies both in preclinical investigation and in-use in clinical practice are summarized. Results: A harmonic interplay between the heart and the autonomic nervous system exists at multiple levels of the neuraxis. This interplay becomes disrupted in the setting of cardiovascular disease, resulting in pathological changes at multiple levels, from subcellular cardiac signaling of neurotransmitters to extra-cardiac, extra-thoracic remodeling. The subsequent detrimental cycle of sympathovagal imbalance, characterized by sympathoexcitation and parasympathetic withdrawal, predisposes to ventricular arrhythmias, progression of heart failure, and cardiac mortality. Knowledge on the etiology and pathophysiology of this condition has increased exponentially over the past few decades, resulting in a number of different neuromodulatory approaches. However, significant knowledge gaps in both sympathetic and parasympathetic interactions and causal factors that mediate progressive sympathoexcitation and parasympathetic dysfunction remain. Conclusions: Although our understanding of autonomic imbalance in cardiovascular diseases has significantly increased, specific, pivotal mediators of this imbalance and the recognition and implementation of available autonomic parameters and neuromodulatory therapies are still lagging. abstract_id: PUBMED:26914959 Autonomic Modulation by Electrical Stimulation of the Parasympathetic Nervous System: An Emerging Intervention for Cardiovascular Diseases. The cardiac autonomic nervous system has been known to play an important role in the development and progression of cardiovascular diseases. Autonomic modulation by electrical stimulation of the parasympathetic nervous system, which increases the parasympathetic activity and suppresses the sympathetic activity, is emerging as a therapeutic strategy for the treatment of cardiovascular diseases. Here, we review the recent literature on autonomic modulation by electrical stimulation of the parasympathetic nervous system, including vagus nerve stimulation, transcutaneous auricular vagal stimulation, spinal cord stimulation, and ganglionated plexi stimulation, in the treatment of heart failure, atrial fibrillation, and ventricular arrhythmias. abstract_id: PUBMED:33414727 Autonomic Modulation for Cardiovascular Disease. Dysfunction of the autonomic nervous system has been implicated in the pathogenesis of cardiovascular disease, including congestive heart failure and cardiac arrhythmias. Despite advances in the medical and surgical management of these entities, progression of disease persists as does the risk for sudden cardiac death. 
With improved knowledge of the dynamic relationships between the nervous system and heart, neuromodulatory techniques such as cardiac sympathetic denervation and vagal nerve stimulation (VNS) have emerged as possible therapeutic approaches for the management of these disorders. In this review, we present the structure and function of the cardiac nervous system and the remodeling that occurs in disease states, emphasizing the concept of increased sympathoexcitation and reduced parasympathetic tone. We review preclinical evidence for vagal nerve stimulation, and early results of clinical trials in the setting of congestive heart failure. Vagal nerve stimulation, and other neuromodulatory techniques, may improve the management of cardiovascular disorders, and warrant further study. abstract_id: PUBMED:16301095 Sleep and cardiovascular disease. Sleep is an important modulator of cardiovascular function, both in physiological conditions and in disease states. In individuals without a primary sleep disorder, sleep may exert significant effects on the autonomic nervous system, systemic hemodynamics, cardiac function, endothelial function, and coagulation. Some of these influences can be directly linked to specific modulatory effects of sleep stages per se; others result from the natural circadian rhythm of various physiological processes. There is a temporal association between physiological sleep and occurrence of vascular events, cardiac arrhythmias, and sudden death. Epidemiological and pathophysiological studies also indicate that there may be a causal link between primary sleep abnormalities (sleep curtailment, shift work, and sleep-disordered breathing) and cardiovascular and metabolic disease, such as hypertension, atherosclerosis, stroke, heart failure, cardiac arrhythmias, sudden death, obesity, and the metabolic syndrome. Finally, sleep disturbances may occur as a result of several medical conditions (including obesity, chronic heart failure, and menopause) and may therefore contribute to cardiovascular morbidity associated with these conditions. Further understanding of specific pathophysiological pathways linking sleep disorders to cardiovascular disease is important for developing therapeutic strategies and may have important implications for cardiovascular chronotherapeutics. abstract_id: PUBMED:30584680 Pathogenesis and prevention of risk of cardiovascular events in patients with pneumococcal community-acquired pneumonia. It is now well recognized that cardiovascular events (CVE) occur quite commonly, both in the acute phase and in the long-term, in patients with community-acquired pneumonia (CAP). CVE have been noted in up to 30% of patients hospitalized with all-cause CAP. One systematic review and meta-analysis of hospitalized patients with all-cause CAP noted that the incidence rates for overall cardiac events were 17.7%, for incident heart failure were 14.1%, for acute coronary syndromes were 5.3% and for incident cardiac arrhythmias were 4.7%. In the case of pneumococcal CAP, almost 20% of patients studied had one or more of these cardiac events. Recent research has provided insights into the pathogenesis of the acute cardiac events occurring in pneumococcal infections. With respect to the former, key involvements of the major pneumococcal protein virulence factor, pneumolysin, are now well documented, whilst systemic platelet-driven neutrophil activation may also contribute. 
However, events involved in the pathogenesis of the long-term cardiovascular sequelae remain largely unexplored. Emerging evidence suggests that persistent antigenaemia may predispose to the development of a systemic pro-inflammatory/prothrombotic phenotype underpinning the risk of future cardiovascular events. The current manuscript briefly reviews the occurrence of cardiovascular events in patients with all-cause CAP, as well as in pneumococcal and influenza infections. It highlights the close interaction between influenza and pneumococcal pneumonia. It also includes a brief discussion of mechanisms of the acute cardiac events in CAP. However, the primary focus is on the prevalence, pathogenesis and prevention of the longer-term cardiac sequelae of severe pneumococcal disease, particularly in the context of persistent antigenaemia and associated inflammation. abstract_id: PUBMED:11822975 Heart rate variability and cardiovascular mortality. The autonomic nervous system plays a major role in affecting the cardiac milieu and promoting malignant ventricular activity. The measurement of heart rate variability (HRV) is a noninvasive tool for assessing the status of the autonomic nervous system. A depressed HRV among post-myocardial infarction patients is a well-established risk factor for arrhythmic death. A reduced HRV has also been used to identify diabetic patients with autonomic neuropathy. This paper presents recent developments in the use of HRV, focusing on further refinement and validation of the use of both linear and nonlinear dynamics for sudden death prognostication, evaluation of the effect of specific pharmacologic agents on HRV, and assessment of HRV in health and in specific disease states that have been associated with an increased mortality risk. Answer: Yes, autonomic markers still have a place in prognostication for cardiovascular and arrhythmic events in heart failure patients. The GISSI-HF trial investigated the prognostic value of various autonomic variables, such as heart rate variability (HRV), deceleration capacity (DC), and heart rate turbulence (HRT), in patients with symptomatic chronic heart failure (HF). The study found that autonomic indexes like very low frequency power (VLFP), low frequency power (LFP), and turbulence slope (TS) improved predictive discrimination and risk classification for cardiovascular mortality when added to clinical variables. Specifically, HRT was helpful in identifying patients with left ventricular ejection fraction (LVEF) greater than 30% who were at increased arrhythmic risk (PUBMED:22851700). Moreover, the role of the autonomic nervous system in cardiovascular diseases is well recognized, with dysfunction contributing to the progression of heart failure and cardiac arrhythmias. Neuromodulatory techniques, such as vagal nerve stimulation, have emerged as potential therapeutic approaches for managing cardiovascular disorders by modulating autonomic activity (PUBMED:33414727, PUBMED:26914959). The importance of autonomic function in cardiovascular health is further supported by evidence linking autonomic dysfunction to increased cardiovascular morbidity and mortality in conditions like chronic kidney disease (CKD), end-stage kidney disease (ESKD) (PUBMED:33395034), and diabetes mellitus (PUBMED:17505630). 
Additionally, sleep has been identified as an important modulator of cardiovascular function, influencing the autonomic nervous system and potentially linking primary sleep abnormalities to cardiovascular and metabolic diseases (PUBMED:16301095). In summary, autonomic markers continue to be relevant in the prognostication of cardiovascular and arrhythmic events in heart failure patients, and their modulation through various therapies offers a promising avenue for the treatment of cardiovascular diseases.
Instruction: Do the parent-child relationship and parenting behaviors differ between families with a child with and without chronic illness? Abstracts: abstract_id: PUBMED:23660152 Do the parent-child relationship and parenting behaviors differ between families with a child with and without chronic illness? A meta-analysis. Objective: The present meta-analysis compared the quality of the parent-child relationship as well as parenting behaviors and styles of families with a child with chronic physical illness with families of healthy children or test norms. Methods: Empirical studies were identified with the help of electronic databases and cross-referencing. Based on 325 included studies, random-effects meta-analysis was performed. Results: Although most effect sizes were small or very small, the parent-child relationship tended to be less positive if a child had a chronic physical illness (g = -.16 standard deviation units). In addition, lower levels of parental responsiveness (emotional warmth; g = -.22) as well as higher levels of demandingness (control, monitoring; g = .18) and overprotection (g = .39) were observed in these families. However, effect sizes were heterogeneous and only significant for a limited number of diseases. There was also some evidence for higher levels of authoritarian (g = .24) and neglectful parenting (g = .51) as well as lower levels of authoritative parenting compared with families with healthy children (g = -.13). Effect sizes varied, in part, by length of illness, child age, rater, assessment method, and target of comparison. Conclusions: We conclude that most families with a child with chronic physical illness adapt well with regard to the parent-child relationship and parenting behaviors/styles. Nonetheless, some families of children with specific diseases-such as epilepsy, hearing impairment, and asthma-may have difficulties finding appropriate levels of protective behaviors, control, and parental warmth and building positive mutual relationships between parents and children. abstract_id: PUBMED:18049321 Examining the effects of maternal chronic illness on child well-being in single parent families. Objective: Chronic illness is highly prevalent among adults with children. It is therefore important to understand how parental illness may or may not have an impact on affected families. Findings thus far have suggested that differences between children with and without a sick parent are minimal, but there are individual and familial moderators of outcome. It is unclear whether these results are generalizable to single-parent families. The purpose of the present study was to examine whether maternal chronic illness affects multiple aspects of child functioning in a large, ethnically diverse sample of single-parent families compared to those not affected by illness. Potential moderators of differences, including maternal distress, parenting variables (aggravation and warmth), functional impairment related to illness, and demographic characteristics were also tested. Methods: Using data from the Child Development Study (CDS), 812 mother-child pairs were studied. Mothers completed measures of child internalizing, externalizing, and positive behaviors, while children completed a measure of depression. Results: The results indicated that overall there were no differences between children with or without a sick mother on the measures of well-being. Higher symptom levels among both cohorts were associated with maternal distress and aggravation in parenting only. 
However, children with a sick mother were more likely to have a consultation due to emotional difficulties. Conclusion: Several areas for future work on how illness affects single-parent families were identified, such as prospectively studying illnesses with a variable course and determining which protective factors promote resiliency for children in this difficult situation. abstract_id: PUBMED:29172624 Parenting behaviors and the well-being of children with a chronic physical condition. Introduction: Numerous studies have identified the importance of parenting behaviors to the well-being of children with chronic physical conditions. Synthesizing the findings of these studies has potential to identify which parenting behaviors are associated with specific aspects of child well-being. Method: We retrieved research reports addressing the relationship between parenting behaviors and well-being in children with chronic physical conditions, and categorized parenting behaviors based on Skinner, Johnson, and Snyder's (2005) core dimensions of parenting (warmth, rejection, structure, chaos, autonomy support, and coercion). Through meta-analysis, we examined relationships between parenting dimensions and child well-being variables. Results: Fifty-four reports from 47 unique studies met inclusion criteria. Parent warmth was associated with less child depression, better quality of life, better physical functioning, and fewer externalizing behavior problems. Parent rejection was associated with more child depression, internalizing/externalizing behavior problems, and poorer physical functioning. Parent structure was associated with better child physical functioning. Parent chaos was associated with poorer child physical functioning. Parent autonomy support was associated with better quality of life and fewer externalizing behavior problems. Parent coercion was associated with more child depression, poorer quality of life, poorer physical function, and more internalizing behavior problems. Conclusion: The results identify multiple, potentially modifiable parenting dimensions associated with well-being in children with a chronic condition, which could be targeted in developing family-focused interventions. They also provide evidence that research using Skinner's core dimensions could lead to conceptualization and study of parenting behaviors in ways that would enable comparison of parenting in a variety of health and sociocultural contexts. abstract_id: PUBMED:31697435 Families with pediatric type 1 diabetes: A comparison with the general population on child well-being, parental distress, and parenting behavior. Aims: The aim of this study was to compare families with a child (2-12 years) with type 1 diabetes (T1D) to families which are not confronted with chronic illness, with regard to children's well-being, parental distress, and parenting behavior. In addition, differences were explored between families whose child has optimal vs suboptimal glycemic control. Methods: Mothers, fathers, and children of 105 families with pediatric T1D completed questionnaires assessing child well-being, parental distress, and parenting. The control group consisted of 414 families without chronic illness. Results: With regard to child well-being, children with T1D had more adjustment difficulties (as reported by mothers) and lower quality of life (QoL) (as reported by mothers and fathers), whereas children themselves (8-12 years) reported higher QoL compared to controls.
In terms of parental distress, mothers, but not fathers, of children with T1D reported more stress, anxiety symptoms, and depressive symptoms than controls. With regard to parenting behavior, parent reports revealed less protectiveness in fathers and less autonomy support and responsiveness in both parents as compared to controls. No differences were found in parent-reported psychological control between parents of children with and without T1D, but children with T1D perceived lowered parental psychological control. Lastly, secondary analyses indicated that especially families with suboptimal child glycemic control showed more maternal distress and worse child well-being (according to parents). Conclusions: Families confronted with pediatric T1D differ from families without chronic illness: childhood T1D impacts parental perceptions of child well-being and differentially affects mothers' and fathers' distress levels and behaviors. abstract_id: PUBMED:31960342 The Role of General Parenting Style in Child Diet and Obesity Risk. Purpose Of Review: The childhood obesity epidemic is widely considered to have reached pandemic proportions. Across the world, children with obesity are facing numerous psychological and physiological issues that follow them into adulthood, frequently leading to chronic illness and early death. In an effort to combat the compounding effects of childhood overweight, researchers are attempting to identify biological and environmental contributors to child weight. Parenting styles are one recognized influence on child diet and body mass index (BMI). This review is a comprehensive examination of the literature on the influence of parenting style on childhood diet and BMI over the past 5 years. Recent Findings: Current research continues to support the use of traditional parenting style categories (i.e., authoritative, authoritarian, permissive, uninvolved/neglectful): however, newer subcategories of Baumrind's styles and the inclusion of previously underrepresented groups (e.g., fathers, cross cultural comparison studies) are shedding more light on the nuance of parenting's relationship with child weight. Parenting styles that focus on the balance of warmth and control (e.g., authoritative) in contrast to the styles dedicated to only one of these constructs (e.g., permissive, authoritarian) seem to promote the healthiest dietary habits and may be protective of child BMI. abstract_id: PUBMED:28333531 Randomized controlled trial of Triple P for parents of children with asthma or eczema: Effects on parenting and child behavior. Objective: Parents play an important role in children's illness management, in promoting child adjustment and reducing behavior problems. Little research has focused on the evaluation of parenting interventions in the context of childhood chronic illness. The aim of this study was to test the efficacy of a brief, group parenting intervention (Healthy Living Triple P) in improving parenting skills and parent adjustment, and reducing child behavioral and emotional difficulties in the context of childhood asthma and eczema. Method: One hundred seven parents of children with a diagnosis of asthma and/or eczema were randomly assigned to intervention (n = 52) or care as usual (CAU; n = 55). Parents completed self-report measures of their child's behavioral and emotional adjustment, their own parenting, and their own level of adjustment at pre- and postintervention and at 6-month follow-up. 
Parent-child interactions were observed and coded at each time point. The intervention consisted of 2 group sessions of 2 hr each delivered by trained, accredited practitioners. Results: Attrition was low, with T2 and T3 assessment completed by 84.6% and 80.8% of intervention families and 92.7% and 81.8% of CAU families, respectively. Intention-to-treat analyses indicated that overall parent-reported ineffective parenting as well as parental overreactivity reduced as a result of intervention. Parent report of child behavior problems also decreased, but there were no changes in children's emotional adjustment. No changes in observed parent or child behavior were found. Stress reduced for parents in the intervention group compared to the CAU group, but there were no changes in parental anxiety or depression. Effects showed evidence of reliable and clinical change and were maintained at 6-month follow-up. Conclusions: The intervention shows promise as an addition to clinical services for children with asthma and eczema and may have broader application to other chronic health conditions. abstract_id: PUBMED:17634187 The relationship of parental overprotection, perceived child vulnerability, and parenting stress to uncertainty in youth with chronic illness. Objective: To examine the relationship of parent-reported overprotection (OP), perceived child vulnerability (PCV), and parenting stress (PS) to youth-reported illness uncertainty, and to explore potential developmental differences. Method: Eighty-two children and 82 adolescents (n = 164) diagnosed with Type 1 diabetes mellitus (DM1) or asthma, completed a measure of illness uncertainty, while their parents completed measures of OP, PCV, and PS. Results: After controlling for demographic and illness parameters, both PCV and PS significantly predicted youth illness uncertainty in the combined sample. Within the child group, only PS significantly predicted illness uncertainty, whereas only PCV significantly predicted uncertainty for adolescents. Conclusion: Specific parenting variables are associated with youth-reported illness uncertainty; however, their relationship varies according to developmental level. Although OP has been identified as a predictor of child psychological outcomes in other studies, it does not appear to be associated with illness uncertainty in youth with DM1 or asthma. abstract_id: PUBMED:28834111 Parenting stress in caregivers of children with chronic physical condition-A meta-analysis. On the basis of the parenting stress model we compared levels of parenting stress in families with and without a child with a chronic physical condition and analysed correlates of parenting stress in families with a child with a chronic condition. A systematic search through electronic databases identified 547 relevant studies that were included in a random-effects meta-analysis. Parents of children with a chronic condition showed small to moderate elevations of general parenting stress and stress related to the parent-child relationship in particular. They showed moderate to large elevations in health-related parenting stress. Parents of children with cancer, cerebral palsy, HIV infection or AIDS, and spina bifida showed the highest levels of parenting stress. Stress levels also varied by illness severity and duration, child age, parental gender and mental health, marital status, marital quality, and levels of perceived support.
Behaviour problems of the child and low parental mental health were the strongest correlates of parenting stress. The present results assist with identifying parents at highest needs for interventions aimed at reducing parenting stress. These interventions should address the reduction of child behaviour problems, the promotion of parental mental health, the increase in marital quality and social support in general, and skills for dealing with stressors. abstract_id: PUBMED:37043221 Relationships between parenting and illness factors and child behaviour difficulties in children with asthma and/or eczema. Chronic health conditions such as asthma and eczema are common and are associated with significant psychosocial sequelae for children and their families. A number of parenting variables have been implicated in child health outcomes; however, there are gaps in understanding of the relationships between parenting and child adjustment in the context of chronic illness. This study examined the role that modifiable parenting factors including parenting style, self-efficacy, and adjustment play in explaining general and illness-related child behaviour and emotional problems. Parents (N = 107) of children diagnosed with asthma only (n = 22), eczema only (n = 59), or both conditions (n = 26) completed a range of parenting and child adjustment measures. The majority of the modifiable parenting factors (parents' self-efficacy with managing their child's internalising, asthma-related, and eczema-related behaviours; parent adjustment; and use of ineffective parenting strategies) made significant contributions to explaining variance child behaviour. Parenting variables consistently explained greater proportions of variance in general and illness-related child behaviour difficulties compared to demographic and illness factors and represent important intervention targets. abstract_id: PUBMED:18552704 Relationship of sleep parameters, child psychological functioning, and parenting stress to obesity status among preadolescent children. Objective: Insufficient sleep may be a significant contributing factor to the increase in pediatric obesity and thus may also contribute to adult obesity and chronic illness. Previous research has been based on large survey studies with consideration of demographics and lifestyle factors (e.g., snacking and TV watching) but not of child psychological/behavioral functioning and parenting factors. Method: This study investigated the relationship of sleep duration to obesity status in 819 children ages 8 to 11 years old, with consideration of demographics, clinical elevations in child psychological/behavioral functioning, and parenting stress. Results: In unadjusted and adjusted analyses, parent-reported child sleep duration was significantly associated with the odds of obesity with an increase of 41% for each 1-hour reduction in sleep duration. In addition to sleep duration, only median neighborhood income was significantly related to obesity status. Indices of child psychological/behavioral functioning and parenting stress were associated with sleep duration but not with obesity, and adjusting for these behavioral and parenting characteristics did not appreciably alter the relationship between sleep duration and obesity status. Exploratory gender-specific analyses found that mean sleep duration was significantly associated with the odds of obesity for boys but not for girls. 
Conclusions: These results show that the relationship of shorter sleep duration to a greater likelihood of being obese persists even after adjusting for potential confounders of child psychological/behavioral functioning and parenting stress. Gender-specific associations are similar to findings reported in samples that include adolescents. Answer: Yes, the parent-child relationship and parenting behaviors do differ between families with a child with chronic illness and those without. A meta-analysis comparing the quality of the parent-child relationship and parenting behaviors/styles found that families with a child with chronic physical illness tend to have a less positive parent-child relationship (g = -.16), lower levels of parental responsiveness (g = -.22), and higher levels of demandingness (g = .18) and overprotection (g = .39). There was also evidence for higher levels of authoritarian (g = .24) and neglectful parenting (g = .51), as well as lower levels of authoritative parenting (g = -.13) compared to families with healthy children. However, these effects were heterogeneous and only significant for a limited number of diseases (PUBMED:23660152). In families with a child with a chronic condition such as type 1 diabetes, parents reported more adjustment difficulties and lower quality of life in their children (although the children themselves reported higher quality of life), while mothers, but not fathers, reported more stress, anxiety symptoms, and depressive symptoms than controls. Parenting behavior also differed, with less protectiveness in fathers and less autonomy support and responsiveness in both parents compared to controls (PUBMED:31697435). Parenting stress was found to be higher in parents of children with a chronic condition, with moderate to large elevations in health-related parenting stress. The levels of stress varied by illness severity and duration, child age, parental gender and mental health, marital status, marital quality, and levels of perceived support (PUBMED:28834111). In the context of childhood asthma and eczema, parenting interventions like Healthy Living Triple P showed efficacy in improving parenting skills and reducing child behavioral problems and parental stress (PUBMED:28333531). Overall, these findings suggest that chronic physical illness in a child can impact the parent-child relationship and parenting behaviors, leading to increased stress and challenges in parenting. However, most families with a child with chronic physical illness adapt well, although some may have difficulties in finding appropriate levels of protective behaviors and building positive mutual relationships between parents and children (PUBMED:23660152).
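As a rough illustration of how pooled effect sizes such as those cited above (for example, g = -.16 for the parent-child relationship) are produced, the sketch below applies DerSimonian-Laird random-effects pooling, the general approach named in the meta-analysis abstracts (PUBMED:23660152, PUBMED:28834111). The per-study effect sizes and variances are invented placeholders for demonstration only, not the published study data.

# Minimal sketch, assuming invented inputs: DerSimonian-Laird random-effects pooling
# of standardized mean differences (Hedges' g).
import math

g = [-0.30, -0.10, -0.25, 0.05, -0.20]    # hypothetical per-study effect sizes
v = [0.020, 0.015, 0.030, 0.025, 0.018]   # hypothetical within-study variances

w_fixed = [1.0 / vi for vi in v]                                   # fixed-effect weights
g_fixed = sum(wi * gi for wi, gi in zip(w_fixed, g)) / sum(w_fixed)

# Cochran's Q and the DerSimonian-Laird estimate of between-study variance tau^2
Q = sum(wi * (gi - g_fixed) ** 2 for wi, gi in zip(w_fixed, g))
df = len(g) - 1
C = sum(w_fixed) - sum(wi ** 2 for wi in w_fixed) / sum(w_fixed)
tau2 = max(0.0, (Q - df) / C)

w_re = [1.0 / (vi + tau2) for vi in v]                             # random-effects weights
g_re = sum(wi * gi for wi, gi in zip(w_re, g)) / sum(w_re)
se_re = math.sqrt(1.0 / sum(w_re))
ci = (g_re - 1.96 * se_re, g_re + 1.96 * se_re)

print(f"pooled g = {g_re:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f}), tau^2 = {tau2:.3f}")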
Instruction: Does methotrexate confer a significant advantage over misoprostol alone for early medical abortion? Abstracts: abstract_id: PUBMED:17439563 Does methotrexate confer a significant advantage over misoprostol alone for early medical abortion? A retrospective analysis of 8678 abortions. Objective: The objective of this study was to compare efficacy for four medical abortion regimens used in one clinic setting: (1) misoprostol alone, (2) oral methotrexate + buccal misoprostol, (3) oral methotrexate + vaginal misoprostol, and (4) intramuscular methotrexate + vaginal misoprostol. Design: Retrospective analysis of data from clinical records. Setting: An anonymous women's health centre in Latin America, providing medical abortion services since 2001 in a highly restrictive setting. Population: A total of 8678 women with gestations <56 days, who sought a medical abortion between April 2002 and December 2004. Methods: Chi-square test was performed to compare patient characteristics by abortion outcome (success/failure). The impact of selected variables on method success was explored through logistic regression. A second regression analysis was conducted with a subsample (n = 4022), for which data on parity and previous abortion(s) were available. Main Outcome Measure: Abortion outcome (success/failure) at 2-week follow up. Results: Success rates for the three methotrexate regimens ranged from 81.7 to 83.5% and did not differ significantly; the misoprostol-alone regimen had a success rate of 76.8%. Efficacy was significantly higher for the three combined methotrexate regimens compared with misoprostol alone and remained so in the multivariate model (OR = 1.35). In the final regression, lower gestational age, being nulliparous, and having no previous abortions were positively correlated with method success. Conclusions: In this real-use setting, methotrexate appears to confer a significant advantage over misoprostol alone for early medical abortion. This finding is important for settings where mifepristone remains unavailable. Additional factors such as gestational age limits and patient preference should be considered in regimen selection. abstract_id: PUBMED:24363712 Misoprostol alone or in combination with methotrexate for termination of pregnancy at first trimester. Abortion is an important problem in obstetrics throughout the world. The common and standard method for pregnancy termination at first trimester is surgery (curettage). Nowadays, an effective method of pregnancy termination at first trimester is medical treatment. The aim of this study is to compare misoprostol alone or in combination with methotrexate for pregnancy termination at first trimester. This study is a randomized clinical trial. A total of 200 pregnant women at first trimester were randomly divided into two groups for termination of pregnancy. The first group received 800 μg vaginal misoprostol. If conceptus residual remained, the same dose of misoprostol was repeated. The second group received 50 mg/m² intramuscular methotrexate, and then 800 μg vaginal misoprostol was administered after 72 h. If conceptus residual remained, the same dose of misoprostol was repeated after 24 h. Abdominal ultrasonography was performed on the seventh day for both groups. If conceptus residual remained or the pregnancy continued, curettage was performed. The results were analyzed statistically with the chi-square test and Student's t-test, using the SPSS software.
A P-value equal to or smaller than 0.05 was considered statistically significant. In this study, 83% of the first group and 81% of the second group had successful abortion. There was a significant correlation between the dose of misoprostol and abortion (P = 0.001) and between type of pregnancy and need for curettage (P < 0.000) in both groups, but there was no significant correlation between gestational age and the number of doses administered (P = 0.932). In conclusion, it seems that pregnancy termination by misoprostol alone or in combination with methotrexate is a safe and cost-effective method. abstract_id: PUBMED:10518234 Methotrexate and misoprostol used alone or in combination for early abortion. The purpose of this study was to compare the outcome and side effects of using the drugs methotrexate and misoprostol, alone or in combination, to induce abortion. A total of 108 subjects who had requested elective termination of pregnancy and medical abortion at 9 weeks gestation or less were randomized into three groups. The first group received 50 mg/m2 intramuscular (i.m.) methotrexate on day 1 and, if the hCG level had risen by > 50% of the initial level on day 4, a second dose was given. They were then followed up at weekly intervals up to day 21. Group 2 received 800 micrograms vaginal misoprostol on day 1 and, if ultrasound showed a gestational sac on day 4, they received a repeat dose and were re-examined on day 7. Group 3 received 50 mg/m2 methotrexate intramuscularly followed 3 days later by 800 micrograms vaginal misoprostol and were re-examined on day 7. Complete abortion occurred in 25 (69%) of the 36 subjects in group 1, 21 (58%) of the 36 subjects in group 2, and 32 (89%) of the 36 subjects in group 3. The complete abortion rate in group 3 was significantly higher than that of both group 1 and group 2 (p < 0.05). The incomplete abortion rate was significantly higher in group 2 as compared with both of the other groups (p < 0.05). There were significant differences between the mean gestational age of the successful abortions and the failures in group 1 (no abortion occurred at more than 49 days gestation), but not in groups 2 or 3. Vaginal bleeding in subjects who successfully aborted began within 16 +/- 4 days in group 1 after the first dose, and within 24 h in 18 (86%) of the 21 subjects in group 2 and 27 (84%) of the 32 subjects in group 3 after the misoprostol dose. The drugs caused no serious or prolonged side effects. The combination of methotrexate and misoprostol is a more effective abortifacient regimen than when either drug is used alone. abstract_id: PUBMED:7523739 Methotrexate and misoprostol vs misoprostol alone for early abortion. A randomized controlled trial. Objective: To compare the safety and efficacy of early abortion by administration of methotrexate and misoprostol vs administration of misoprostol alone. Design: Randomized controlled trial. Setting: San Francisco (Calif) General Hospital. Patients: Pregnant women at 56 days' gestation or less seeking elective abortion. Sixty-three women volunteered for the trial; 61 completed the study and are included in the analysis. Intervention: Intramuscular administration of 50 mg of methotrexate per square meter of body surface area followed 3 days later by vaginal administration of 800 micrograms of misoprostol (group 1) or the same dose of misoprostol given alone (group 2). The misoprostol dose was repeated 24 hours later if abortion had not occurred.
Main Outcome Measures: Successful abortion, duration of vaginal bleeding, side effects, and change in beta-human chorionic gonadotropin (beta-hCG) level. An abortion was considered successful if the pregnancy ended without requiring a surgical procedure. Results: Complete abortion occurred in 28 (90%) of 31 patients in group 1 and 14 (47%) of 30 patients in group 2 (P < .001). Seventeen (61%) of the 28 women in group 1 who aborted did so the same day as misoprostol administration; vaginal bleeding lasted a mean (+/- SD) of 10 (+/- 4) days, and beta-hCG level was less than or equal to 10 IU/L by a mean of 31 (+/- 6) days after methotrexate administration. The 11 other women in group 1 who aborted did so after a mean delay of 29 (+/- 11) days; vaginal bleeding lasted 7 (+/- 4) days, and beta-hCG level was less than or equal to 10 IU/L by a mean of 24 (+/- 11) days after the abortion. There were three treatment failures in group 1: two ongoing pregnancies (6%) and one incomplete abortion (3%). For the 14 women with successful abortions in group 2, vaginal bleeding lasted a mean of 10 (+/- 6) days and beta-hCG level was less than or equal to 10 IU/L by a mean of 39 (+/- 18) days after the misoprostol. There were 16 treatment failures in group 2: eight ongoing pregnancies (27%), and eight incomplete abortions (27%). Methotrexate side effects were minimal. Misoprostol side effects were diarrhea in 18% and nausea and vomiting in 5%. Conclusions: Methotrexate and vaginal misoprostol are more effective than misoprostol alone. Both drugs are available throughout the United States, and both drugs are inexpensive. This combination may offer an alternative to the use of antiprogestin and prostaglandin for medical abortion. abstract_id: PUBMED:10576234 Medical management of non-viable early first trimester pregnancy. Objective: To compare the efficacy of intramuscular methotrexate plus vaginal misoprostol to vaginal misoprostol alone in completing abortion in women with non-viable early first trimester pregnancy. Method: Twenty-one women with non-viable pregnancy up to 49 days gestation were randomized to receive intramuscular methotrexate, followed 2 days later by vaginal misoprostol or misoprostol alone. We also collected patient satisfaction information. Result: Complete abortion occurred in all 12 (100%) women in the combined group and eight of nine (89%, RR = 1.13, CI 0.89-1.42) women in the misoprostol only group. Of the women, 75% rated their experience as good and would choose medical management again. Conclusion: Either methotrexate plus misoprostol or misoprostol alone effectively completed abortion in women with non-viable early pregnancy and represent acceptable medical alternatives to surgery or expectant management. abstract_id: PUBMED:18389885 Good results of early medicinal pregnancy termination in a general practice on Curaçao. Objective: To evaluate the results of early medicinal pregnancy termination. Design: Prospective, non-blinded, clinical study. Method: Between 2001 and 2005, 371 women presented to a medium-sized solo general practice in the centre of Willemstad, Curaçao, to request termination of an undesired pregnancy. Of these women, 74 first indicated a preference for abortion via curettage, and the remaining 297 chose to be treated medicinally. In the first trimester of the pregnancy, they were given methotrexate i.m. and misoprostol vaginally 3-5 days later, or only misoprostol vaginally when methotrexate was unavailable.
The result was evaluated by abdominal echography two days after the treatment. Results: The result of the medicinal pregnancy termination could be determined in 271 of the 297 women. In 219 (81%) of these, the pregnancy had been terminated after a single treatment, and in 29 (11%) success was achieved after a second vaginal administration of misoprostol. The total success rate of the medicinal pregnancy termination was 92% (248/271). Curettage was required in 23 women (8%) after the medicinal treatment had failed. There was no significant difference in success rate between treatment with the methotrexate-misoprostol combination and treatment with misoprostol alone. Conclusion: Medicinal pregnancy termination with methotrexate and misoprostol, or with misoprostol alone, was a safe method with a relatively small risk of an ongoing pregnancy. It was a good alternative for surgical abortion. abstract_id: PUBMED:10342080 Comparing abortion induced with methotrexate and misoprostol to methotrexate alone. This was a dual-cohort study performed to determine if, by using methotrexate alone, abortions could be induced with fewer side effects, without sacrificing effectiveness and patient satisfaction. The subjects were 358 consecutive patients who requested elective medical termination of pregnancy at < 7 weeks gestation. Women were given information and allowed to choose between methotrexate alone (50 mg/m2 intramuscularly) or methotrexate followed 4 days later by 800 micrograms misoprostol vaginally. The two groups were compared with respect to the number of surgical aspirations required, the number of days until the abortion was completed, side effects, and the number of women who would choose the same method again. The surgery rate for methotrexate was 17.2%, whereas for the combination it was 10.9%. The mean number of days until the abortion was complete was 23 and 12, respectively. Side effects, mean pain scores, total days of bleeding, and satisfaction rates were not significantly different in the two groups. The high failure rate with methotrexate alone suggests that it should not be offered despite the high level of satisfaction. abstract_id: PUBMED:9166323 Medical abortion in early pregnancy: a review of the evidence. Objective: To review the literature on medical abortion in early pregnancy. Data Sources: I performed a MEDLINE search, supplemented by bibliographies of articles and textbooks. In addition, investigators in the field were consulted to identify other sources. The review was limited to reports in English or French concerning antiprogestins or methotrexate used either alone or in combination with a prostaglandin. Methods Of Study Selection: Only those mifepristone studies with 100 or more participants were included. Those participants who received the prostaglandin sulprostone were excluded because this drug is no longer used with mifepristone. Methotrexate studies of any size were included. All reports were categorized by study type, and the evidence was evaluated using the U.S. Preventive Services Task Force rating system. Tabulation, Integration, And Results: Both mifepristone and methotrexate, when used with a prostaglandin, can induce abortion safely in early pregnancy. Class I evidence supports a class A (good) recommendation that oral, single mifepristone doses of 200 mg and 600 mg have similar efficacy when used with a prostaglandin. Sequential and single-dose regimens have comparable efficacy.
Vaginal misoprostol at 800 micrograms as an augmenting agent appears superior to the same dose given orally. With methotrexate abortion, 800 micrograms of misoprostol given vaginally 7 days after methotrexate is superior to the same dose given 3 days after. In addition, methotrexate in combination with misoprostol is more effective than misoprostol alone. Conclusion: Medical abortion with mifepristone or methotrexate in combination with a prostaglandin is safe and effective. However, the risk of hemorrhage and gastrointestinal side effects is greater with medical abortion than with suction curettage. Further research should be done to compare mifepristone and methotrexate abortions, to determine the upper gestational age limit, and to find the best way to provide this service in the U.S. health care system. abstract_id: PUBMED:16997301 Misoprostol alone vs. methotrexate followed by misoprostol for early abortion. N/A abstract_id: PUBMED:17280669 Alternatives to mifepristone for early medical abortion. Objective: To review published reports of first-trimester medical abortion regimens that do not include mifepristone. Methods: Reports listed in Pubmed and Medline on prospective and controlled trials of the efficacy of misoprostol, alone or associated with methotrexate, for first-trimester abortion were analyzed if they included more than 100 participants and were published since 1990. Results: The efficacy of regimens using misoprostol alone ranged from 84% to 96%, and when misoprostol was used with methotrexate the efficacy ranged from 70% to 97%. Efficacy rates were influenced by follow-up interval. Treatment for infection, bleeding, and incomplete abortion were infrequent with both methods (0.3%-5%). Conclusion: Alone or in combination with methotrexate, misoprostol is an efficacious alternative to mifepristone for the medical termination of pregnancy. Answer: Yes, methotrexate does confer a significant advantage over misoprostol alone for early medical abortion. A retrospective analysis of 8678 abortions found that success rates for the three methotrexate regimens ranged from 81.7 to 83.5%, which did not differ significantly from each other, but were significantly higher than the success rate of 76.8% for the misoprostol-alone regimen. The efficacy was significantly higher for the combined methotrexate regimens compared with misoprostol alone, even in a multivariate model (OR = 1.35) (PUBMED:17439563). Additionally, a randomized controlled trial comparing methotrexate and misoprostol to misoprostol alone for early abortion showed that complete abortion occurred in 90% of patients in the combination group compared to 47% in the misoprostol alone group (PUBMED:7523739). Another study reported that the combination of methotrexate and misoprostol is a more effective abortifacient regimen than when either drug is used alone (PUBMED:10518234). Furthermore, a study comparing misoprostol alone or in combination with methotrexate for pregnancy termination at the first trimester found that 83% of the first group (misoprostol alone) and 81% of the second group (combination) had successful abortion, indicating that both methods are safe and cost-effective, although the study did not show a significant advantage of the combination over misoprostol alone (PUBMED:24363712). 
In contrast, a study evaluating the results of early medicinal pregnancy termination in a general practice on Curaçao found no significant difference in success rate between treatment with the methotrexate-misoprostol combination and treatment with misoprostol alone (PUBMED:18389885). However, the overall evidence suggests that methotrexate combined with misoprostol generally provides a higher success rate for early medical abortion compared to misoprostol alone.
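To make the size of the advantage reported in PUBMED:7523739 concrete, the following sketch computes the success rates, the odds ratio and a Wald 95% confidence interval from the 2x2 table implied by that abstract (28 of 31 complete abortions with methotrexate plus misoprostol versus 14 of 30 with misoprostol alone). It is only an illustration of the arithmetic behind such comparisons, not a re-analysis endorsed by the original authors.

# Minimal sketch: odds ratio with Wald 95% CI for the two-arm comparison in PUBMED:7523739.
import math

succ_a, fail_a = 28, 3    # methotrexate + misoprostol (group 1)
succ_b, fail_b = 14, 16   # misoprostol alone (group 2)

odds_ratio = (succ_a * fail_b) / (fail_a * succ_b)
se_log_or = math.sqrt(1/succ_a + 1/fail_a + 1/succ_b + 1/fail_b)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

rate_a = succ_a / (succ_a + fail_a)
rate_b = succ_b / (succ_b + fail_b)
print(f"success rates: {rate_a:.0%} vs {rate_b:.0%}")
print(f"odds ratio = {odds_ratio:.1f}, 95% CI ({lo:.1f}, {hi:.1f})")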
Instruction: Carisolv: an alternative to NaOCl in immature root canals? Abstracts: abstract_id: PUBMED:15946265 Carisolv: an alternative to NaOCl in immature root canals? Aim: To test the null hypothesis that Carisolv is no more effective than 1% sodium hypochlorite in cleaning uninstrumented, immature root canals. Methodology: A total of 240 uniform, immature ovine incisors were decoronated at the CEJ level and randomly divided into four groups of 60. After gross pulp extirpation, canals were flooded with normal saline (negative control), 1% NaOCl, Carisolv or 5% NaOCl (positive control) and incubated for 10 min (group 1), 20 min (group 2), 30 min (group 3) or 30 min, refreshing irrigant at 10 and 20 min (group 4). SEM photomicrographs of canal wall debris in the apical, middle and coronal thirds were scored against a 5-point scale. Internal consistency was assessed by kappa statistics. Debris scores for different irrigant regimes at different canal levels were analysed by non-parametric tests (P < 0.05). Results: Canals were consistently cleaner in the coronal and middle than apical thirds. NaOCl (5%) was consistently most effective. Carisolv and NaOCl (1%) were no more effective than normal saline in group 1 (P > 0.05), but significantly more effective than normal saline in groups 2 (middle and apical 1/3), 3 and 4 (P < 0.05). Carisolv and NaOCl (1%) had comparable activity in groups 1, 2 (middle and apical thirds) and 3, but NaOCl (1%) was significantly more effective than Carisolv in group 4 (coronal and middle thirds). Conclusions: 1. The ovine incisor model presents opportunities to investigate irrigation regimes under controlled ex-vivo conditions. 2. NaOCl (5%) remains the most effective irrigant for rapid debris removal in immature root canals. 3. Carisolv cleans pulp debris from the walls of immature root canals as effectively as NaOCl (1%) during static, unrefreshed wall contact for between 20 and 30 min. 4. Refreshment of NaOCl (1%) enhances its cleaning ability above that of Carisolv. abstract_id: PUBMED:24916092 Available chlorine consumption from NaOCl solutions passively placed in instrumented human root canals. Aim: To monitor chlorine consumption from nonagitated aqueous sodium hypochlorite (NaOCl) solutions in human root canals using a recently developed assay, which can determine the order of magnitude of available chlorine in small volumes of liquid. Methodology: The root canals of 80 extracted single-rooted human teeth were instrumented to ProTaper Universal F4 and irrigated using 1% NaOCl. Subsequently, canals were irrigated with copious amounts of deionized water to rinse out the residual chlorine. The teeth were then sealed externally and placed in a water bath of 37 °C. Root canals were filled with NaOCl of 1%, 2.75%, 5.5%, or distilled water for 1, 10, 100 or 1000 min (n = 5 teeth per solution and time). Consumption of chlorine was measured using paper points pre-impregnated with 15% potassium iodide. Colour change of the paper points was determined photo-electronically, assessing their red value after absorbing solutions from root canals. Measurements were compared to a standard series of NaOCl down to 0.001% (n = 5 paper points per concentration). Results: Red values of the paper points inserted into the root canal were affected by initial NaOCl concentration and time (two-way ANOVA, P < 0.05).
If NaOCl concentrations above 0.1% are considered to be clinically relevant, then 5.5% NaOCl retained its activity in the root canal for more than 100 min, whereas 1% NaOCl lost its activity between 10 and 100 min. Conclusions: Nonagitated NaOCl solutions can remain biologically active in human root canals for extended time periods. abstract_id: PUBMED:29519975 The efficiency of different irrigation solutions and techniques for the removal of triple antibiotic paste from simulated immature root canals. Introduction: The purpose of this study was to evaluate the efficiencies of different irrigation protocols in the removal of triple antibiotic paste (TAP) from root canals. Materials And Methods: A total of 127 extracted human maxillary incisor teeth were prepared. Then, root-end resection of 3 mm was accomplished to simulate an immature apex model. The root canals were filled with TAP and, after 21 days, randomly divided into nine groups according to irrigation systems and solutions (n = 13). Conventional irrigation (CI) groups - Group 1: Root canal irrigation was performed with CI by Peracetic acid (PAA) solution, Group 2: Root canal irrigation was performed with CI by etidronic acid 1-hydroxyethylidene-1, 1-bisphosphonate (HEBP) + sodium hypochlorite (NaOCl) solution, Group 3: Root canal irrigation was performed with CI by ethylenediaminetetraacetic acid (EDTA)/NaOCl solutions. Vibringe system groups - Group 4: Root canal irrigation was performed with Vibringe system by PAA solution, Group 5: Root canal irrigation was performed with Vibringe system by HEBP + NaOCl solution, Group 6: Root canal irrigation was performed with Vibringe system by EDTA/NaOCl solution. EndoVac system groups - Group 7: Root canal irrigation was performed with EndoVac system by PAA solution, Group 8: Root canal irrigation was performed with EndoVac system by HEBP + NaOCl solution, Group 9: Root canal irrigation was performed with EndoVac system by EDTA/NaOCl solution. Control Group: (n = 0). Samples were sectioned vertically, and the amount of remaining medicament was scored for each root half and data were statistically analyzed. Results: Among the irrigation systems, CI groups showed the highest scores at both apical and coronal parts (P < 0.05). In comparisons among the solutions, at the apical part, PAA groups showed the highest scores (P < 0.05). At the coronal part, EDTA + NaOCl groups showed the lowest score values (P < 0.05). Conclusion: The use of irrigation systems improved the removal of TAP from the simulated immature root canals. Also, as an irrigation solution, EDTA gives more promising results than PAA and HEBP solutions. abstract_id: PUBMED:28642893 Effectiveness of different irrigation protocols on calcium hydroxide removal from simulated immature teeth after apexification. Aim: To evaluate the effectiveness of different irrigation solutions and ultrasonic activation of the irrigation solutions on the removal of calcium hydroxide (Ca(OH)2) from the simulated immature root canals after apexification. Materials and methods: One-hundred and one single-rooted teeth were used. The root canals were shaped with ProTaper rotary files up to F5. Simulation of roots with immature apices was carried out using size 4 Unicore drills. An injectable Ca(OH)2 was injected into each root canal, and packed to the working length. Then, cotton pellets were placed over canal orifices, and apical and coronal parts of the roots were sealed with resin-modified glass ionomer cement, and light cured.
Specimens were stored in distilled water for 3 months at 37°C. After 3 months, the temporary coronal seal was removed and the samples were randomly divided into: (a) saline (n = 20), (b) ultrasonic activation of saline (n = 20), (c) sodium hypochlorite (NaOCl) (n = 20), (d) ultrasonic activation of NaOCl (n = 15), (e) chlorhexidine digluconate (CHX) (n = 20) and one positive control group (n = 3) and one negative control group (n = 3). The amount of remaining Ca(OH)2 on the canal walls was measured under a stereomicroscope with 30× magnification. Comparisons between groups were made by the non-parametric Kruskal-Wallis test and Dunn post-test at a significance level of p < 0.05. Results: There were no significant differences among the saline, ultrasonic activation of saline, NaOCl, ultrasonic activation of NaOCl and CHX (p > 0.05) groups. Conclusions: Irrigation solutions and ultrasonic activation of the irrigation solutions could not completely remove Ca(OH)2 from the simulated immature root canals. abstract_id: PUBMED:23980743 Efficacy of endodontic applications of ozone and low-temperature atmospheric pressure plasma on root canals infected with Enterococcus faecalis. Unlabelled: This study aimed to compare the antimicrobial efficacy of low-temperature atmospheric pressure plasma (LTAPP) design and gaseous ozone delivery system with 2.5% NaOCl on Enterococcus faecalis in root canal walls and dentine tubules. The samples were divided into LTAPP (n = 12), ozone (n = 12), NaOCl (positive control, n = 12) and saline (negative control, n = 6) groups. Microbial samples were collected using paper points and dentin chips from root canals. Antimicrobial efficacy was assessed by counting the colony-forming units of Ent. faecalis before and after each irrigation protocol. Data were analysed using Kruskal-Wallis, Wilcoxon signed-rank, Friedman and Bonferroni t (Dunn's test)-tests (P = 0.05). The microbial sampling with paper points showed antibacterial efficacy of NaOCl, LTAPP, ozone and saline in descending order, respectively (P < 0.05). The microbial sampling with dentin chips demonstrated a superior efficacy of LTAPP compared with NaOCl in the middle third (P < 0.05), while both had similar effects in coronal and apical thirds (P > 0.05). NaOCl and LTAPP were better than ozone at the coronal and middle parts of the root canals (P < 0.05). These findings led us to suggest that LTAPP, which has no thermal and chemical effects, may be of great aid in endodontic treatment. Significance And Impact Of The Study: The present study handles different perspectives on chemomechanical preparation of root canals. Ozone and low-temperature atmospheric pressure plasma (LTAPP) were investigated to determine whether they could be an alternative to NaOCl. Up to now, chemical solutions (NaOCl, chlorhexidine digluconate, etc.) have been used to disinfect the root canals. When the reported effects of LTAPP on biological and chemical decontamination were taken into consideration, a question arose as to whether it has antimicrobial efficacy in root canals infected with E. faecalis. According to the findings of the present study, LTAPP may constitute a promising aid in endodontics in disinfection of root canals. abstract_id: PUBMED:23130035 Comparison of Antimicrobial Efficacy of IKI and NaOCl Irrigants in Infected Root Canals: An In Vivo Study. Introduction: Effective debridement of the root canal system with chemical irrigants prior to obturation is the key to long-term success of endodontic therapy.
The purpose of this study is to compare the antibacterial activity of 2.5% sodium hypochlorite (NaOCl) and 2% iodine potassium iodide (IKI) solutions as intracanal disinfectant in infected root canals during one-visit endodontic treatment procedure. Materials And Methods: Thirty single-rooted teeth with necrotic pulps in 27 patients were selected according to specific inclusion/exclusion criteria and divided into two random groups. In group I, canals were irrigated with 2.5% NaOCl during instrumentation and in group II canals were initially irrigated with sterile saline during biomechanical preparation and then exposed to a 5-minute final irrigation with 2% IKI. Bacterial samples were taken before treatment (S(1)), and at the end of treatment (S(2)). Mann-Whitney U test was used for analysis. Results: Bacteria were present in all initial samples. NaOCl was able to significantly reduce the number of colony forming units (CFU) from S(1) to S(2) in approximately 90% of canals. Only 15% reductions in CFUs occurred after irrigation/instrumentation in group II; this degree of disinfection was not statistically significant. Conclusion: According to this study, although root canal irrigation with 2.5% NaOCl could not eradicate all bacteria within the canals; it was significantly superior in comparison with 2% IKI use. abstract_id: PUBMED:32101646 The effects of alternate irrigation of root canals with chelating agents and sodium hypochlorite on the effectiveness of smear layer removal. Background: After the mechanical preparation of a root canal, the canal walls are covered with a smear layer. In order to deeply clean the dentinal tubules, removal of the smear layer is recommended. There is no consensus on the length of time of rinsing with chelating agents or irrigation with alternating chelating agents and sodium hypochlorite (NaOCl). Objectives: The aim of the study was to evaluate the effectiveness of smear layer removal using 4 irrigation protocols. Material And Methods: We prepared 42 straight root canals to size ISO40/04 and assigned them into 4 study groups (n = 10) and a control group (n = 2). The root canals were irrigated as follows: in the control group, 180 s with 5.25% NaOCl; in group 1, 60 s with 40% citric acid (CA) and 120 s with NaOCl; in group 2, 120 s with CA and 120 s with NaOCl; in group 3, 30 s CA, 30 s with NaOCl, 30 s CA and 120 s with NaOCl; and in group 4, 60 s with CA, 30 s with NaOCl, 60 s with CA, and 120 s with NaOCl. The roots were split longitudinally and the root canals were observed under ×200-500 magnification. The root canal walls were analyzed in areas 2 mm, 6 mm and 10 mm from the apex. Results: In the apical and medial sections, the best effects were achieved in groups 3 and 4. In coronal sections, no significant differences between experimental groups were found. Conclusions: Within the limitations of this study, it can be concluded that irrigation with alternating NaOCl and CA was the most effective at smear layer removal, regardless of the irrigation time. abstract_id: PUBMED:24506847 Effects of NaOCl, EDTA and MTAD when applied to dentine on stress distribution in post-restored roots with flared canals. Aim: To evaluate the effect of NaOCl, EDTA and MTAD on the stress distribution and levels in roots with flared canals and three different aesthetic post systems using finite element stress analysis (FEA). Methodology: Three-dimensional (3D) FEA models simulating a maxillary incisor with excessive structural loss and flared root canals were created. 
The dentine of the first models of each post group was assumed to be homogeneous, whereas the others were deemed to have their elastic modulus affected up to 100 μm deep as a result of the irrigation protocol (5.25% NaOCl, 17% EDTA and MTAD for 2 h). A sound incisor tooth model was used as the control. Restorations were created according to the post system used: pre-fabricated fibre post (PFP), polyethylene fibre (Ribbond) post and core build-up (RBP), and one-piece milled zirconia post and core (ZP). Ceramic crowns were added to the models. A 300-N static load was applied at the centre of the palatal surface of the models to calculate the stress distributions. The SolidWorks/Cosmosworks structural analysis programmes were used for FEA analysis. Results were presented by considering von Mises criteria. Results: The analysis of the von Mises stresses revealed that RBP created less stress in the remaining root dentine when compared to PFP and ZP. ZP maintained the stresses inside its body and reduced stress on the palatal surface of the root; however, it forwarded more stress towards the apical area. NaOCl-, EDTA- and MTAD-treated dentine increased the stresses within the root structure regardless of the effect of the post system used (11-15.4 MPa for PFP, 9.5-13.02 MPa for RBP and 14.2 MPa for ZP). Amongst the irrigation solutions used, EDTA and MTAD increased the stresses more than NaOCl in all models. All the irrigation solutions showed the same stress levels and distributions in the ZP model. Conclusion: NaOCl-, EDTA- and MTAD-treated dentine and a rigid post with high elastic modulus may increase fracture risk in roots with flared canals by increasing the stresses within root dentine. Therefore, solutions that alter the elastic modulus of dentine less (such as NaOCl) or an individually shaped post-core system constructed with a material that has an elastic modulus close to dentine (polyethylene fibre) should be used in weak roots. abstract_id: PUBMED:16409323 Efficacy of various concentrations of NaOCl and instrumentation techniques in reducing Enterococcus faecalis within root canals and dentinal tubules. Aim: To evaluate the efficacy of 0.5%, 2.5% and 5.25% sodium hypochlorite (NaOCl) as intracanal irrigants associated with hand and rotary instrumentation techniques against Enterococcus faecalis within root canals and dentinal tubules. Methodology: A total of 180 extracted human premolar teeth were infected for 21 days with E. faecalis. The specimens were divided into 12 groups, as follows: group 1: 5.25% NaOCl + Hybrid technique (Valdrighi et al. 1998); group 2: 5.25% NaOCl + nickel-titanium (NiTi) rotary technique 4 mm shorter than the apex (by FOP-UNICAMP); group 3: 5.25% NaOCl + NiTi rotary technique (Hero 642); group 4: 2.5% NaOCl + Hybrid technique; group 5: 2.5% NaOCl + NiTi rotary technique 4 mm shorter than the apex; group 6: 2.5% NaOCl + NiTi rotary technique (Hero 642); group 7: 0.5% NaOCl + Hybrid technique; group 8: 0.5% NaOCl + NiTi rotary technique 4 mm shorter than the apex; group 9: 0.5% NaOCl + NiTi rotary technique (Hero 642); group 10: sterile saline solution + Hybrid technique; group 11: sterile saline solution + NiTi rotary technique 4 mm shorter than the apex; group 12: sterile saline solution + NiTi rotary technique (Hero 642). Canals were sampled before and after preparation. After serial dilution, samples were plated onto brain heart infusion (BHI) agar, and the colony forming units (CFU) that were grown were counted.
The teeth were sectioned into three thirds and dentine chips were removed from the canals with conical burs. The samples obtained with each bur were immediately collected into test tubes containing BHI broth, and were incubated at 37 degrees C and plated onto BHI agar. The CFU were counted and analysed. Results: At all depths and thirds of the root canals and for all techniques used, 5.25% NaOCl was shown to be the most effective irrigant solution tested when dentinal tubules were analysed, followed by 2.5% NaOCl. No differences among concentrations in cleaning the canals were found. Conclusions: Especially at higher concentrations, NaOCl was able to disinfect the dentinal tubules, independent of the canal preparation technique used. abstract_id: PUBMED:23880277 Comparison of efficacy of pulverization and sterile paper point techniques for sampling root canals. Introduction: The purpose of this study was to compare the efficacy of the pulverization and sterile paper point techniques for sampling root canals using 5.25% NaOCl/17% EDTA and 1.3% NaOCl/MTAD (Dentsply, Tulsa, OK) as irrigation regimens. Methods: Single-canal extracted human teeth were decoronated and infected with Enterococcus faecalis. Roots were randomly assigned to 2 irrigation regimens: group A with 5.25% NaOCl/17% EDTA (n = 30) and group B with 1.3% NaOCl/MTAD (n = 30). After chemomechanical debridement, bacterial samplings were taken using sterile paper points and pulverized powder of the apical 5 mm root ends. Results: The sterile paper point technique did not show growth in any samples. The pulverization technique showed growth in 24 of the 60 samples. The Fisher exact test showed significant differences between sampling techniques (P < .001). The sterile paper point technique showed no difference between irrigation regimens. However, 17 of the 30 roots in group A and 7 of the 30 roots in group B resulted in growth as detected by the pulverization technique. Data showed a significant difference between irrigation regimens (P = .03) with the pulverization technique. Conclusions: The pulverization technique was more efficacious in detecting viable bacteria. Furthermore, this technique showed that the 1.3% NaOCl/MTAD regimen was more effective in disinfecting root canals. Answer: Based on the provided abstracts, Carisolv has been tested as an alternative to sodium hypochlorite (NaOCl) for cleaning uninstrumented, immature root canals. The study aimed to test the null hypothesis that Carisolv is no more effective than 1% NaOCl in this context (PUBMED:15946265). The results indicated that while 5% NaOCl was the most effective irrigant for rapid debris removal in immature root canals, Carisolv was found to clean pulp debris from the walls of immature root canals as effectively as 1% NaOCl during static, unrefreshed wall contact for between 20 and 30 minutes. However, the refreshment of 1% NaOCl enhanced its cleaning ability above that of Carisolv (PUBMED:15946265). In comparison to other studies, it is evident that NaOCl, especially at higher concentrations, is effective in disinfecting dentinal tubules and root canals, regardless of the canal preparation technique used (PUBMED:16409323). Additionally, nonagitated NaOCl solutions can remain biologically active in human root canals for extended time periods, with 5.5% NaOCl retaining its activity for more than 100 minutes (PUBMED:24916092).
Other studies have also explored the efficacy of different irrigation protocols and solutions, such as the use of chelating agents and NaOCl for smear layer removal (PUBMED:32101646), and the antimicrobial efficacy of various treatments including NaOCl, ozone, and low-temperature atmospheric pressure plasma (PUBMED:23980743). In conclusion, while Carisolv can be as effective as 1% NaOCl for a certain duration, the evidence suggests that NaOCl, particularly at higher concentrations, remains a highly effective and preferred irrigant for cleaning immature root canals. Therefore, Carisolv could be considered an alternative to NaOCl, but it may not be superior, especially when NaOCl is refreshed during the procedure (PUBMED:15946265).
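Several of the abstracts above rest on ordinal debris scores and rater agreement: PUBMED:15946265 scores canal-wall debris on a 5-point scale and checks internal consistency with kappa statistics. The sketch below shows how Cohen's kappa is computed for two raters scoring on such a scale; the score lists are invented for demonstration and are not the study data.

# Minimal sketch, assuming invented rater scores: Cohen's kappa for two examiners
# scoring canal-wall debris on a 5-point scale.
from collections import Counter

rater1 = [1, 2, 2, 3, 4, 5, 1, 2, 3, 3, 4, 2]
rater2 = [1, 2, 3, 3, 4, 5, 1, 2, 3, 2, 4, 2]
n = len(rater1)

# Observed proportion of exact agreement
observed = sum(a == b for a, b in zip(rater1, rater2)) / n

# Chance-expected agreement from the two raters' marginal score distributions
c1, c2 = Counter(rater1), Counter(rater2)
expected = sum((c1[k] / n) * (c2[k] / n) for k in set(c1) | set(c2))

kappa = (observed - expected) / (1 - expected)
print(f"observed agreement = {observed:.2f}, kappa = {kappa:.2f}")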
Instruction: Therapeutic value of lymph node dissection in advanced gastric cancer with macroscopic duodenum invasion: is the posterior pancreatic head lymph node dissection beneficial? Abstracts: abstract_id: PUBMED:19224285 Therapeutic value of lymph node dissection in advanced gastric cancer with macroscopic duodenum invasion: is the posterior pancreatic head lymph node dissection beneficial? Background: In advanced gastric cancer (AGC) with duodenum invasion, the posterior pancreatic lymph nodes are susceptible to metastasis because of their proximity to the duodenum. The therapeutic value of lymph node dissection in this area for AGC with macroscopic duodenum invasion remains unclear. Methods: Patients who had undergone curative gastrectomy for lower-third AGC from 1970 to 2004 at the Cancer Institute Hospital were recruited for this study. Clinicopathological data were collected retrospectively, and compared between cases of AGC with duodenum invasion (AGC-DI group) and AGC without duodenum invasion (AGC-nDI group). In the AGC-DI group, the therapeutic value of lymph node dissection was evaluated using a therapeutic index (multiplication of the frequency of metastasis to the station by the 5-year survival rate of patients with metastasis to that station). Results: The AGC-DI group generally had tumors of higher pathological stage, which might account for the poorer 5-year survival rate compared with that of the AGC-nDI group (50.1% versus 68.5%; P = 0.0002). The incidence of lymph node metastasis was higher in the AGC-DI group than that in the AGC-nDI group, including nodes in the posterior pancreatic head (23.9% versus 7.0%, P &lt; 0.0001). In the AGC-DI group, posterior pancreatic head lymph node dissection was of therapeutic value (4.19) equivalent to dissection of second-tier lymph nodes. Conclusions: The dissection of posterior pancreatic head lymph nodes might be effective in AGC with macroscopic duodenum invasion since this has therapeutic value equivalent to that of second-tier lymph node dissection and might improve patients' long-term outcomes. abstract_id: PUBMED:30275748 Therapeutic Value of Lymph Node Dissection Along the Superior Mesenteric Vein and the Posterior Surface of the Pancreatic Head in Gastric Cancer Located in the Lower Third of the Stomach. Background: Therapeutic value of lymph node dissection along the superior mesenteric vein (14v) and the posterior surface of the pancreatic head (13) remains unclear in gastric cancer patients. Methods: We reviewed 355 patients with advanced gastric cancer in the lower third of the stomach who had undergone gastrectomy at our hospital. Results: The frequency of lymph node (LN) metastasis was 10.2% and 7.4% in stations 13 and 14v, respectively. The frequency of station 13 metastasis was 26.8% for T3/T4 tumors with group 2 LNs metastasis and 1.4% for all other tumors. The frequency of station 14v metastasis was 22.2% for T3/T4 tumors with group 2 LNs metastasis and 1.8% for all other tumors. The therapeutic values for dissecting LN stations 13 and 14v were 1.9 and 0.9, respectively, similar to the therapeutic value for group 2 LN dissection. Conclusion: Because metastasis to stations 13 and 14v occurs frequently in patients with T3/T4 gastric cancer located in the lower third of the stomach who also have metastasis to group 2 LNs, stations 13 and 14v should be dissected in these patients. abstract_id: PUBMED:32269841 Advantages of Splenic Hilar Lymph Node Dissection in Proximal Gastric Cancer Surgery. 
Gastrectomy with lymph node dissection remains the gold standard for curative treatment of gastric cancer. Dissection of splenic hilar lymph nodes has been included as a part of D2 lymph node dissection for proximal gastric cancer. Previously, pancreatico-splenectomy has been performed for dissecting splenic hilar lymph nodes, followed by pancreas-preserving splenectomy and spleen-preserving lymphadenectomy. However, the necessity of routine splenectomy or splenic hilar lymph node dissection has been under debate due to the increased morbidity caused by splenectomy and the poor prognostic feature of splenic hilar lymph node metastasis. In contrast, the relatively high incidence of splenic hilar lymph node metastasis, survival advantage, and therapeutic value of splenic hilar lymph node dissection in some patient subgroups, as well as the effective use of novel technologies, still supports the necessity and applicability of splenic hilar lymph node dissection. In this review, we aimed to evaluate the need for splenic hilar lymph node dissection and suggest the subgroup of patients with favorable outcomes. abstract_id: PUBMED:25992202 Lymph node dissection for gastric cancer: a critical review. Gastric cancer is one of the most common neoplasms and an important cause of cancer-related death worldwide. Efforts to reduce its high mortality rates are currently focused on multidisciplinary management. However, surgery remains a cornerstone in the management of patients with resectable disease. There is still some controversy as to the extent of lymph node dissection for potentially curable stomach cancer. Surgeons in eastern countries favor more extensive lymph node dissection, whereas those in the West favor less extensive dissection. Thus, extent of lymph node dissection remains one of the most hotly discussed aspects of gastric surgery, particularly because most stomach cancers are now often comprehensively treated by adding some perioperative chemotherapy or chemo-radiation. We provide a critical review of lymph nodes dissection for gastric cancer with a particular focus on its benefits in a multimodal approach. abstract_id: PUBMED:32913854 Combination of endoscopic submucosal dissection and laparoscopic sentinel lymph node dissection in early mucinous gastric cancer: Role of lymph node metastasis. Background: Recent evidence showed that combining endoscopic submucosal dissection (ESD) and laparoscopic sentinel lymph node dissection may avoid unnecessary gastrectomy in treating early mucinous gastric cancer (EMGC) patients with risks of positive lymph node metastasis (pLNM). Aim: To explore the predictive factors for pLNM in EMGC, and to optimize the clinical application of combing ESD and sentinel lymph node dissection in a proper subgroup of patients with EMGC. Methods: Thirty-one patients with EMGC who had undergone gastrectomy with lymph node dissection were consecutively enrolled from January 1988 to December 2016. Univariate and multivariate logistic regression analyses were used to estimate the association between the rates of pLNM and clinicopathological factors, providing odds ratio (OR) with 95% confidence interval. And the association between the number of predictors and the pLNM rate was also investigated. 
Results: Depth of invasion (OR = 7.342, 1.127-33.256, P = 0.039), tumor diameter (OR = 9.158, 1.348-29.133, P = 0.044), and lymphatic vessel involvement (OR = 27.749, 1.821-33.143, P = 0.019) turned out to be significant and might be the independent risk factors for predicting pLNM in the multivariate analysis. For patients with 1, 2, and 3 risk factors, the pLNM rates were 9.1%, 33.3%, and 75.0%, respectively. pLNM was not detected in seven patients without any of these risk factors. Conclusion: ESD might serve as a safe and sufficient treatment for intramucosal EMGC if tumor size ≤ 2 cm, and when lymphatic vessel involvement is absent by postoperative histological examination. Combining ESD and sentinel lymph node dissection could be recommended as a safe and effective treatment for EMGC patients with a potential risk of pLNM. abstract_id: PUBMED:11452825 Value of extended lymph node dissection in the treatment of gastric cancer. Regional lymph node metastasis is a critical prognostic factor in gastric cancer, and extended lymph node dissection and routine microscopic examination of all resected nodes could potentially provide accurate information regarding lymph node status. On the other hand, the therapeutic value of extended lymph node dissection is controversial. While retrospective and prospective non-randomised comparative studies have shown that extended lymph node dissection significantly improves the survival rate, prospective randomised trials have failed to demonstrate the efficacy of extended dissection, although the number of patients in these studies was limited. The present review of the literature therefore considers the prognostic significance of regional lymph node metastases and the therapeutic efficacy of lymph node dissection performed for gastric cancer. abstract_id: PUBMED:37717100 Splenic hilar lymph node dissection enhances survival in Borrmann type 4 gastric cancer. Splenic hilar (no.10) lymph node dissection during total gastrectomy is no longer recommended for advanced proximal gastric cancer. However, the treatment efficacy of no.10 lymph node dissection in Borrmann type 4 tumors remains unclear. We enrolled 539 patients who underwent total gastrectomy for Borrmann type 4 tumors between 2006 and 2016 in four major institutions in Korea. We compared the long-term survival of the no.10 lymph node dissection (n = 309) and no-dissection groups (n = 230) using the propensity score (inverse probability of treatment weighting). The treatment effects of no.10 lymph node dissection were estimated in the weighted sample using the Cox proportional hazards regression model with a robust sandwich-type variance estimator. After inverse probability of treatment weighting, there were 540.4 patients in the no.10 lymph node dissection group and 532.7 in the no-dissection group. The two groups showed well-balanced baseline characteristics, including tumor node metastasis stage. The 5-year survival rates in the no.10 lymph node dissection and no-dissection groups were 45.7% and 38.6%, respectively (log-rank p = 0.036, hazard ratio 0.786, 95% confidence interval 0.630-0.982). Multivariate analysis revealed that no.10 lymph node dissection was an independent favorable prognostic factor (adjusted hazard ratio 0.747, 95% confidence interval 0.593-0.940) after adjusting for other prognostic factors. Sensitivity analyses in other inverse probability of treatment weighting models and the propensity score matching model showed similar results.
Patients undergoing no.10 lymph node dissection showed improved survival compared to those without. No.10 lymph node dissection is recommended during total gastrectomy for patients with Borrmann type 4 gastric cancer. abstract_id: PUBMED:36568948 Clinical value of regional lymph node sorting in gastric cancer. Background: Increasing evidence has shown that regional lymph node metastasis is a critical prognostic factor in gastric cancer (GC). In addition, lymph node dissection is a key factor in determining the appropriate treatment for GC. However, the association between the number of positive lymph nodes and area of lymph node metastasis in GC remains unclear. Aim: To investigate the clinical value of regional lymph node sorting after radical gastrectomy for GC. Methods: This study included 661 patients with GC who underwent radical gastrectomy at Tianjin Medical University General Hospital between January 2012 and June 2020. The patients were divided into regional sorting and non-sorting groups. Clinicopathological data were collected and retrospectively reviewed to determine the differences in the total number of lymph nodes and number of positive lymph nodes between the groups. Independent sample t-tests were used for intergroup comparisons. Continuous variables that did not conform to a normal distribution were expressed as median (interquartile range), and the Mann-Whitney U test was used for inter-group comparisons. Results: There were no significant differences between the groups in terms of the surgical method, tumor site, immersion depth, and degree of differentiation. The total number of lymph nodes was significantly higher in the regional sorting group (n = 324) than in the non-sorting group (n = 337) (32.5 vs 21.2, P < 0.001). There was no significant difference in the number of positive lymph nodes between the two groups. A total of 212 patients with GC had lymph node metastasis in the lymph node regional sorting group, including 89 (41.98%) cases in the first dissection station and 123 (58.02%) cases in the second dissection station. Binary and multivariate logistic regression results showed that the number of positive lymph nodes (P < 0.001) was an independent risk factor for lymph node metastases at the second dissection station. Conclusion: Regional sorting of lymph nodes after radical gastrectomy may increase the number of detected lymph nodes, thereby improving the reliability and accuracy of lymph node staging in clinical practice. abstract_id: PUBMED:23997542 Spleen-preserving splenic lymph node dissection in radical total gastrectomy. Radical gastrectomy has been recognized as the standard surgical treatment for advanced gastric cancer, and essentially applied in a wide variety of clinical settings. The thoroughness of lymph node dissection is an important prognostic factor for patients with advanced gastric cancer. Splenic lymph node dissection is required during D2 radical gastrectomy for upper stomach cancer. In the past few decades, this has often been accompanied by removal of the spleen. A growing number of investigators believe, however, that the spleen plays an important role as an immune organ, and thus they encourage the application of a spleen-preserving method for splenic hilum lymph node dissection. abstract_id: PUBMED:38249812 Laparoscopic D2+ lymph node dissection in patients with obesity and gastric cancer: A retrospective study.
D2 lymph node dissection is widely used in laparoscopic radical gastrectomy for gastric cancer, and its efficacy and safety are known for patients with obesity. Currently, D2+ lymph node dissection is also applied to certain patients with gastric cancer of later stages. Due to the high difficulty of D2+ surgery, it is more challenging to perform on patients with obesity. There is currently limited research on the efficacy and safety of D2+ surgery in obese patients with gastric cancer. The present study aimed to retrospectively analyze the clinical data of patients undergoing laparoscopic radical gastrectomy for gastric cancer admitted to a single gastroenterology department. Patients with a body mass index ≥25 kg/m2 were included in the study. A total of 149 patients were selected as the research subjects and divided into two groups. The observation group comprised 74 patients who underwent D2+ lymph node dissection, while the control group comprised 75 patients who underwent standard D2 lymph node dissection. The surgical performance, postoperative recovery and postoperative complications of the two groups were compared. The results showed that the rates of conversion to open surgery in the D2+ and D2 groups were 5.4% (4/74) and 2.7% (2/75), respectively, and were not significantly different. The duration of surgery in the D2+ group (282.55±23.02 min) was significantly longer than that in the D2 group (271.45±20.05 min). The mean number of lymph node dissections in the D2+ group was 28.57±7.19, which was significantly higher than that in the D2 group (25.29±6.41). No statistically significant differences in intraoperative blood loss, time to first flatus, postoperative hospitalization days, total hospitalization expenses or postoperative complications were detected between the two groups. There were no deaths in either group within the 30-day perioperative period. In addition, there was no significant difference in the 3-year overall survival rate between the two groups, while the 5-year overall survival rate of the D2+ group was significantly higher than that of the D2 group. For obese patients with gastric cancer, D2+ surgery may increase the duration of surgery and slightly increase intraoperative blood loss compared with standard D2 radical surgery, but does not increase the incidence of postoperative complications. Moreover, D2+ surgery increases the number of lymph node dissections and improves the 5-year survival rate of patients. Therefore, it may be concluded that laparoscopic D2+ lymph node dissection is safe and feasible for obese patients with gastric cancer. Answer: The therapeutic value of lymph node dissection in advanced gastric cancer (AGC) with macroscopic duodenum invasion, particularly concerning the posterior pancreatic head lymph nodes, has been evaluated in several studies. In one study, patients with AGC with duodenum invasion (AGC-DI group) were found to have a higher incidence of lymph node metastasis, including nodes in the posterior pancreatic head, compared to those without duodenum invasion (AGC-nDI group). The therapeutic value of posterior pancreatic head lymph node dissection in the AGC-DI group was found to be equivalent to that of second-tier lymph node dissection, suggesting that it might be effective and could improve long-term outcomes for these patients (PUBMED:19224285).
Another study reviewed patients with advanced gastric cancer located in the lower third of the stomach and found that the frequency of lymph node metastasis to stations 13 (posterior surface of the pancreatic head) and 14v (along the superior mesenteric vein) was higher in patients with T3/T4 tumors who also had group 2 lymph nodes metastasis. The therapeutic values for dissecting these lymph node stations were similar to that for group 2 lymph node dissection, indicating that dissection of stations 13 and 14v should be considered in these patients (PUBMED:30275748). These findings suggest that in the context of AGC with macroscopic duodenum invasion, dissection of posterior pancreatic head lymph nodes can be beneficial and holds therapeutic value. It is important to note that the decision to perform such dissections should be based on the individual patient's disease characteristics and the presence of metastasis to relevant lymph node stations.
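The therapeutic index used in the two studies above (PUBMED:19224285, PUBMED:30275748) is a simple product of two station-level quantities, so it can be illustrated with a few lines of arithmetic. The sketch below is illustrative only: the metastasis frequencies are taken from the abstracts, while the station-specific 5-year survival figures are back-calculated placeholders chosen to reproduce the reported indices, not values reported by the studies.

```python
# Hedged sketch of the "therapeutic index" of lymph node dissection:
# index = frequency of metastasis to a station x 5-year survival rate of
# patients with metastasis to that station, reported on a 0-100 scale.
def therapeutic_index(metastasis_rate: float, five_year_survival: float) -> float:
    """Both arguments are fractions between 0 and 1."""
    return metastasis_rate * five_year_survival * 100.0

stations = {
    # station: (metastasis frequency from the abstracts, assumed survival fraction)
    "posterior pancreatic head, AGC-DI (PUBMED:19224285)": (0.239, 0.175),  # index ~4.19
    "station 13, lower-third tumors (PUBMED:30275748)":    (0.102, 0.186),  # index ~1.9
    "station 14v, lower-third tumors (PUBMED:30275748)":   (0.074, 0.122),  # index ~0.9
}

for station, (freq, survival) in stations.items():
    print(f"{station}: therapeutic index ~ {therapeutic_index(freq, survival):.2f}")
```

A station whose index is comparable to that of second-tier nodes, as in the first study, is the arithmetic basis for recommending its dissection.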
Instruction: Does implementing pulse oximetry in a critical care unit result in substantial arterial blood gas savings? Abstracts: abstract_id: PUBMED:8339646 Does implementing pulse oximetry in a critical care unit result in substantial arterial blood gas savings? Objectives: To examine the impact of pulse oximetry on the use of arterial blood gas and other laboratory determinations and to examine predictors of the use of arterial blood gas measurements. Design: Before (preoximetry)/after (postoximetry) study. Setting: Thirty-bed multidisciplinary critical care unit. Patients: Consecutive admissions of 300 patients (150 before and 150 after oximetry). Measurements: For each patient examined, the number of arterial blood gas determinations, serum electrolyte levels, complete blood chemistries, arterial lactate levels, and creatinine samples were recorded for the initial 9 days of the stay in the critical care unit. These data were stratified by nursing shift (day vs night) and by the source of the admission (medical vs surgical). Other information collected included demographic variables, the severity of illness, the length of stay in the critical care unit, and various ventilatory parameters. Results: Introducing pulse oximetry was associated with a marginal (10.3 percent; p < 0.025) reduction in the use of arterial blood gas determinations. This decrease was accounted for by changes occurring on the night shift and in the surgical patient. These findings were also observed for serum electrolyte determinations. No significant differences in the use of arterial blood gas measurements were found for medical patients. No significant differences were found in the use of arterial lactate levels, complete blood chemistries, or creatinine determinations. Significant predictors of arterial blood gas determinations included the number of days intubated, the number of ventilator orders, the number of days on an inspired oxygen content (FIO2) greater than 50 percent, and the acute physiology and chronic health evaluation II (APACHE II) score. Conclusions: The implementation of pulse oximetry in this manner gives an idea of how effective the technology will be in reducing the use of arterial blood gas determinations without guidelines for the use of pulse oximetry. As only a marginal decrease was observed in the total population of medical and surgical patients, and only on the night shift, formal and standardized guidelines for the most efficient use of pulse oximetry should be considered. If these were considered, pulse oximetry may indeed make a significant contribution to improving the efficiency of care services. abstract_id: PUBMED:34504355 Study of Oxygen Saturation by Pulse Oximetry and Arterial Blood Gas in ICU Patients: A Descriptive Cross-sectional Study. Introduction: Pulse oximetry is expected to provide an indirect estimate of arterial oxygen saturation. However, there are often gaps between SpO2 and SaO2. This study aims to compare arterial oxygen saturation measured by pulse oximetry and arterial blood gas among patients admitted to the intensive care unit. Methods: It was a hospital-based descriptive cross-sectional study in which 101 patients meeting inclusion criteria were studied. SpO2 and SaO2 were measured simultaneously. Mean±SD of SpO2 and SaO2 with accuracy, sensitivity and specificity were measured. Results: According to SpO2 values, out of 101 patients, 26 (25.7%) were hypoxemic and 75 (74.25%) were non-hypoxemic.
The mean±SD of SaO2 and SpO2 were 93.22±7.84% and 92.85±6.33%, respectively. In 21 patients with SpO2 less than 90%, the mean±SD of SaO2 and SpO2 were 91.63±4.92 and 87.42±2.29, respectively. In 5 patients with SpO2 less than 80%, the mean±SD of SaO2 and SpO2 were 63.40±3.43 and 71.80±4.28, respectively. In the non-hypoxemic group based on SpO2 values, the mean±SD of SpO2 and SaO2 were 95.773±2.19% and 95.654±3.01%, respectively. The agreement rate of SpO2 and SaO2 was 83.2%, and the sensitivity and specificity of pulse oximetry were 84.6% and 83%, respectively. Conclusions: Pulse oximetry has high accuracy in estimating oxygen saturation when SpO2 > 90% and can be used instead of arterial blood gas. abstract_id: PUBMED:2328637 Increasing clinical use of pulse oximetry. Changing the system from measuring blood gases through invasive measures to using noninvasive pulse oximetry is a challenge in the Critical Care Unit where invasive techniques are taken for granted. The authors report a project that was successful in increasing the use of noninvasive monitoring techniques by critical care nurses in a Surgical Intensive Care Unit. This clinical project became an important aspect of incorporating the staff in a change to more extensive use of pulse oximetry. This study defines the change in nursing practice with the use of pulse oximetry. The authors discuss three areas: (1) demonstration of the correlation between O2Sat as measured by the pulse oximeter and arterial blood gas saturations; (2) introduction of the pulse oximeter as a reliable alternative to ABGs when monitoring oxygenation; and (3) the establishment of guidelines for using pulse oximetry within the clinical setting. abstract_id: PUBMED:37637606 Comparative Analysis of Oxygen Saturation by Pulse Oximetry and Arterial Blood Gas in Hypoxemic Patients in a Tertiary Care Hospital. Introduction: Oxygen saturation is essential for medical care and is closely regulated within the body. Arterial blood gas (ABG) analysis is used to evaluate critically ill individuals' ventilation, oxygenation, acid-base status, and metabolic condition. Pulse oximetry is an easy and non-invasive way to measure oxygen saturation in clinical settings; it provides a quick and precise assessment of oxygenation and reduces medical errors. SpO2 may not always be a reliable predictor of arterial oxygen saturation (SaO2), and hypoxemic, hemodynamically compromised, and critically ill patients may have lower SpO2 accuracy. A study is needed to assess and compare various oxygen saturation methods. Aims And Objectives: The study aimed to compare the oxygen saturation levels measured by pulse oximetry and ABG analysis in hypoxemic patients. The objectives were to compare the SaO2, PaO2, and SpO2 values obtained from the patients, and to correlate the study parameters between both techniques. Materials And Methods: The study was conducted from February 2021 to June 2022 among the 102 hypoxemic patients who were admitted to the emergency and surgical intensive care unit (ICU) of Sree Balaji Medical College and Hospital in Chennai. Primary data on ABG analysis and pulse oximetry readings were collected from the study subjects. The patient and their past medical records, physical exam, chest x-ray findings, pulse oximetry, and ABG results were all reviewed. Each patient had their ABG and pulse oximetry measured simultaneously.
A comparison was made between the SpO2, partial pressure of oxygen (PaO2), and arterial oxygen saturation (SaO2) parameters using a paired t-test. The correlation was done against the SpO2 and ABG parameters and assessed for association using the correlation coefficient value; gender was also considered while correlating. Results And Discussion: An observational study was done among 102 study samples to comparatively analyze the oxygen saturation by two methods, namely pulse oximetry and ABG, in hypoxemic patients. While comparing the mean values of SaO2 and SpO2, they were 84.41 ± 4.24 and 80.58 ± 5.77, respectively, and this difference was statistically very significant (p < 0.001). While comparing the mean values of PaO2 and SaO2, they were 61.02 ± 5.01 and 84.41 ± 4.24, respectively, and this difference was statistically significant (p = 0.043). While comparing the mean values of PaO2 and SpO2, they were 61.02 ± 5.01 and 80.58 ± 5.77, respectively, and this difference was statistically significant (p = 0.054). Among the study population, with regard to the correlation factor, there is notably a very high and strong positive correlation between SaO2 and SpO2 and between SpO2 and PaO2. There was a negative correlation between SpO2 and finger abnormalities and between SpO2 and blood pressure. Conclusion: The ABG method is considered the gold standard. When SpO2 levels fall below 90%, pulse oximetry may not be accurate enough to reliably assess oxygenation. In such cases, where alveolar hypoventilation is suspected, it is recommended to complement pulse oximetry with ABG studies. This is because ABG analysis provides a more comprehensive assessment of oxygenation and acid-base status, which can aid in the diagnosis and management of respiratory conditions. abstract_id: PUBMED:24362831 Digital and forehead pulse oximetry in Intensive Care Unit. A comparison. Pulse oximetry is now a widely available technology that provides easy and continuous reliable measurements of oxygen saturation, the percentage of O2 bound to Hb (SpO2). Pulse oximetry has become a standard monitoring technique not only in intensive care, but also in other clinical settings, although several factors can interfere with a correct reading, such as low perfusion conditions, including low cardiac output, vasoconstriction and hypothermia. We performed a prospective observational study in two groups of twenty-eight patients admitted to the ICU, in which we compared the values obtained (SpO2) with digital and forehead pulse oximetry with arterial oxygen saturation (SaO2) obtained from blood gas analysis (EGA). Comparing measurements in two distinct, though homogeneous, groups might be a limitation of the study, but at the same time it provides a representative picture of a polyvalent intensive care unit. The result was that the values obtained with both digital and frontal pulse oximetry are accurate and reliable in comparison to EGA. In situations of hypoxia, the reliability of frontal pulse oximetry appears superior compared to digital oximetry. abstract_id: PUBMED:27283009 Correlation of Venous Blood Gas and Pulse Oximetry With Arterial Blood Gas in the Undifferentiated Critically Ill Patient. Rationale: Blood gas analysis is often used to assess acid-base, ventilation, and oxygenation status in critically ill patients. Although arterial blood gas (ABG) analysis remains the gold standard, venous blood gas (VBG) analysis has been shown to correlate with ABG analysis and has been proposed as a safer, less invasive alternative to ABG analysis.
Objective: The purpose of this study was to evaluate the correlation of VBG analysis plus pulse oximetry (SpO2) with ABG analysis. Methods: We performed a prospective cohort study of patients in the emergency department (ED) and intensive care unit (ICU) at a single academic tertiary referral center. Patients were eligible for enrollment if the treating physician ordered an ABG. Statistical analysis of VBG, SpO2, and ABG data was done using paired t test, Pearson χ2, and Pearson correlation. Main Results: There were 156 patients enrolled, and 129 patients completed the study. Of the patients completing the study, 53 (41.1%) were in the ED, 41 (31.8%) were in the medical ICU, and 35 (27.1%) were in the surgical ICU. The mean difference for pH between VBG and ABG was 0.03 (95% confidence interval: 0.03-0.04) with a Pearson correlation of 0.94. The mean difference for pCO2 between VBG and ABG was 4.8 mm Hg (95% confidence interval: 3.7-6.0 mm Hg) with a Pearson correlation of 0.93. The SpO2 correlated well with PaO2 (the partial pressure of oxygen in arterial blood) as predicted by the standard oxygen-hemoglobin dissociation curve. Conclusion: In this population of undifferentiated critically ill patients, pH and pCO2 on VBG analysis correlated with pH and pCO2 on ABG analysis. The SpO2 correlated well with pO2 on ABG analysis. The combination of VBG analysis plus SpO2 provided accurate information on acid-base, ventilation, and oxygenation status for undifferentiated critically ill patients in the ED and ICU. abstract_id: PUBMED:10752818 Evidence for the need of bedside accuracy of pulse oximetry in an intensive care unit. Objective: To compare pulse oximetry saturation (SpO2) with arterial blood gas saturation (SaO2) obtained during clinical routine to determine the optimal lowest reliable value of SpO2 in ventilator-dependent patients before setting up a nurse-directed protocol of FIO2 titration. Design: Prospective clinical study. Setting: Surgical intensive care unit in a university hospital. Patients: Thirty-three patients with a pulse oximeter probe in whom arterial blood gas was measured with a radial artery line. Interventions: SpO2 was recorded by the nurses and compared with SaO2 obtained by blood gas analysis with a co-oximeter. Two sensors currently used in our surgical intensive care unit and connected to a monitor (HP OmniCare M1165/66A; Hewlett Packard, Andover, MA) were tested. In group I, the Durensor DS 100A (Nellcor Puritan Bennett, Pleasanton, CA), a reusable sensor, was used. In group II, the Oxisensor D25L (Nellcor Puritan Bennett), a nonreusable sensor, was used. Measurements And Main Results: In group I, 64 data pairs were obtained. In this group, SaO2 ranged from 87 to 98% and SpO2 ranged from 92 to 100%. The bias was -1.90% and the limits of agreement ranged from -5.56 to 1.76%. In group II, 47 data pairs were obtained. In this group, SaO2 ranged from 87 to 99% and SpO2 ranged from 92 to 100%. The bias was -2.49% and the limits of agreement ranged from -6.62 to 1.64%. Conclusions: In the range of SaO2 tested, regardless of the sensor used, SpO2 overestimated SaO2. Large limits of agreement were found. Based on this result, the authors concluded that before defining a nurse-directed protocol of FIO2 titration with SpO2, the material used daily must be evaluated. A minimum threshold SpO2 value of 96% in both groups I and II is more reliable to ensure SaO2 ≥ 90%. abstract_id: PUBMED:9104146 Correlation of pulse oximetry and co-oximetry.
Pulse oximetry is an important advancement in noninvasive monitoring and assessment of arterial blood oxygenation. The purpose of this study was to assess the correlation of arterial oxygen saturation (SaO2) and pulse oximetry (SpO2) in critically ill patients using the ear and finger probes. Arterial oxygen saturation was measured simultaneously by pulse oximetry with a finger and ear probe and an arterial blood gas sample. Results indicated that SpO2 does trend SaO2 but it appears that SpO2 measured by the finger probe is more closely correlated to SaO2 than SpO2 measured with an ear probe in this study sample. abstract_id: PUBMED:2117192 Pulse oximetry versus arterial blood gas specimens in long-term oxygen therapy. Portable pulse oximeters are now widely available for the assessment of arterial oxygenation, and the U.S. Medicare program considers saturation readings to be acceptable substitutes for arterial PO2 in selecting patients for long-term oxygen therapy (LTOT). Current oximeters are reasonably accurate (plus or minus 4 or 5 percent of the co-oximetry value), but the clinician should be aware of several potential problems. Readings may be inaccurate in the presence of hemodynamic instability, carboxyhemoglobinemia, jaundice, or dark skin pigmentation, and also during exercise. Indicated saturation may substantially overestimate arterial PO2 if the patient is alkalemic. Pulse oximetry cannot detect hypercapnia or acidosis. For these and other reasons, pulse oximetry should not be used in initial selection of patients for LTOT, as a substitute for arterial blood gas analysis in the evaluation of patients with undiagnosed respiratory disease, during formal cardiopulmonary exercise testing, or in the presence of an acute exacerbation. Pulse oximetry is an important addition to the clinician's armamentarium, however, for titrating the oxygen dose in stable patients, in assessing patients for desaturation during exercise, for sleep studies, and for in-home monitoring. abstract_id: PUBMED:34031858 Modeling the Cost Savings of Continuous Pulse Oximetry and Capnography Monitoring of United States General Care Floor Patients Receiving Opioids Based on the PRODIGY Trial. Introduction: Despite the high incidence of respiratory depression on the general care floor and evidence that continuous monitoring improves patient outcomes, the cost-benefit of continuous pulse oximetry and capnography monitoring of general care floor patients remains unknown. This study modeled the cost and length of stay savings, investment break-even point, and likelihood of cost savings for continuous pulse oximetry and capnography monitoring of general care floor patients at risk for respiratory depression. Methods: A decision tree model was created to compare intermittent pulse oximetry versus continuous pulse oximetry and capnography monitoring. The model utilized costs and outcomes from the PRediction of Opioid-induced respiratory Depression In patients monitored by capnoGraphY (PRODIGY) trial, and was applied to a modeled cohort of 2447 patients receiving opioids per median-sized United States general care floor annually. Results: Continuous pulse oximetry and capnography monitoring of high-risk patients is projected to reduce annual hospital cost by $535,531 and cumulative patient length of stay by 103 days. A 1.5% reduction in respiratory depression would achieve a break-even investment point and justify the investment cost. 
The probability of cost saving is ≥ 80% if respiratory depression is decreased by ≥ 17%. Expansion of continuous monitoring to high- and intermediate-risk patients, or to all patients, is projected to reach a break-even point when respiratory depression is reduced by 2.5% and 3.5%, respectively, with a ≥ 80% probability of cost savings when respiratory depression decreases by ≥ 27% and ≥ 31%, respectively. Conclusion: Compared to intermittent pulse oximetry, continuous pulse oximetry and capnography monitoring of general care floor patients receiving opioids has a high chance of being cost-effective. Trial Registration: www.clinicaltrials.gov , Registration ID: NCT02811302. Answer: Implementing pulse oximetry in a critical care unit has been associated with a marginal reduction in the use of arterial blood gas (ABG) determinations, particularly on the night shift and in surgical patients, but not in medical patients (PUBMED:8339646). Pulse oximetry has been shown to have high accuracy in estimating oxygen saturation when SpO2 is greater than 90%, suggesting it can be used instead of ABG in certain conditions (PUBMED:34504355). However, when SpO2 levels fall below 90%, pulse oximetry may not be accurate enough to reliably assess oxygenation, and ABG studies are recommended to provide a more comprehensive assessment (PUBMED:37637606). Studies have demonstrated that pulse oximetry is a reliable alternative to ABG when monitoring oxygenation, and guidelines for its use have been established in clinical settings (PUBMED:2328637). In hypoxemic patients, there is a strong positive correlation between SaO2 and SpO2, and between SpO2 and PaO2, but pulse oximetry may not always be a reliable predictor of arterial oxygen saturation (SaO2) in certain patient populations (PUBMED:37637606). In the ICU, both digital and forehead pulse oximetry have been found to be accurate and reliable compared to ABG, with forehead pulse oximetry showing superior reliability in situations of hypoxia (PUBMED:24362831). Additionally, venous blood gas (VBG) analysis plus pulse oximetry (SpO2) has been shown to provide accurate information on acid-base, ventilation, and oxygenation status, correlating well with ABG analysis (PUBMED:27283009). Despite the potential for pulse oximetry to reduce the need for ABG, it is important to note that pulse oximetry cannot detect hypercapnia or acidosis and may be inaccurate in the presence of certain conditions (PUBMED:2117192). Therefore, while pulse oximetry can lead to some savings in ABG use, it is not a complete substitute for ABG analysis in all clinical scenarios. The implementation of pulse oximetry should be accompanied by formal and standardized guidelines to maximize its efficiency and effectiveness in reducing ABG use (PUBMED:8339646).
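Several abstracts above summarize SpO2-SaO2 agreement as a bias with limits of agreement (for example, PUBMED:10752818 reports a bias of -1.90% with limits of -5.56% to 1.76% for one sensor). A minimal Bland-Altman-style sketch of that calculation is shown below; the paired readings are invented for illustration and are not data from the cited studies.

```python
# Bland-Altman-style agreement sketch for paired SaO2 (co-oximeter) and
# SpO2 (pulse oximeter) readings. The readings below are invented examples.
from statistics import mean, stdev

sao2 = [97.0, 95.0, 93.0, 90.0, 88.0, 96.0, 92.0, 94.0]  # arterial blood gas (%)
spo2 = [98.0, 96.0, 95.0, 93.0, 91.0, 97.0, 94.0, 95.0]  # pulse oximetry (%)

diffs = [a - p for a, p in zip(sao2, spo2)]  # SaO2 - SpO2 for each pair
bias = mean(diffs)                           # mean difference (negative: SpO2 overestimates)
spread = stdev(diffs)                        # standard deviation of the differences
lower, upper = bias - 1.96 * spread, bias + 1.96 * spread  # 95% limits of agreement

print(f"bias = {bias:.2f}%, limits of agreement = [{lower:.2f}%, {upper:.2f}%]")
```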
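The PRODIGY-based cost model (PUBMED:34031858) rests on break-even arithmetic: the monitoring investment is justified once enough respiratory depression events are prevented. The sketch below only illustrates the shape of that calculation; the cohort size comes from the abstract, while the per-patient monitoring cost and per-event excess cost are placeholder assumptions, not figures from the trial.

```python
# Back-of-the-envelope break-even sketch for continuous monitoring of a
# general care floor cohort. Only the cohort size is taken from the abstract;
# the two cost inputs are assumptions for illustration.
patients_per_year = 2447             # modeled cohort size (PUBMED:34031858)
monitoring_cost_per_patient = 50.0   # assumed cost of monitoring one patient (USD)
excess_cost_per_event = 8000.0       # assumed excess hospital cost of one event (USD)

annual_monitoring_cost = patients_per_year * monitoring_cost_per_patient
# Absolute reduction in event rate at which avoided-event savings equal the investment:
break_even_reduction = monitoring_cost_per_patient / excess_cost_per_event

print(f"annual monitoring cost: ${annual_monitoring_cost:,.0f}")
print(f"break-even absolute reduction in respiratory depression: {break_even_reduction:.2%}")
```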
Instruction: High levels of grass pollen inside European dairy farms: a role for the allergy-protective effects of environment? Abstracts: abstract_id: PUBMED:19220219 High levels of grass pollen inside European dairy farms: a role for the allergy-protective effects of environment? Background: There is evidence of an allergy-protective effect in children raised on farms. It has been assumed that microbial exposure may confer this protection. However, on farms, little attention has been given to the pollen level and to concomitant microbiological exposure, and indoor pollen concentrations have never been precisely quantified. Methods: The kinetics of pollen in dairy farms have been studied in a pilot study (n = 9), and exposure in a sub-sample of the ongoing European birth cohort PASTURE (n = 106). Measurements of viable microorganisms and pollen were performed in air samples. To identify factors that modulate the pollen concentration, multivariate regression analyses were run. Results: Indoor pollen levels (95% Poaceae fragments and grains) were significantly higher in winter than in summer (P = 0.001) and ranged from 858 to 11 265 counts/m(3) during feeding in winter, thus exceeding typical outdoor levels during the pollen season. The geometric mean in French farms was significantly higher than in German and Swiss farms (7 534, 992 and 1 079 counts/m(3), respectively). The presence of a ventilation system and loose housing systems significantly reduced indoor pollen levels. This pollen concentration rise after feeding was accompanied by an increase in fungal and actinomycetal levels, whereas the concentration of bacteria was not associated with feeding. Conclusions: Farmers and their children who attend cowsheds during the feeding sessions are exposed perennially to high pollen concentrations. It might be speculated that the combined permanent exposure to microbes from livestock and grass pollen may initiate tolerance in children living on a farm. abstract_id: PUBMED:35387060 Personalized Pollen Monitoring and Symptom Scores: A Feasibility Study in Grass Pollen Allergic Patients. Background: Pollen is a major trigger for allergic symptoms in sensitized individuals. Airborne pollen is usually monitored by Hirst type pollen samplers located at rooftop level, providing a general overview of the pollen distribution in the larger surroundings. In this feasibility study, grass pollen-sensitized subjects monitored the pollen in their direct environment using a portable pollen sampler (Pollensniffer) and scored their symptoms, to study the relation between symptom severity and personal grass pollen exposure. For comparison, the symptoms were also correlated with pollen collected by the rooftop sampler. Methods: After recruitment, 18 participants were screened for grass pollen-specific IgE (GP-sIgE), of which 12 were eligible. Nine participants completed the study (May, 2018). They were asked to monitor personal pollen exposure using a Pollensniffer on their way to school, work or other destination, and to score their symptoms via a mobile app on a scale from 0 to 10. Daily pollen concentrations were collected by a Hirst type sampler at rooftop level. Pollen grains were analyzed using a microscope.
Results: Three of the four participants with high GP-sIgE (≥9.6 kU/l) reported high symptom scores (>4) and an analysis showed a significant correlation (CC) between eye, nose, and lung symptoms and the grass pollen counts collected by the Pollensniffer, as well as the daily grass pollen concentrations monitored by the rooftop sampler (CC≥0.54). In contrast, the participants with low GP-sIgE levels (<9.6 kU/l) reported low symptom scores (≤4) and often other sensitizations were present. For these subjects, no significant positive correlations (CC<0.3) of symptoms with either grass pollen collected by the personal or the rooftop sampler were found. Conclusion: The results of this feasibility study suggest that correlations between the severity of clinical symptoms of grass pollen allergic patients, and grass pollen counts as determined by the Pollensniffer or a rooftop sampler, are restricted to patients with high GP-sIgE levels, high symptom scores, and no relevant other sensitizations. Based on the low numbers of subjects with severe symptoms included in this feasibility study, no conclusions can be drawn on the performance of the Pollensniffer in relating symptoms and pollen exposure in comparison with the rooftop sampler. Trial Registration: The study was approved by the Committee Medical Ethics of the LUMC (approval numbers: NL63953.058.17/P17.304). abstract_id: PUBMED:29778490 The molecular allergology of subtropical grass pollen. Grass pollens are amongst the most important aeroallergen sources world-wide, triggering allergic rhinoconjunctivitis and asthma in sensitised patients. Much of what we know about the allergen components of grasses is informed by research on pollen of temperate (Pooideae) species that are abundant in the temperate climate zones. However, climate changes are altering the biogeographical distribution as well as timing and allergenicity of grass pollens. This provides an impetus for better understanding of the contribution of subtropical subfamilies of grasses to pollen allergy globally. Pollen of Chloridoideae (e.g. Cynodon dactylon; Bermuda grass) and Panicoideae (e.g. Paspalum notatum; Bahia grass or Sorghum halepense; Johnson grass) subfamilies are clinically important in subtropical zones of Australia, Asia, India, Africa, and America. These grasses differ ecologically and phylogenetically from temperate grasses and, importantly, their allergen composition is qualitatively different. For example, subtropical grass pollens appear to lack the major group 5 grass pollen allergen family. In this review we summarize current knowledge of the epidemiology and immunology of subtropical Chloridoideae and Panicoideae pollen allergens, describe the biochemical characteristics of known isoforms and variants as well as properties and structures of subtropical pollen allergen components. Whilst only one subtropical allergen component, Cyn d 1 of Bermuda grass pollen, is available commercially for diagnostic use in a natural purified form, a number of allergens of Panicoideae grass pollen; Zea m 1, Zea m 3 and Zea m 13 of maize, Pas n 1 and Pas n 13 of Bahia, as well as Sor h 1, Sor h 2, Sor h 13 and Sor h 23 of Johnson grass, have been discovered. Research effort is directed towards making available subtropical grass pollen allergen components as innovative treatment and diagnostic options that more specifically address the needs of patients from warmer regions of the globe.
abstract_id: PUBMED:35387005 Medium-Term Increases in Ambient Grass Pollen Between 1994-1999 and 2016-2020 in a Subtropical Climate Zone. Grass pollen is the major outdoor trigger of allergic respiratory diseases. Climate change is influencing pollen seasonality in Northern Hemisphere temperate regions, but many aspects of the effects on grass pollen remain unclear. Carbon dioxide and temperature rises could increase the distribution of subtropical grasses; however, medium-term shifts in grass pollen in subtropical climates have not yet been analysed. This study investigates changes in grass pollen aerobiology in the subtropical city of Brisbane, Australia, between the two available monitoring periods, 1994-1999 and 2016-2020. Potential drivers of pollen change were examined, including weather and satellite-derived vegetation indicators. The magnitude of the seasonal pollen index for grass showed almost a three-fold increase for 2016-2020 over 1994-1999. The number and proportion of high and extreme grass pollen days in the recent period increased compared to earlier monitoring. Statistically significant changes were also identified for distributions of CO2, satellite-derived seasonal vegetation health indices, and daily maximum temperatures, but not for minimum temperatures, daily rainfall, or seasonal fraction of green groundcover. Quarterly grass pollen levels were correlated with corresponding vegetation health indices, and with green groundcover fraction, suggesting that seasonal-scale plant health was higher in the latter period. The magnitude of grass pollen exposure in the subtropical region of Brisbane has increased markedly in the recent past, posing an increased environmental health threat. This study suggests the need for continuous pollen monitoring to track and respond to the possible effects of climate change on grass pollen loads. abstract_id: PUBMED:27317132 The grass pollen season 2014 in Vienna: A pilot study combining phenology, aerobiology and symptom data. Background: Grasses (Poaceae) are one of the largest plant families and are distributed worldwide. Grass pollen allergy is one of the most important pollen allergies affecting large parts of the population worldwide. The grass pollen season itself is special since it is caused by the flowering of various grass species that present unique profiles of allergenicity, which presumably has a significant impact on grass pollen sensitization profiles for the allergy sufferer. The aim of this study, conducted in Vienna, 2014, was to analyze the possible contribution of prevalent grass species to the grass pollen season and to the symptom load of grass allergy sufferers. Methods: This was the first study to combine phenological observations (i.e. grass species and their flowering periods) with aerobiological measurements (i.e. daily grass pollen concentrations) in concert with allergic symptoms from local users of the Patient's Hayfever Diary (symptom load index calculation). Results: Both the pollen concentrations and the symptom levels were higher in the first half of the main grass pollen season and therefore showed the highest impact on pollen allergy sufferers. Of important note were the following species that are widely distributed in Vienna: Kentucky bluegrass (Poa pratensis), orchard grass (Dactylis glomerata), false oat-grass (Arrhenatherum elatius), fescue grass (Festuca sp.) and perennial rye-grass (Lolium perenne).
Conclusion: Monitoring different grass species provided evidence for varying contribution in pollination across the main grass pollen season and highlighted the significance this impact may have on pollen allergy sufferers. abstract_id: PUBMED:33024481 Assessment of Google Trends terms reporting allergies and the grass pollen season in Ukraine. Background: Grass pollen allergy is an important trigger for the development of respiratory disorders. Defining the grass pollen season onset is critical for correct allergy diagnosis and personalized therapy. The development of a pan-European sentinel network for allergic diseases has raised the problem of translating the Google search terms into different European languages as well as defining specific pollen season characteristics in different regions. Grass pollen allergy was investigated due to high allergenicity and wide expansion of grass pollen in Europe. Objectives: The aim of this study was to examine which translations of "hay fever", "grass", and "rhinitis" could be used in the native Cyrillic languages, especially in Ukrainian and Russian, and to compare the seasonality of allergic respiratory queries in Ukraine with the grass pollen counts. Methods: Google Trends (GT) was used to search Google queries concerning grass pollen allergy: "allergy", "hay fever", "runny nose", "grass", "asthma", and "pollen". The Cyrillic terms in Ukrainian and Russian were used. The search was done for the period from 2013 to 2017. Pollen collection from 2013 to 2016 was conducted using volumetric methods. Average daily temperatures were obtained from http://gismeteo.ua. Correlations were assessed by Spearman (R) test. Results: The Ukrainian Google users searched the Cyrillic equivalents for "runny nose", "grass", and "asthma". Due to the GT queries profile, Ukraine had a "D" pattern according to the classification, developed by Bousquet J et al (2017). In Ukraine, the Poaceae pollen season generally occurred between the second ten-day period of May and the last ten-day period of July. The Poaceae pollen season started with a concentration of pollen grains of 8.0 m-3. This concentration provoked the growth of GT "grass", "allergy", "hay fever", and "asthma" queries. Conclusions: The terms "grass", "allergy", "hay fever", and "asthma" (in their Cyrillic equivalents) are required in Ukraine to account for the grass pollen exposure by GT. The study of GT may be a useful tool to make an assessment of the grass pollen season for the prevention and minimization of exposure to significant grass pollen concentrations. abstract_id: PUBMED:27069303 Regional and seasonal variation in airborne grass pollen levels between cities of Australia and New Zealand. Although grass pollen is widely regarded as the major outdoor aeroallergen source in Australia and New Zealand (NZ), no assemblage of airborne pollen data for the region has been previously compiled. Grass pollen count data collected at 14 urban sites in Australia and NZ over periods ranging from 1 to 17 years were acquired, assembled and compared, revealing considerable spatiotemporal variability. Although direct comparison between these data is problematic due to methodological differences between monitoring sites, the following patterns are apparent. Grass pollen seasons tended to have more than one peak from tropics to latitudes of 37°S and single peaks at sites south of this latitude. 
A longer grass pollen season was therefore found at sites below 37°S, driven by later seasonal end dates for grass growth and flowering. Daily pollen counts increased with latitude; subtropical regions had seasons of both high intensity and long duration. At higher latitude sites, the single springtime grass pollen peak is potentially due to a cooler growing season and a predominance of pollen from C3 grasses. The multiple peaks at lower latitude sites may be due to a warmer season and the predominance of pollen from C4 grasses. Prevalence and duration of seasonal allergies may reflect the differing pollen seasons across Australia and NZ. It must be emphasized that these findings are tentative due to limitations in the available data, reinforcing the need to implement standardized pollen-monitoring methods across Australasia. Furthermore, spatiotemporal differences in grass pollen counts indicate that local, current, standardized pollen monitoring would assist with the management of pollen allergen exposure for patients at risk of allergic rhinitis and asthma. abstract_id: PUBMED:26232158 Defining thresholds of specific IgE levels to grass pollen and birch pollen allergens improves clinical interpretation. Background: Cut-off values and predictive values are used for the clinical interpretation of specific IgE antibody results. However, cut-off levels are not well defined, and predictive values are dependent on the prevalence of disease. The objective of this study was to document clinically relevant diagnostic accuracy of specific IgE for inhalant allergens (grass pollen and birch pollen) based on test result interval-specific likelihood ratios. Likelihood ratios are independent of the prevalence and allow diagnostic accuracy information to be provided for test result intervals. Methods: In a prospective study we included consecutive adult patients presenting at an allergy clinic with complaints of rhinitis or rhinoconjunctivitis. The standard for diagnosis was a suggestive clinical history of grass or birch pollen allergy and a positive skin test. Specific IgE was determined with the ImmunoCAP Fluorescence Enzyme Immuno-Assay. Results: We established specific IgE test result interval-related likelihood ratios for clinical allergy to inhalant allergens (grass pollen, rPhl p 1,5, birch pollen, rBet v 1). The likelihood ratios for allergy increased with increasing specific IgE antibody levels. The likelihood ratio was <0.03 for specific IgE <0.1 kU/L, between 0.1 and 1.4 for specific IgE between 0.1 kU/L and 0.35 kU/L, between 1.4 and 4.2 for specific IgE between 0.35 kU/L and 3.5 kU/L, >6.3 for specific IgE >0.7, and very high (∞) for specific IgE >3.5 kU/L. Conclusion: Test result interval-specific likelihood ratios provide a useful tool for the interpretation of specific IgE test results for inhalant allergens. abstract_id: PUBMED:25237628 Molecular biomarkers for grass pollen immunotherapy. Grass pollen allergy represents a significant cause of allergic morbidity worldwide. Component-resolved diagnosis biomarkers are increasingly used in allergy practice in order to evaluate the sensitization to grass pollen allergens, allowing the clinician to confirm genuine sensitization to the corresponding allergen plant sources and supporting an accurate prescription of allergy immunotherapy (AIT), an important approach in many regions of the world with great plant biodiversity and/or where pollen seasons may overlap.
The search for candidate predictive biomarkers for grass pollen immunotherapy (tolerogenic dendritic cells and regulatory T cells biomarkers, serum blocking antibodies biomarkers, especially functional ones, immune activation and immune tolerance soluble biomarkers and apoptosis biomarkers) opens new opportunities for the early detection of clinical responders for AIT, for the follow-up of these patients and for the development of new allergy vaccines. abstract_id: PUBMED:17877763 Do levels of airborne grass pollen influence asthma hospital admissions? Background: The effects of environmental factors and ambient concentrations of grass pollen on allergic asthma are yet to be established. Objective: We sought to estimate the independent effects of grass pollen concentrations in the air over Melbourne on asthma hospital admissions for the 1992-1993 pollen season. Methods: Daily grass pollen concentrations were monitored over a 24-h period at three stations in Melbourne. The outcome variable was defined as all-age asthma hospital admissions with ICD9-493 codes. The ambient air pollutants were average daily measures of ozone, nitrogen dioxide and sulphur dioxide, and the airborne particle index representing fine particulate pollution. Semi-parametric Poisson regression models were used to estimate these effects, adjusted for air temperature, humidity, wind speed, rainfall, day-of-the-week effects and seasonal variation. Results: Grass pollen was a strong independent non-linear predictor of asthma hospital admissions in a multi-pollutant model (P=0.01). Our data suggest that grass pollen had an increasing effect on asthma hospital admissions up to a threshold of 30 grains/m3, and that the effect remains stable thereafter. Conclusion: Our findings suggest that grass pollen levels influence asthma hospital admissions. High grass pollen days, currently defined as more than 50 grains/m3, are days when most sensitive individuals will experience allergic symptoms. However, some asthmatic patients may be at a significant risk even when airborne grass pollen levels are below this level. Patients with pollen allergies and asthma would be advised to take additional preventive medication at lower ambient concentrations. Answer: The study presented in abstract PUBMED:19220219 suggests that high levels of grass pollen are found inside European dairy farms, particularly during winter feeding sessions. The indoor pollen levels, primarily consisting of Poaceae fragments and grains, were found to be significantly higher in winter than in summer, with concentrations exceeding typical outdoor levels during the pollen season. The presence of a ventilation system and loose housing systems were associated with reduced indoor pollen levels. The increase in pollen concentration after feeding was also accompanied by an increase in fungal and actinomycetal levels, although bacteria concentration was not associated with feeding. The study speculates that the combined permanent exposure to microbes from livestock and grass pollen may initiate tolerance in children living on a farm, suggesting a potential role for the allergy-protective effects of the environment. This hypothesis aligns with the "hygiene hypothesis," which posits that early-life exposure to a variety of microorganisms, such as those found on farms, may stimulate the immune system in a way that reduces the risk of developing allergic diseases. 
The findings from this study imply that not only microbial exposure but also high levels of grass pollen exposure in farm environments could contribute to the development of immune tolerance, potentially offering protection against allergies. However, the study does not provide direct evidence of an allergy-protective effect but rather opens the possibility for further research into the role of environmental factors, including grass pollen, in the development of immune tolerance and allergy protection.
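The interval-specific likelihood ratios for grass and birch pollen sIgE reported in PUBMED:26232158 are applied through the odds form of Bayes' rule (post-test odds = pre-test odds x LR). The sketch below shows that conversion; the pre-test probability is an arbitrary assumption, and the LR values are representative points from the intervals quoted in the abstract.

```python
# Converting an interval-specific likelihood ratio (LR) into a post-test
# probability for a given pre-test probability of pollen allergy.
def post_test_probability(pre_test_prob: float, likelihood_ratio: float) -> float:
    pre_odds = pre_test_prob / (1.0 - pre_test_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

pre_test = 0.30  # assumed clinical suspicion before testing (illustrative only)

interval_lrs = {          # representative LRs from the reported intervals (kU/L)
    "sIgE < 0.1": 0.03,
    "sIgE 0.1-0.35": 1.4,
    "sIgE 0.35-3.5": 4.2,
    "sIgE > 0.7": 6.3,
}

for interval, lr in interval_lrs.items():
    prob = post_test_probability(pre_test, lr)
    print(f"{interval} kU/L (LR {lr}): post-test probability ~ {prob:.0%}")
```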
Instruction: Do intraoperative analgesics influence breast cancer recurrence after mastectomy? Abstracts: abstract_id: PUBMED:20435950 Do intraoperative analgesics influence breast cancer recurrence after mastectomy? A retrospective analysis. Background: Whether intraoperative analgesics have an impact on postoperative cancer recurrence is unknown. Some investigations suggest that the opioids could favor relapse and that regional analgesia and nonsteroidal antiinflammatory drugs could improve cancer prognosis. We retrospectively reviewed our series of breast cancer surgery patients. Methods: This retrospective study included 327 consecutive women who underwent mastectomy with axillary dissection for breast cancer. The main objective was to compare the incidence of cancer recurrence among patients who received different analgesics during surgery. Results: Perioperative characteristics, cancer prognostic factors, and the length of surgery were comparable regardless of the analgesics administered. Univariate and multivariate analyses showed a lower cancer recurrence rate when ketorolac was given before surgery (P = 0.019). Other analgesics (sufentanil, ketamine, and clonidine) were not associated with a significant reduction in cancer recurrence rates in our series. Conclusion: This retrospective analysis suggests that intraoperative administration of ketorolac decreases the risk of breast cancer relapse compared with other analgesícs. abstract_id: PUBMED:36697057 Breast Conserving Surgery in Combination With Targeted Intraoperative Radiotherapy Compared to Mastectomy for In-breast-tumor-recurrence. Background/aim: Mastectomy is the standard treatment of in-breast-recurrence of breast cancer after breast conserving surgery (BCS) and external beam radiation therapy (EBRT). In selected cases, it is possible to preserve the breast if targeted intraoperative radiotherapy (TARGIT-IORT) can be given during the second lumpectomy. This is a comparative analysis of overall survival and quality of life (QoL). Patients And Methods: Patients in our database with in-breast-recurrence and either mastectomy or BCS and TARGIT-IORT were included. Identified patients were offered participation in a prospective QoL-analysis using the BREAST-Q questionnaire. The cohorts were compared for confounding parameters, overall survival, and QoL. Results: Thirty-six patients treated for in-breast-recurrence were included, 21 had received a mastectomy and 16 patients had received BCS with TARGIT-IORT. Mean follow-up was 12.8 years since primary diagnosis and 4.2 years since recurrence. Both groups were balanced regarding prognostic parameters. Overall survival was numerically longer for BCS and TARGIT-IORT, but the numbers were too small for formal statistical analysis. No patient had further in-breast-recurrence. Psychosocial and sexual wellbeing did not differ between both groups. Physical wellbeing was significantly superior for those whose breast could be preserved (p-value=0.021). Patient-reported incidence and severity of lymphedema of the arm was significantly worse in the mastectomy group (p=0.007). Conclusion: Preserving the breast by use of TARGIT-IORT was safe with no re-recurrence and no detriment to overall survival in our analysis and led to a statistically significant improvement in physical wellbeing and incidence of lymphedema. These data should increase the confidence in offering breast preservation after in-breast-recurrence of breast cancer. 
abstract_id: PUBMED:3188790 Effect of perioperative blood transfusion on recurrence and death after mastectomy for breast cancer. A recent report suggested that perioperative blood transfusion doubles the recurrence rate of breast cancer after mastectomy. In the present retrospective study the effect of transfusion on cancer recurrence and death after mastectomy was investigated in 96 women, 27 with and 69 without blood transfusion. The overall survival rates, the breast-cancer-specific survival and the recurrence-free survival rates were similar in the two groups. The study thus does not support the hypothesis that perioperative blood transfusion promotes recurrence of breast cancer. abstract_id: PUBMED:34955166 Mastectomy options for the treatment of ipsilateral breast cancer recurrence after lumpectomy. Background: Mastectomy is the recommended treatment for ipsilateral breast tumor recurrence (IBTR) in patients who initially received breast conservation surgery (BCS) and radiation. We evaluated the 3 types of mastectomies performed for the surgical treatment of IBTR (nipple sparing (NSM), skin sparing (SSM) and total mastectomy (TM)) to assess surgical complications and local control outcomes among groups. Methods: Patients who developed IBTR after BCS and received mastectomy from 2011 to 2019 were reviewed. Patient characteristics and treatment were analyzed. The incidence of postoperative complications and second breast cancer recurrence were compared. Results: Mastectomy was performed in 113 patients presenting with isolated IBTR (17 NSM, 48 SSM and 48 TM). There was no difference in post-operative complications between groups. At 3-year follow-up, 5 (4%) patients had a second recurrence. Tumor size at IBTR was the only predictor for second recurrence and not mastectomy type or receipt of reconstruction. Conclusion: In patients initially treated with BCS who experienced an IBTR, NSM or SSM mastectomy with immediate reconstruction had low complication rates and no increase in local recurrence compared to the TM group. abstract_id: PUBMED:19655603 Recurrence after mastectomy for ductal carcinoma in situ. Mastectomy has long been a standard option for patients with ductal carcinoma in situ (DCIS). It is preferentially chosen by some women and may be suggested for individuals with recurrent, multifocal, or multicentric disease. We chose to evaluate our recent experience with mastectomy for DCIS. A retrospective review was conducted of 83 patients (87 breasts) from 1995 to 2006 who underwent mastectomy for DCIS. Mastectomy for DCIS was performed in 49 postmenopausal, 33 premenopausal, and one male patient. The average age was 53 years and the mean follow up was 4.5 years. Sentinel lymph node (SLN) biopsy was performed on 44 cases; positive nodes were identified in two. Intraoperative analyses of SLN were all negative. Only one patient had ipsilateral recurrence of the skin (1.1%). DCIS with microinvasion was noted in 32 per cent of the patients; none of these patients had ipsilateral recurrence. Three patients had positive microscopic margins; none have recurred to date. These results confirm the usefulness of mastectomy for patients with DCIS. We recommend SLN biopsy without intraoperative touch prep analysis. Additional treatment may not be required in patients with microinvasion, positive or close margins because our series shows no local recurrence in these patients. 
abstract_id: PUBMED:1526822 Surgical treatment of local recurrence after mastectomy This paper deals only with local recurrence of a mammary carcinoma, not with all locoregional recurrences. The surgical treatment of a true local recurrence depends on the freedom from metastases, on the type of the primary operation, whereas the localisation and the size of the recurrence are of minor importance. If no metastases can be found, the operative treatment of the recurrence is the method of choice leading to cure of part of the patient. Simple excision of the tumor is insufficient; the second operation must be radical. When a segmentectomy or a lumpectomy was performed as primary operation, the recurrence must be treated with a modified mastectomy. The status after modified mastectomy with or without primary reconstruction must be converted into a radical mastectomy. A local recurrence after radical mastectomy must be removed by chest wall resection. abstract_id: PUBMED:29335922 Breast conserving surgery in combination with intraoperative radiotherapy after previous external beam therapy: an option to avoid mastectomy. Purpose: Mastectomy is the standard procedure in patients with in-breast tumor recurrence (IBTR) or breast cancer after irradiation of the chest due to Hodgkin's disease. In certain cases a second breast conserving surgery (BCS) in combination with intraoperative radiotherapy (IORT) is possible. To date, data concerning BCS in combination with IORT in pre-irradiated patients are limited. This is the first pooled analysis of this special indication with a mature follow-up of 5 years. Methods: Patients with IBTR after external beam radiotherapy (EBRT; treated in two centers) for breast cancer were included. Patients with previous EBRT including the breast tissue due to other diseases were also included. IORT was performed with the Intrabeam™-device using low kV X-rays. Clinical data including outcome for all patients and toxicity for a representative cohort (LENT-SOMA scales) were obtained. Statistical analyses were done including Kaplan-Meier estimates for local recurrence, distant metastasis and overall survival. Results: A total of 41 patients were identified (39 patients with IBTR, 2 with Hodgkin`s disease in previous medical history). Median follow-up was 58 months (range 4-170). No grade 3/4 acute toxicity occurred within 9 weeks. Local recurrence-free survival rate was 89.9% and overall survival was 82.7% at 5 years. Seven patients developed metastasis within the whole follow-up. Conclusions: BCS in combination with IORT in IBTR in pre-irradiated patients is a feasible method to avoid mastectomy with a low risk of side effects and an excellent local control and good overall survival. abstract_id: PUBMED:30888580 Oncologic outcomes and radiation safety of nipple-sparing mastectomy with intraoperative radiotherapy for breast cancer. Background: Nipple-sparing mastectomy combined with breast reconstruction helps to optimize the contour of the breast after mastectomy. However, the indications for nipple-sparing mastectomy are still controversial. Local radiation to the nipple-areola complex may play some roles in improving the oncological safety of this procedure. Methods: From January 2014 to December 2017, 41 consecutive patients who underwent nipple-sparing mastectomy combined with Intrabeam intraoperative radiotherapy to the nipple-areola complex flap and breast reconstruction were enrolled in this prospective study. 
The prescribed radiation dose at the surface of the spherical applicator was 16 Gy. Results: In eight cases, carcinomas were in the central portion of the breast. Partial necrosis of the nipple-areola complex occurred in three cases. Over 90% of patients reported "no or poor sensation" of the nipple-areola complex postoperatively. With a median follow-up time of 26 months, no recurrences or metastases were identified; however, breast-cancer mortality occurred in one patient. Pathologic evaluation of paraffin-embedded sections showed ductal carcinoma in situ in the remaining tissues deep to the nipple-areola complex flap in two patients. Although no further treatment was administered to the nipple-areola complexes postoperatively, no recurrences or metastases were identified 20 months and 24 months later, respectively. Optical microscopy and transmission electron microscopy revealed changes in some normal tissues immediately after Intrabeam intraoperative radiotherapy. Karyopyknosis was observed in gland tissues, and the collagenous fibers became sparse and arranged chaotically. As assessed by thermoluminescence, radiation doses at different sites in the nipple-areola complex flap varied considerably and were about 10 Gy at the areola surface. No Intrabeam intraoperative radiotherapy-related acute or chronic radiation injuries of the lung, heart or bone marrow were identified. Conclusions: Our findings indicate that Intrabeam intraoperative radiotherapy during nipple-sparing mastectomy combined with breast reconstruction is safe and feasible. Trial Registration: The current study was approved by the Ethics Committee of the First Affiliated Hospital of Guangzhou Medical University (registering order 201750). All participants gave their written informed consent. abstract_id: PUBMED:27055945 Local recurrence risk after previous salvage mastectomy. Introduction: Breast-conserving surgery is a standard treatment for early breast cancer. For ipsilateral breast tumor recurrence (IBTR) after breast-conserving surgery, salvage mastectomy is the current standard surgical procedure. However, it is not rare for patients with IBTR who have received salvage mastectomy to develop local recurrence. In this study, we examined the risk factors of local recurrence after salvage mastectomy for IBTR. Patients And Methods: A total of 118 consecutive patients who had histologically confirmed IBTR without distant metastases and underwent salvage mastectomy without irradiation for IBTR between 1989 and 2008 were included from eight institutions in Japan. The risk factors of local recurrence were assessed. Results: The median follow-up period from salvage mastectomy for IBTR was 4.6 years. Patients with pN2 or higher on diagnosis of the primary tumor showed significantly poorer local recurrence-free survival than those with pN0 or pN1 at primary tumor (p < 0.001). Multivariate analysis showed that the lymph node status of the primary tumor was a significantly independent predictive factor of local recurrence-free survival (p = 0.02). Conclusion: The lymph node status of the primary tumor might be a predictive factor of local recurrence-free survival after salvage mastectomy for IBTR. Further research and validation studies are needed. (UMIN-CTR number UMIN000008136). abstract_id: PUBMED:31538696 Effect of intraoperative radiotherapy for early breast cancer on 10-year recurrence rates and overall survival.
The aim of this study was to evaluate 10-year local control and overall survival of IORT for early breast cancer treatment. We analyzed 68 patients submitted to breast conservative surgery and IORT, in the accelerator room of the Radiotherapy Service in South Brazil. In the long-term follow-up, we had 17.6% of patients with ipsilateral breast cancer recurrence, 2.9% with regional recurrence, 2.9% with contralateral breast recurrence, and 5.9% with distant metastasis. The 10-year overall survival was 82.8%. Our data show high local recurrence rates, however, good overall survival in early breast cancer patients treated with breast-conserving surgery and intraoperative radiotherapy with electron beams in the long-term follow-up. Answer: The retrospective analysis conducted in the study with PUBMED:20435950 suggests that intraoperative administration of ketorolac, a nonsteroidal anti-inflammatory drug, decreases the risk of breast cancer relapse compared with other analgesics. This study included 327 consecutive women who underwent mastectomy with axillary dissection for breast cancer, and the results indicated a lower cancer recurrence rate when ketorolac was given before surgery. Other analgesics such as sufentanil, ketamine, and clonidine were not associated with a significant reduction in cancer recurrence rates in this series. Therefore, based on this study, it appears that the type of intraoperative analgesic used can influence breast cancer recurrence after mastectomy, with ketorolac showing a potential benefit in reducing relapse rates.
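To make the kind of univariate comparison described in the retrospective ketorolac analysis above more concrete, the short sketch below contrasts recurrence proportions between two intraoperative-analgesic groups with a chi-square test. The group sizes and recurrence counts are invented for illustration only and are not taken from any of the cited studies.

```python
# Hypothetical recurrence counts for two analgesic groups (not study data).
import numpy as np
from scipy.stats import chi2_contingency

# rows: ketorolac given / not given; columns: recurrence / no recurrence
table = np.array([
    [10, 120],
    [30, 167],
])

chi2, p_value, dof, expected = chi2_contingency(table)

recurrence_rate = table[:, 0] / table.sum(axis=1)
print(f"Recurrence rate, ketorolac group:    {recurrence_rate[0]:.1%}")
print(f"Recurrence rate, no-ketorolac group: {recurrence_rate[1]:.1%}")
print(f"Chi-square p-value: {p_value:.3f}")
```

A real analysis, like the one summarized in the abstract, would additionally adjust for prognostic factors in a multivariate model rather than rely on a single 2x2 comparison.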
Instruction: Does supplemental vitamin C increase cardiovascular disease risk in women with diabetes? Abstracts: abstract_id: PUBMED:15531665 Does supplemental vitamin C increase cardiovascular disease risk in women with diabetes? Background: Vitamin C acts as a potent antioxidant; however, it can also be a prooxidant and glycate protein under certain circumstances in vitro. These observations led us to hypothesize that a high intake of vitamin C in diabetic persons might promote atherosclerosis. Objective: The objective was to examine the relation between vitamin C intake and mortality from cardiovascular disease. Design: We studied the relation between vitamin C intake and mortality from total cardiovascular disease (n = 281), coronary artery disease (n = 175), and stroke (n = 57) in 1923 postmenopausal women who reported being diabetic at baseline. Diet was assessed with a food-frequency questionnaire at baseline, and subjects initially free of coronary artery disease were prospectively followed for 15 y. Results: After adjustment for cardiovascular disease risk factors, type of diabetes medication used, duration of diabetes, and intakes of folate, vitamin E, and beta-carotene, the adjusted relative risks of total cardiovascular disease mortality were 1.0, 0.97, 1.11, 1.47, and 1.84 (P for trend < 0.01) across quintiles of total vitamin C intake from food and supplements. Adjusted relative risks of coronary artery disease were 1.0, 0.81, 0.99, 1.26, and 1.91 (P for trend = 0.01) and of stroke were 1.0, 0.52, 1.23, 2.22, and 2.57 (P for trend < 0.01). When dietary and supplemental vitamin C were analyzed separately, only supplemental vitamin C showed a positive association with mortality endpoints. Vitamin C intake was unrelated to mortality from cardiovascular disease in the nondiabetic subjects at baseline. Conclusion: A high vitamin C intake from supplements is associated with an increased risk of cardiovascular disease mortality in postmenopausal women with diabetes. abstract_id: PUBMED:24910823 Calcium and vitamin D in post menopausal women. Calcium and Vitamin D are widely used therapies for Osteoporosis. Vitamin D is not a vitamin in true sense since it is produced in response to the action of sunlight on skin. Vitamin D has multiple roles in the body, not all of them well-understood. Vitamin D supplementation must be considered a form of hormone replacement therapy. Therefore it raises all the questions about efficacy, dose, and side effects. The Efficacy of use of Calcium and Vitamin D in all post menopausal women in terms of the prevention of fracture is uncertain. The Annual worldwide sales of these supplements have been several billion dollars. The variation of the results from various studies of Calcium and Vitamin D supplementation in elderly women suggest that benefit of calcium plus vitamin D on bone mineral density or the risk of fracture is small and may vary from group to group and baseline Vitamin D status. Women taking supplemental vitamin D and calcium have a statistically increased incidence of renal stones, according to evidence from the Women's Health Initiative. Studies have shown association between calcium use and increased risk for cardiovascular disease.
In a recent review of evidence from 6 randomized trials evaluating the use of vitamin D and calcium to prevent fractures in postmenopausal women who are not living in a nursing home or other institution, the United States Preventive Task Force (USPTF) found no evidence of a benefit from supplementation with 400 IU or less of vitamin D3 and 1000 mg or less of calcium. Also in a report from institute of Medicine Committee, there was insufficient evidence, particularly from randomized trials, that vitamin D treatment affected the risk of non skeletal outcomes like risk of cancer, cardiovascular disease, diabetes, infections, autoimmune disease, and other extra skeletal outcomes. abstract_id: PUBMED:38138286 Serum Vitamin Levels, Cardiovascular Disease Risk Factors, and Their Association with Depression in Korean Women: A Cross-Sectional Study of a Nationally Representative Sample. Background and Objectives: Serum vitamin levels, cardiovascular disease risk factors, and their association with depression is a complex issue that has been the subject of much research. Therefore, we investigated the relationship between vitamin A, B9, and E levels, cardiovascular risk factors, and depression in premenopausal and menopausal South Korean women. Materials and Methods: This cross-sectional study used the 2016-2018 Korea National Health and Nutrition Examination Survey data. Depression was assessed using a questionnaire to check for symptoms of depression or the Patient Health Questionnaire-9. Blood samples were collected from the antecubital vein in the morning after an overnight fast. Covariates were defined as self-reported physician diagnoses. Well-trained medical staff performed the standard procedures. Statistical analysis was performed using the complex sample analysis method of SPSS, using two separate logistic regression models (model 1: adjusted for age; model 2: adjusted for age, marital status, smoking, and alcohol consumption). Results: A total of 3313 women aged over 20 years were enrolled. The association between vitamin A levels and depression was as follows: lower levels of vitamin A were associated with an increased risk of depression in premenopausal women in model 1 and model 2. The levels of serum vitamins E and B9 were not correlated with depression in premenopausal and postmenopausal women. In the premenopausal group, depression increased in the obesity (model 1: p = 0.037; model 2: p = 0.047) and diabetes mellitus (model 1: p = 0.010; model 2: p = 0.009) groups. The menopausal group with depression had higher rates of stroke (model 1: p = 0.017; model 2: p = 0.039) and myocardial infarction (model 1: p = 0.004; model 2: p = 0.008) than the group without depression. Conclusions: Depression is correlated with lower blood levels of vitamin A in premenopausal women. Vitamin B9 and E levels were not associated with depression independent of menopausal status. Depression is associated with obesity and diabetes mellitus in premenopausal women and with stroke and myocardial infarction in postmenopausal women. abstract_id: PUBMED:17003353 Vitamin E and risk of type 2 diabetes in the women's health study randomized controlled trial. We directly assessed the efficacy of vitamin E supplements for primary prevention of type 2 diabetes among apparently healthy women in the Women's Health Study randomized trial. Between 1992 and 2004, 38,716 apparently healthy U.S. 
women aged ≥45 years and free of diabetes, cancer, and cardiovascular disease were in two randomly assigned intervention groups and received 600 IU of vitamin E (alpha-tocopherol, n = 19,347) or placebo (n = 19,369) on alternate days. During a median 10-year follow-up, there were 827 cases of incident type 2 diabetes in the vitamin E group and 869 in the placebo group, a nonsignificant 5% risk reduction (relative risk [RR] 0.95 [95% CI 0.87-1.05], P = 0.31). There was no evidence that diabetes risk factors including age, BMI, postmenopausal hormone use, multivitamin use, physical activity, alcohol intake, and smoking status modified the effect of vitamin E on the risk of type 2 diabetes. In a sensitivity analysis taking compliance into account, women in the vitamin E group had an RR of 0.93 (95% CI 0.83-1.04) (P = 0.21) compared with those randomized to placebo. In this large trial with 10-year follow-up, alternate-day doses of 600 IU vitamin E provided no significant benefit for type 2 diabetes in initially healthy women. abstract_id: PUBMED:21072911 Vitamin D--an old vitamin in a new perspective Vitamin D is a secosteroid which, in its active form 1,25-(OH)2-Vitamin D3, has hormone activities. Most cells and tissues in the human body have vitamin D receptors that stimulate the nuclear transcription of various genes to alter cellular function. Vitamin D appears to have an effect on numerous disease states and disorders, including osteoporosis, chronic musculoskeletal pain, diabetes (types 1 and 2), multiple sclerosis, cardiovascular disease, and cancers of the breast, prostate, and colon. According to many researchers there is currently a worldwide vitamin D deficiency in various populations, including infants, pregnant and lactating women, and the elderly. The prevalence of vitamin D insufficiency in the general German population is high. Vitamin D in the food supply is limited and most often inadequate to prevent deficiencies. Supplemental vitamin D is likely necessary to avoid deficiency, especially in winter months. The estimated cost saving effect of improving vitamin D status in Germany might be up to 37.5 billion euros annually. abstract_id: PUBMED:16306556 Dietary calcium, vitamin D, and the prevalence of metabolic syndrome in middle-aged and older U.S. women. Objective: To examine whether and to what extent intakes of calcium and vitamin D are related to the metabolic syndrome in middle-aged or older women. Research Design And Methods: We analyzed data from 10,066 women aged ≥45 years participating in the Women's Health Study who were free of cardiovascular disease, cancer, or diabetes and who never used postmenopausal hormones. We used multiple logistic regression models to estimate multivariable odds ratios (ORs) and 95% CIs comparing different dietary intake levels of calcium and vitamin D. Results: In age- and calorie-adjusted analyses, higher intakes of total, dietary, and supplemental calcium were significantly and inversely associated with the prevalence of metabolic syndrome. After further adjusting for smoking status, exercise, alcohol intake, multivitamin use, and parental history of myocardial infarction before age 60 years, the ORs of having the metabolic syndrome for increasing quintiles of total calcium intake were 1.00 (reference), 0.82 (95% CI 0.70-0.97), 0.84 (0.71-0.99), 0.70 (0.59-0.83), and 0.64 (0.54-0.77) (P for trend <0.0001).
This association was not appreciably altered by additional adjustment for other dietary factors or total vitamin D intake. In contrast, neither total (P for trend = 0.13) nor supplemental (P for trend = 0.45) vitamin D was significantly associated with metabolic syndrome. Dietary vitamin D was inversely associated with prevalence of metabolic syndrome but was not independent of total calcium intake. Similar strong relations between intakes of dairy products and metabolic syndrome were also observed. After adjustment for lifestyle and dietary factors, the multivariable ORs comparing highest with lowest intake categories were 0.66 (0.55-0.80) (P for trend <0.0001) for total dairy products and 0.85 (0.71-1.02) (P for trend = 0.05) for total milk intake. Conclusions: Our results indicate that intakes of calcium and dairy products may be associated with lower prevalence of the metabolic syndrome in middle-aged and older women. abstract_id: PUBMED:9830485 Serum lipids and vitamin A, C and E concentrations in an adult population of Caracas City Lipid profiles as well as vitamins A, C and E were determined in a sample of 90 men and 151 women aged between 35 and 50 years. Comparing the lipid profiles obtained in the study with the limits established by the National Cholesterol Education Program showed that a little more than 60% of the group had total cholesterol and LDL cholesterol levels in the desirable range, 20 to 30% had levels in the marginal range while 10 to 15% had levels in the high risk range. This distribution of the risk is more favorable than that observed in populations with a high risk of heart diseases such as the British or American populations, which show a substantially higher segment of the people in the high risk level. When the protective effect of the HDL cholesterol was included in the estimation of the risk by calculating the indexes: Total cholesterol/HDL chol., LDL Chol/HDL chol. or Total Chol-HDL chol., 65 to 80% of the population had values within the normal range and the first of these indexes indicated that the men had a higher risk than the women. Integrating both methods of estimating the risk and considering that the risk of the individuals in the marginal range defined by the National Cholesterol Education Program is minimal unless they have two additional risk factors, it appears that an important segment (20-30%) of the studied population may benefit from programs aiming to reduce other risk factors such as smoking, high blood pressure, diabetes or overweight. The vitamin levels measured in this study indicated that the vast majority of the population had their levels in the safe range but an important segment had vitamin C serum levels indicative of poor consumption of this vitamin. Since vitamin C is high in fruits and vegetables we concluded that the studied population had a low consumption of these foods. Due to the existing evidence of a protective effect of fruits and vegetables in heart and other chronic diseases it was concluded that institutions such as the one studied here should engage in preventive campaigns emphasizing a reduction of both risk factors and the consumption of saturated fats. The results of this study also indicate that the consumption of fruits and vegetables should be encouraged. abstract_id: PUBMED:10383366 Relation of consumption of vitamin E, vitamin C, and carotenoids to risk for stroke among men in the United States.
Background: Antioxidants increase the resistance of low-density lipoprotein to oxidation and may thereby reduce risk for atherosclerosis. Objective: To determine whether intake of vitamin E, vitamin C, or carotenoids predict risk for total or ischemic stroke. Design: Prospective observational study. Setting: The Health Professionals Follow-up Study. Participants: 43,738 men 40 to 75 years of age who did not have cardiovascular disease or diabetes. Measurements: Repeated and validated dietary assessments were done by using a self-administered 131-item food-frequency questionnaire, which included questions on dose and duration of vitamin supplement use. The follow-up period was 8 years. Results: A total of 328 strokes occurred: 210 ischemic, 70 hemorrhagic, and 48 unclassified. After adjustment for age, smoking, hypertension, hypercholesterolemia, body mass index, physical activity, parental history of myocardial infarction, alcohol consumption, and total energy intake, the relative risk for ischemic stroke in the top quintile of vitamin E intake (median, 411 IU/d) compared with the bottom quintile (5.4 IU/d) was 1.18 (95% CI, 0.77 to 1.82). The relative risk for ischemic stroke in the top quintile of vitamin C intake (1167 mg/d) compared with the bottom quintile (95 mg/d) was 1.03 (CI, 0.66 to 1.59). Results for total stroke were similar. Associations of vitamin intake with hemorrhagic stroke were also nonsignificant, but the CIs were wide. Neither dose nor duration of vitamin E or vitamin C supplement use was related to risk for total or ischemic stroke. The relative risk for ischemic stroke was 1.16 (CI, 0.81 to 1.67) in men using 250 IU or more of vitamin E supplementation per day compared with men who used no vitamin E supplements and was 0.93 (CI, 0.60 to 1.45) in men using 700 mg or more of vitamin C supplementation per day compared with men who used no vitamin C supplements. A significant inverse relation between lutein intake and risk for ischemic stroke was seen but was not independent of other dietary factors. Conclusions: Vitamin E and vitamin C supplements and specific carotenoids did not seem to substantially reduce risk for stroke in this cohort. Modest effects, however, cannot be excluded. abstract_id: PUBMED:19491386 Effects of vitamins C and E and beta-carotene on the risk of type 2 diabetes in women at high risk of cardiovascular disease: a randomized controlled trial. Background: Vitamin C, vitamin E, and beta-carotene are major antioxidants and as such may protect against the development of type 2 diabetes via reduction of oxidative stress. Objective: The purpose of this study was to investigate the long-term effects of supplementation with vitamin C, vitamin E, and beta-carotene for primary prevention of type 2 diabetes. Design: In the Women's Antioxidant Cardiovascular Study, a randomized trial that occurred between 1995 and 2005, 8171 female health professionals aged ≥40 y with either a history of cardiovascular disease (CVD) or ≥3 CVD risk factors were randomly assigned to receive vitamin C (ascorbic acid, 500 mg every day), vitamin E (RRR-alpha-tocopherol acetate, 600 IU every other day), beta-carotene (50 mg every other day), or their respective placebos. Results: During a median follow-up of 9.2 y, a total of 895 incident cases occurred among 6574 women who were free of diabetes at baseline.
There was a trend toward a modest reduction in diabetes risk in women assigned to receive vitamin C compared with those assigned to receive placebo [relative risk (RR): 0.89; 95% CI: 0.78, 1.02; P = 0.09], whereas a trend for a slight elevation in diabetes risk was observed for vitamin E treatment (RR: 1.13; 95% CI: 0.99, 1.29; P = 0.07). However, neither of these effects reached statistical significance. No significant effect was observed for beta-carotene treatment (RR: 0.97; 95% CI: 0.85, 1.11; P = 0.68). Conclusion: Our randomized trial data showed no significant overall effects of vitamin C, vitamin E, and beta-carotene on risk of developing type 2 diabetes in women at high risk of CVD. This trial was registered at clinicaltrials.gov as NCT00000541. abstract_id: PUBMED:37195485 Associations of serum vitamin C concentrations with risk of all-cause and cause-specific mortality among individuals with and without type 2 diabetes. Purpose: Compared with people without diabetes, people with type 2 diabetes (T2D) are at higher risk of both subnormal vitamin C status and increased oxidative stress. We aimed to investigate the associations of serum vitamin C concentrations with all-cause and cause-specific mortality among adults with and without T2D. Methods: The current analysis included 20,045 adults (2691 people with T2D and 17,354 without T2D) from the Third National Health and Nutrition Examination Survey (NHANES III) and NHANES 2003-2006. Cox proportional hazards regression models were applied to estimate hazard ratios (HRs) and 95% confidence intervals (CIs). Restricted cubic spline analyses were used to examine the dose-response relationship. Results: After a median follow-up of 17.3 years, 5211 deaths were documented. Individuals with T2D had a lower level of serum vitamin C concentrations compared with those without T2D (the median value: 40.1 vs. 44.9 μmol/L). Furthermore, the dose-response relationship between serum vitamin C and mortality showed different patterns between participants with and without T2D. In individuals without T2D, there was a nonlinear association of serum vitamin C concentrations with all-cause, cancer, and CVD mortality, with the lowest risk around a serum vitamin C concentration of 48.0 μmol/L (all P-overall < 0.05, P-nonlinearity < 0.05). In contrast, among those with T2D in the similar concentration range, higher serum vitamin C levels (ranged from 0.46 to 116.26 μmol/L) were linearly associated with lower all-cause and cancer mortality (both P-overall < 0.05, P-nonlinearity > 0.05). Significant additive interaction was observed between diabetes status and serum vitamin C levels with regard to all-cause and cancer mortality (P < 0.001). In addition, C-reactive protein, gamma-glutamyl transpeptidase, and HbA1c explained 14.08, 8.96, and 5.60% of the association between serum vitamin C and all-cause mortality among individuals with T2D, respectively. Conclusions: Higher serum vitamin C concentrations were significantly associated with lower risk of mortality in participants with T2D in a linear dose-response manner, while a nonlinear association was observed in participants without T2D, with an apparent threshold around 48.0 μmol/L. These findings suggest that the optimal vitamin C requirement may differ in individuals with and without T2D. Answer: Yes, supplemental vitamin C does appear to increase cardiovascular disease risk in women with diabetes.
A study examining the relation between vitamin C intake and mortality from cardiovascular disease in 1923 postmenopausal women with diabetes found that a high intake of vitamin C from supplements was associated with an increased risk of cardiovascular disease mortality. After adjusting for various factors, the relative risks of total cardiovascular disease mortality increased across quintiles of total vitamin C intake from food and supplements, with the highest quintile showing a significant trend towards increased risk. This association was not observed in nondiabetic subjects at baseline, suggesting that the effect might be specific to women with diabetes (PUBMED:15531665).
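As a rough illustration of the quintile-based analysis summarized in the answer above, the sketch below splits a simulated intake variable into quintiles and computes crude relative risks against the lowest quintile. The data are simulated, and the sketch omits the multivariable adjustment (diabetes duration, medication use, other vitamins) applied in the published study.

```python
# Toy example: crude relative risks across quintiles of supplemental vitamin C intake.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 2000
intake = rng.gamma(shape=2.0, scale=150.0, size=n)        # simulated intake, mg/day
event_prob = np.clip(0.05 + 0.0001 * intake, 0.0, 1.0)    # toy model: risk rises with intake
event = rng.binomial(1, event_prob)

df = pd.DataFrame({"vitc_mg_day": intake, "cvd_death": event})
df["quintile"] = pd.qcut(df["vitc_mg_day"], q=5, labels=["Q1", "Q2", "Q3", "Q4", "Q5"])

risk_by_quintile = df.groupby("quintile", observed=True)["cvd_death"].mean()
relative_risk = risk_by_quintile / risk_by_quintile["Q1"]  # lowest quintile as reference
print(relative_risk.round(2))
```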
Instruction: Do patient and proxy agree? Abstracts: abstract_id: PUBMED:15244245 Proxy ratings of patient quality of life--factors related to patient-proxy agreement. It is generally agreed that patients are the best raters of their quality of life (QL). Where it is not possible to obtain information from the patient the use of proxies can be an alternative. The proxy can be a significant other or a healthcare provider. The use of a proxy offers a solution to non-response. The accuracy of proxy reports is most typically determined by examining the extent to which patient and proxy ratings agree. A literature overview shows that agreement depends on several factors. Methodological limitations may have an impact on agreement. Agreement also varies according to the QL domains under study; the highest agreement is usually found for concrete domains. In addition, patient and proxy characteristics are related to agreement. When designing QL studies the threat of selection bias due to exclusion of patients has to be balanced against the threat of information bias due to proxy ratings. abstract_id: PUBMED:34228242 Patient-proxy agreement on change in acute stroke patient-reported outcome measures: a prospective study. Objectives: Research has indicated proxies overestimate symptoms on patients' behalves, however it is unclear whether patients and proxies agree on meaningful change across domains over time. The objective of this study is to assess patient-proxy agreement over time, as well as agreement on identification of meaningful change, across 10 health domains in patients who underwent acute rehabilitation following stroke. Methods: Stroke patients were recruited from an ambulatory clinic or inpatient rehabilitation unit, and were included in the study if they were undergoing rehabilitation. At baseline and again after 30 days, patients and their proxies completed PROMIS Global Health and eight domain-specific PROMIS short forms. Reliability of patient-proxy assessments at baseline, follow-up, and the change in T-score was evaluated for each domain using intra-class correlation coefficients (ICC(2,1)). Agreement on meaningful improvement or worsening, defined as 5+ T-score points, was compared using percent exact agreement. Results: Forty-one patient-proxy dyads were included in the study. Proxies generally reported worse symptoms and functioning compared to patients at both baseline and follow-up, and reported less change than patients. ICCs for baseline and change were primarily poor to moderate (range: 0.06 (for depression change) to 0.67 (for physical function baseline)), and were better at follow-up (range: 0.42 (for anxiety) to 0.84 (for physical function)). Percent exact agreement between indicating meaningful improvement versus no improvement ranged from 58.5-75.6%. Only a small proportion indicated meaningful worsening. Conclusions: Patient-proxy agreement across 10 domains of health was better following completion of rehabilitation compared to baseline or change. Overall change was minimal but the majority of patient-proxy dyads agreed on meaningful change. Our study provides important insight for clinicians and researchers when interpreting change scores over time for questionnaires completed by both patients and proxies. abstract_id: PUBMED:24710798 Do patient and proxy agree? Long-term changes in multiple sclerosis physical impact and walking ability on patient-reported outcome scales. 
Background: Patient-reported outcome scales (PROs) are useful in monitoring changes in multiple sclerosis (MS) over time. Although these scales are reliable and valid measures in longitudinal studies in MS patients, it is unknown what the impact is when obtaining longitudinal data from proxies. Objective: The objective of this paper is to compare longitudinal changes in patient and proxy responses on PROs assessing physical impact of MS and walking ability. Methods: In a prospective observational study, data on the Multiple Sclerosis Impact Scale (MSIS-29 physical) and Multiple Sclerosis Walking Scale (MSWS-12) were obtained from 137 patient-proxy couples at baseline and at two-year follow-up. Demographic and disease-related variables explaining agreement or disagreement between patients and proxies were investigated using linear regression analyses. Results: Full agreement was found in 56% (MSIS) and 62% (MSWS) of the patient-proxy couples. Complete disagreement was very rare for both scales (2% MSIS, 5% MSWS). When patients were more positive than proxies, a higher age, longer disease duration, longer patient-proxy relationship and increased levels of depression, anxiety and caregiver burden in proxies were observed. Conclusion: In the majority of the patient-proxy couples there was agreement. Proxies can serve as a valuable source of information, but caution remains essential when using scores from proxies. abstract_id: PUBMED:24018128 Analyzing differences between patient and proxy on Patient Reported Outcomes in multiple sclerosis. Background: Proxy respondents, partners of multiple sclerosis (MS) patients, can provide valuable information on the MS patients' disease. In an earlier publication we found relatively good agreement on patient reported outcomes (PROs) measuring physical impact and functioning, but we found large differences on (neuro)psychological scales. Objective: We aim to identify patient and proxy related variables explaining differences between patients' and proxies' ratings on five PROs. Methods: We report on data from 175 MS patients and proxy respondents. Regression analyses were performed, using as dependent variable the mean differences on five scales: Physical and Psychological scale of the Multiple Sclerosis Impact Scale (MSIS-29), the Multiple Sclerosis Walking Scale (MSWS), Guy's Neurological Disability Scale (GNDS) and the Multiple Sclerosis Neuropsychological Screening Questionnaire (MSNQ). The independent variables were patient, proxy and disease related variables. Results: Caregiver strain was significantly related to differences between patient and proxy scores for all five PROs. A higher level of patient anxiety on the HADS was linked to larger differences on all PROs except the GNDS. In addition, cognitive functioning, proxy depression, walking ability, proxy gender and MS related disability were contributing to the discrepancies. Conclusion: We found several patient and proxy factors that may contribute to discrepancies between patient and proxy scores on MS PROs. The most important factor is caregiver burden. abstract_id: PUBMED:7885511 Munchausen syndrome by proxy The term "Munchausen by proxy" denotes a special form of factitious disorder and a rare kind of child abuse. We present the case of a 12-year-old boy, whose mother's manipulations culminated in his immobilization in a wheelchair and the implantation of a pacemaker. Problems of diagnosis and clinical management of Munchausen by proxy are discussed. 
abstract_id: PUBMED:31971889 Frequency of and Factors Associated with Care Partner Proxy Interaction with Health Care Teams Using Patient Portal Accounts. Introduction: Proxies can communicate with health care teams through patient portals either by using proxy login credentials or a patient's login credentials. The frequency of proxies using patient login credentials is unknown. Methods: A random selection of 3,000 portal messages sent in through adult patients' own portal account was reviewed for indicators (referring to the patient in the third person) that someone other than the patient was using the patient portal account. Results: Of the reviewed 3,000 portal messages sent through patient portal accounts, 221 (7.4%) appeared to be sent in by a proxy, 2,512 (83.8%) appeared to have been sent in by the patient and for 266 (8.9%) portal messages reviewed it was unclear who sent in the message. There was no difference in mean age between patients who had proxy messages sent through patient portal accounts versus proxy portal accounts. Patients who had proxies send messages through patient accounts were more likely to be married and male. Out of 221 manually reviewed messages apparently sent by proxies through patient portal accounts there were 113 (51%) where the proxy included their name and 56 (25.3%) where they reported their relationship to the patient. During the study period, 0.7% of total messages on adult patients were sent through proxy accounts. Discussion: Proxies appear to use patient portal accounts much more frequently than proxy accounts to communicate with the health care team on adult patients; however, when using patient accounts they only identify themselves approximately half of the time. abstract_id: PUBMED:29304818 Proxy and patient reports of health-related quality of life in a national cancer survey. Background: Proxy respondents are frequently used in surveys, including those assessing health-related quality of life (HRQOL). In cancer, most research involving proxies has been undertaken with paired proxy-patient populations, where proxy responses are compared to patient responses for the same individual. In these populations, proxy-patient differences are small and suggest proxy underestimation of patient HRQOL. In practice, however, proxy responses will only be used when patient responses are not available. The difference between proxy and patient reports of patient HRQOL where patients are not able to report for themselves in cancer is not known. The objective of this study was to evaluate the difference between patient and proxy reports of patient HRQOL in a large national cancer survey, and determine if this difference could be mitigated by adjusting for clinical and sociodemographic information about patients. Methods: Data were from the Cancer Care Outcomes Research and Surveillance (CanCORS) study. Patients or their proxies were recruited within 3-6 months of diagnosis with lung or colorectal cancer. HRQOL was measured using the SF-12 mental and physical composite scales. Differences of ½ SD (=5 points) were considered clinically significant. The primary independent variable was proxy status. Linear regression models were used to adjust for patient sociodemographic and clinical covariates, including cancer stage, patient age and education, and patient co-morbidities. Results: Of 6471 respondents, 1011 (16%) were proxies. 
Before adjustment, average proxy-reported scores were lower for both physical (-6.7 points, 95% CI -7.4 to -5.9) and mental (-6 points, 95% CI -6.7 to -5.2) health. Proxy-reported scores remained lower after adjustment (physical: -5.8 points, -6.6 to -5; mental: -5.8 points, -6.6 to -5). Proxy-patient score differences remained clinically and statistically significant, even after adjustment for sociodemographic and clinical variables. Conclusions: Proxy-reported outcome scores for both physical and mental health were clinically and significantly lower than patient-reported scores for these outcomes. The size of the proxy-patient score differences was not affected by the health domain, and adjustment for sociodemographic and clinical variables had minimal impact. abstract_id: PUBMED:35982202 Neurocognitive impairment and patient-proxy agreement on health-related quality of life evaluations in recurrent high-grade glioma patients. Purpose: The rate of missing data on patient-reported health-related quality of life (HRQOL) in brain tumor clinical trials is particularly high over time. One solution to this issue is the use of proxy (i.e., partner, relative, informal caregiver) ratings in lieu of patient-reported outcomes (PROs). In this study we investigated patient-proxy agreement on HRQOL outcomes in high-grade glioma (HGG) patients. Methods: Generic and disease-specific HRQOL were assessed using the EORTC QLQ-C30 and QLQ-BN20 in a sample of 501 patient-proxy dyads participating in EORTC trials 26101 and 26091. Patients were classified as impaired or intact, based on their neurocognitive performance. The level of patient-proxy agreement was measured using Lin's concordance correlation coefficient (CCC) and the Bland-Altman limit of agreement. The Wilcoxon signed-rank test was used to evaluate differences between patients' and proxies' HRQOL. Results: Patient-proxy agreement in all HGG patients (N = 501) ranged from 0.082 to 0.460. Only 18.8% of all patients were neurocognitively intact. Lin's CCC ranged from 0.088 to 0.455 in cognitively impaired patients and their proxies and from 0.027 to 0.538 in cognitively intact patients and their proxies. Conclusion: While patient-proxy agreement on health-related quality of life outcomes is somewhat higher in cognitively intact patients, agreement in high-grade glioma patients is low in general. In light of these findings, we suggest cautiously considering the use of proxy evaluations in lieu of patient-reported outcomes, regardless of the patient's neurocognitive status. abstract_id: PUBMED:16301231 The safety of patient-controlled analgesia by proxy in pediatric oncology patients. Between February 1999 and December 2003, we studied the use of patient-controlled analgesia (PCA) to control pain in 1011 children and young adults with cancer, by evaluating 4972 24-h periods of PCA usage, 576 of which involved PCA by proxy. Selection of patients for PCA by proxy was based on younger age group, neuromuscular limitation, expectation of repeated painful procedures, and terminal disease. We measured the incidence of respiratory and neurological complications related to the use of PCA. Major complications were observed during 70 of the 4972 24-h observations, with 28 of 4972, or 0.56%, involving respiratory complications, 35 of 4972, or 0.7%, involving neurological complications, and 7 of 4972 24-h observations, or 0.14%, involving both respiratory and neurological complications.
In the PCA by proxy group two respiratory complications, two neurological complications and one mixed respiratory and neurological complication were observed [corrected] Reversal of opioid-related respiratory or neurological effects with naloxone was required in three instances, two in the standard PCA group and one in the PCA by proxy group. Recommendations to ensure continuing safety include careful patient selection, education of proxy users, appropriate documentation, and institutional guidelines. abstract_id: PUBMED:30389654 Insights Into Older Adult Patient Concerns Around the Caregiver Proxy Portal Use: Qualitative Interview Study. Background: Electronic patient portals have become common and offer many potential benefits for patients' self-management of health care. These benefits could be especially important for older adult patients dealing with significant chronic illness, many of whom have caregivers, such as a spouse, adult child, or other family member or friend, who help with health care management. Patient portals commonly contain large amounts of personal information, including diagnoses, health histories, medications, specialist appointments, lab results, and billing and insurance information. Some health care systems provide proxy accounts for caregivers to access a portal on behalf of a patient. It is not well known how much and in what way caregivers are using patient portals on behalf of patients and whether patients see any information disclosure risks associated with such access. Objective: The objective of this study was to examine how older adult patients perceive the benefits and risks of proxy patient portal access by their caregivers. Methods: We conducted semistructured interviews with 10 older adult patients with chronic illness. We asked them about their relationship with their caregivers, their use of their patient portal, their caregiver's use of the portal, and their perceptions about the benefits and risks of their caregiver's use of the portals. We also asked them about their comfort level with caregivers having access to information about a hypothetical diagnosis of a stigmatized condition. Two investigators conducted a thematic analysis of the qualitative data. Results: All patients identified caregivers. Some had given caregivers access to their portals, in all cases by sharing log-in credentials, rather than by setting up an official proxy account. Patients generally saw benefits in their caregivers having access to the information and functions provided by the portal. Patients generally reported that they would be uncomfortable with caregivers learning of stigmatized conditions and also with caregivers (except spouses) accessing financial billing information. Conclusions: Patients share their electronic patient portal credentials with caregivers to receive the benefits of those caregivers having access to important medical information but are unaware of all the information those caregivers can access. Better portal design could alleviate these unwanted information disclosures. Answer: The agreement between patient and proxy ratings of patient quality of life (QL) and health-related outcomes varies across different studies and contexts. In general, the literature suggests that agreement between patient and proxy ratings depends on several factors, including the QL domains under study, with higher agreement usually found for more concrete domains. 
Patient and proxy characteristics are also related to agreement, and methodological limitations may impact the level of agreement (PUBMED:15244245). In a study on acute stroke patients, proxies generally reported worse symptoms and functioning compared to patients at both baseline and follow-up, and reported less change than patients. The intra-class correlation coefficients (ICCs) for baseline and change were primarily poor to moderate, and agreement on meaningful improvement or worsening was variable, with percent exact agreement ranging from 58.5-75.6% (PUBMED:34228242). In a study on multiple sclerosis (MS), there was full agreement between patient and proxy responses on physical impact and walking ability in 56% (MSIS) and 62% (MSWS) of the patient-proxy couples. Complete disagreement was rare. Factors such as higher age, longer disease duration, longer patient-proxy relationship, and increased levels of depression, anxiety, and caregiver burden in proxies were associated with instances where patients were more positive than proxies (PUBMED:24710798). Another study on MS found that caregiver strain was significantly related to differences between patient and proxy scores for all patient-reported outcomes (PROs) under study. Other factors contributing to discrepancies included patient anxiety, cognitive functioning, proxy depression, walking ability, proxy gender, and MS-related disability (PUBMED:24018128). In the context of cancer, proxy-reported outcome scores for both physical and mental health were clinically and significantly lower than patient-reported scores, and the size of the proxy-patient score differences was not affected by the health domain or adjustment for sociodemographic and clinical variables (PUBMED:29304818). For high-grade glioma patients, patient-proxy agreement on health-related quality of life outcomes was low in general, and somewhat higher in cognitively intact patients, suggesting caution when using proxy evaluations in lieu of patient-reported outcomes (PUBMED:35982202). In summary, while there are instances of agreement between patient and proxy ratings, the level of agreement is influenced by various factors, and discrepancies are not uncommon.
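The agreement statistics referenced in this answer can be made concrete with a small sketch: ICC(2,1) in the Shrout-Fleiss sense (two-way random effects, absolute agreement, single measurement) for patient-proxy score agreement, and percent exact agreement on "meaningful improvement" defined as a change of 5 or more T-score points. The 41 dyads below are simulated rather than taken from the cited stroke study, and the sketch assumes higher scores mean improvement.

```python
# Illustrative agreement statistics on simulated patient-proxy change scores.
import numpy as np

def icc_2_1(scores: np.ndarray) -> float:
    """ICC(2,1): scores is an (n_subjects, n_raters) matrix, e.g. patient and proxy columns."""
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)          # per-subject means
    col_means = scores.mean(axis=0)          # per-rater means
    ss_total = ((scores - grand) ** 2).sum()
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_error = ss_total - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))
    return (ms_rows - ms_error) / (ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n)

rng = np.random.default_rng(1)
true_change = rng.normal(3, 6, size=41)                  # simulated "true" T-score change
patient = true_change + rng.normal(0, 2, size=41)
proxy = true_change - 2 + rng.normal(0, 3, size=41)      # proxies rate change slightly worse

print(f"ICC(2,1) on change scores: {icc_2_1(np.column_stack([patient, proxy])):.2f}")

improved_patient = patient >= 5                          # 5+ T-score points = meaningful improvement
improved_proxy = proxy >= 5
print(f"Percent exact agreement: {(improved_patient == improved_proxy).mean():.1%}")
```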
Instruction: Can sleep microstructure improve diagnosis of OSAS? Abstracts: abstract_id: PUBMED:28853424 The clinical diagnosis of Obstructive Sleep Apnea Syndrome (OSAS). Obstructive Sleep Apnea Syndrome (OSAS) is the most frequent sleep breathing disorder in the general population. To reach a correct diagnosis, the clinical work-up requires the association of comprehensive clinical evaluation (anamnesis, physical examination) and nocturnal polysomnography. Polysomnographic recordings may differ by number of bio-parameters recorded and setting (in laboratory or at home), and allow the identification of other sleep disorders in addition to the diagnosis of OSAS. Excessive daytime sleepiness (EDS) is the most frequent daytime complaint of OSAS patients. Its evaluation is fundamental in subjects with suspected OSAS and concomitant high risk of sleep-related accidents due to work-related factors (e.g. professional drivers). To test EDS, physicians may use subjective (questionnaires) and/or objective (polysomnografic or performance tests) measures. Objective tests are more advisable, but to date they are time consuming and expensive. Objective tests less time-consuming and easily applicable to clinic practice are being evaluated. abstract_id: PUBMED:28853428 Obstructive Sleep Apnea Syndrome (OSAS): evaluation and treatment of odontostomatological problems. The Obstructive Sleep Apnea Syndrome (OSAS) is a breathing disorder characterized by repeated episodes of apnea/hypopnea occurring during sleep and caused by the obstruction of the upper airway. As OSAS can have a significant impact on quality of life, an early diagnosis and a multi-specialized recognition become important. Therapeutic proposals include the Mandibular Advancement Devices (MADs), intraoral devices that induce a mandibular advancement during the night to eliminate snoring and apnea. Italian guidelines on dental aspects of OSAS in adults and children focus primarily on anamnestic and clinical methods by which the dentist, in a role of epidemiological sentinel, can first identify a patient with OSAS and then participate in the therapeutic process using specific orthodontic intraoral devices. abstract_id: PUBMED:28853427 Ear nose and throat (ENT) aspects of Obstructive Sleep Apnea Syndrome (OSAS) diagnosis and therapy. This article analyzes the role played by the ear, nose and throat (ENT) specialist in the diagnosis and treatment of the Obstructive Sleep Apnea Syndrome (OSAS). Diagnosis: The instrumental methods of investigation of otolaryngology expertise are fiberoptic rhinolaryngoscopy and sleep endoscopy. To better define and treat OSAS it is essential to define location, extension and degree of obstruction. The major limitation of rhinolaryngoscopy is that it is conducted while the patient is awake. Sleep endoscopy was introduced to address this issue: it is an endoscopic examination conducted during pharmacologically induced sleep (midazolam, propofol). Surgical treatment: The applicability of surgical treatment is limited: it is not efficient for every patient and not to the same extent. Surgical therapy is based on the detection and correction of the site of obstruction, although in most cases the obstruction sites are located at different levels. The surgical approach usually follows a path characterized by "subsequent steps". 
According to Stanford University Powell-Riley Protocol the first phase includes: nasal surgery, palate surgery (UPPP, LAUP Laser Assisted uvulopalatoplasty, UvuloPalatalFlap UPF) and tongue base surgery (tongue suspension, genioglossus advancement, hyoid suspension) that could either be executed individually or associated. Four-six months after surgery a physical and polysomnography examination should be performed. In case of symptoms persistence, the surgeon could move on to phase II surgery as recovery therapy. The phase II surgery includes: maxillo-mandibular advancement, tongue base resection and tracheotomy. In selected cases a Phase II surgery could be planned at the beginning of treatment. abstract_id: PUBMED:28891505 Sleep microstructure dynamics and neurocognitive performance in obstructive sleep apnea syndrome patients. The present study examined the relationship between the increment in cyclic alternating patterns (CAPs) in sleep electroencephalography and neurocognitive decline in obstructive Sleep Apnea Syndrome (OSAS) patients through source localization of the phase-A of CAPs. All-night polysomnographic recordings of 10 OSAS patients and 4 control subjects along with their cognitive profile using the Addenbrooke's Cognitive Examination (ACE) test were acquired. The neuropsychological assessment involved five key domains including attention and orientation, verbal fluency, memory, language and visuo-spatial skills. The standardized low-resolution brain electromagnetic tomography (sLORETA) tool was used to source-localize the phase-A of CAPs in sleep EEG aiming to investigate the correlation between CAP phase-A and cognitive functions. Our findings suggested a significant increase in CAP rates among OSAS subjects versus control subjects. Moreover, sLORETA revealed that CAP phase-A is mostly activated in frontoparietal cortices. As CAP rate increases, the activity of phase-A in such areas is dramatically enhanced leading to arousal instability, lower sleep efficiency and a possibly impaired cortical capacity to consolidate cognitive inputs in frontal and parietal areas during sleep. As such, cognitive domains including verbal fluency, memory and visuo-spatial skills which predominantly relate to frontoparietal areas tend to be affected. Based on our findings, CAP activity may possibly be considered as a predictor of cognitive decline among OSAS patients. abstract_id: PUBMED:26742673 Can sleep microstructure improve diagnosis of OSAS? Integrative information from CAP parameters. Purpose: The scoring of American Academy of Sleep Medicine (AASM) arousal is mandatory for the definition of respiratory event-related arousal (RERA). However there are other EEG activation phenomena, such as A phases of cyclic alternating pattern (CAP) which are associated with respiratory events in non rapid eye movements (NREM) sleep. This study aims at quantifying the additional value of CAP for the definition of respiratory events and sleep alterations in OSAS. Methods: Analysis of polysomnographic recordings from nineteen OSAS patients was carried out. Scoring was focused on investigation of the cerebral response to flow limitation (FL) events. For this purpose we used both CAP rules and AASM arousal criteria. Main Results: While no difference was demonstrated in the arousal index between mild and moderate-severe OSAS patients, CAP time showed a progressive enhancement from normal subjects (152.5±20.76) to mild (180.64±34.76) and moderate-severe (282.27±58.02) OSAS patients. 
In NREM sleep, only 41.1% of FL events met the criteria for the definition of RERA, while 75.5% of FL events ended with a CAP A phase and most FL CAP (69.1%) terminated with a CAP phase A3 subtype. Conclusions: Our data indicate that the RERA scoring has a limited accuracy in the detection of FL events. In NREM sleep, CAP rules provided more information than AASM arousal for the definition of respiratory events and sleep alterations in OSAS. abstract_id: PUBMED:28838754 Sleep respiratory problems in children: Diagnosis and contribution of the orthodontist. Obstructive sleep apnea/hypopnea syndrome (OSAS) is still not widely diagnosed in children as its clinical signs are extremely variable. The diagnosis is based on recognition of the nocturnal and diurnal clinical symptoms of the syndrome, but above all on analysis of a sleep recording revealing abnormal respiratory phenomena. This recording makes it possible to determine an Apnea/Hypopnea Index (AHI) for each hour of sleep. The abnormality threshold is 1.5 AHI/h for children and 5 AHI/h for adults. The higher the index, the more severe the OSAS. The consequences of this syndrome are far from negligible, leading possibly to learning difficulties, mood problems, growth abnormalities and delayed neurocognitive development; it may even have an impact on metabolism. The orthodontist, as a health specialist frequently in contact with young patients, needs to be able to detect OSAS by launching a dialogue on the question of sleep so as to refer the patient to a specialist who can confirm the diagnosis. Through observation of the patient, it is possible to identify children at risk. Potential OSAS can then be prevented or cured by increasing the volume of the upper airways thanks to orthodontic treatment. Management of patients suffering from OSAS is multidisciplinary, under the direction of the sleep specialist.
Introduction: Obstructive sleep apnea syndrome (OSAS) is a common clinical disease that seriously affects the quality of life and health of patients. This study aimed to explore the correlation between OSAS and cognitive dysfunction in elderly patients with hypertension. Methods: A total of 106 elderly hypertensive patients were included and divided into OSAS group (n = 45) and non-OSAS group (n = 61), according to whether they combined with OSAS. OSAS was monitored with a portable polysomnography monitor by monitoring sleep and breathing. The cognitive dysfunction of the patients was evaluated using the Montreal Cognitive Assessment (MoCA). Basic data and MoCA scores of the patients were compared between the OSAS group and non-OSAS group. The correlation between OSAS and cognitive dysfunction in patients was evaluated using Pearson's correlation analysis. Results: The proportion of men (38 vs. 22, p = 0.000), atrial fibrillation (31 vs. 19, p = 0.000), body mass index (27.32 ± 3.85 vs. 21.27 ± 5.90, p = 0.002), systolic pressure (167.76 ± 14.31 vs. 153.22 ± 12.79, p = 0.008), homocysteine (29.71 ± 6.27 vs. 12.50 ± 4.19, p = 0.005), cognitive dysfunction (15 vs. 10, p = 0.042) in patients of the OSAS group were significantly higher compared to patients of the non-OSAS group. Visual space/executive ability (3.12 ± 1.23 vs. 4.75 ± 1.03, p = 0.021), memory (2.48 ± 0.31 vs. 3.71 ± 0.42, p = 0.039), attention (4.15 ± 1.21 vs. 5.12 ± 1.87, p = 0.041), total MoCA scores (20.11 ± 5.09 vs. 25.76 ± 4.31, p = 0.017) in patients in the OSAS group were significantly lower compared to patients in the non-OSAS group. OSAS was positively correlated with cognitive dysfunction in elderly patients with hypertension (r = 0.224, p < 0.05). Conclusions: OSAS was positively correlated with cognitive dysfunction in elderly patients with hypertension, so OSAS could increase the risk of cognitive dysfunction through its own adverse effects or its accompanying disease status. abstract_id: PUBMED:34365608 Accuracy of the sleep clinical record for the diagnosis of pediatric moderate-to-severe obstructive sleep apnea syndrome. Purpose: The sleep clinical record (SCR) has been used to diagnose obstructive sleep apnea syndrome (OSAS) in children when access to polysomnography (PSG) is limited. Our aim was to determine the best SCR score that could facilitate diagnosis of moderate-to-severe OSAS in children with snoring. Methods: Healthy children with history of snoring, who were referred for PSG, were prospectively recruited. The SCR score was calculated. Receiver operating characteristic curves (ROCs) were plotted to determine the area under curve (AUC), and the optimum SCR cutoff value was determined using the Youden index (J). Results: Two hundred and seventy-three children were recruited (mean age 6.3 ± 2.5 years; median obstructive apnea-hypopnea index 1.5 episodes/h; range 0-61.1). The mean SCR score was 6.9 ± 3.6. Forty-six children had moderate-to-severe OSAS. Subjects with moderate-to-severe OSAS had a significantly higher mean SCR score (10.2 ± 2.9) than those with mild OSAS (6.2 ± 3.3; P < 0.001). Based on the plotted ROC, the AUC was 0.811 (95% confidence interval: 0.747-0.876; P < 0.001). Calculation of J, based on its ROC coordinates, indicated that the optimum cutoff SCR score to predict moderate-to-severe OSAS was 8.25, corresponding to a sensitivity of 83% and a specificity of 70%.
Conclusion: Among children with history of snoring, an SCR score above 8.25 can identify those with moderate-to-severe OSAS. abstract_id: PUBMED:38397260 Analysis of General Knowledge on Obstructive Sleep Apnea Syndrome (OSAS) among Italian Pediatricians. Introduction: Obstructive sleep apnea syndrome (OSAS) is a disorder characterized by partial or total airway obstruction during sleep. Studies have shown variability in the level of knowledge and awareness about OSAS among pediatricians. The management of childhood obstructive sleep apnea syndrome (OSAS) depends on the severity of the disease, the presence of comorbidities, and the child's age. The American Pediatric Academy recommends a multidisciplinary approach involving a pediatrician, a sleep specialist, and an otolaryngologist to provide comprehensive care for children with OSAS. The aim of this cross-sectional study is to evaluate the level of knowledge among pediatricians in Italy regarding the diagnosis of pediatric OSAS. Material And Methods: An anonymized survey was conducted among Italian pediatricians. The survey was administered electronically using Google Forms, and a total of 350 pediatricians were invited to participate. Out of the 350 invitations, 299 pediatricians responded to the survey. The statistical analysis performed consisted of descriptive analysis. The study included 297 pediatricians. Results: Pediatricians demonstrated proficiency in identifying common nocturnal and day symptoms of OSAS. A majority (68.9%) considered the oral and otorhinolaryngologist areas during checkups. Approximately 70.6% took patient weight into account, and 62.8% were aware of the regional diagnostic-therapeutic-assistance pathway. Conclusions: According to the results of this manuscript, there is evidence of a good level of knowledge about OSAS, but disseminating more information about OSAS and all the health issues associated with this syndrome is suggested. This study also has limitations caused by the complexity of the pathology. Answer: Yes, sleep microstructure can improve the diagnosis of Obstructive Sleep Apnea Syndrome (OSAS). The analysis of cyclic alternating patterns (CAPs) in sleep electroencephalography (EEG) has been shown to provide additional value for the definition of respiratory events and sleep alterations in OSAS. Specifically, the study by PUBMED:26742673 found that while the arousal index did not differ significantly between mild and moderate-severe OSAS patients, CAP time showed a progressive enhancement from normal subjects to mild and moderate-severe OSAS patients. In non-rapid eye movement (NREM) sleep, only 41.1% of flow limitation (FL) events met the criteria for the definition of respiratory event-related arousal (RERA), whereas 75.5% of FL events ended with a CAP A phase, with most FL CAP (69.1%) terminating with a CAP phase A3 subtype. This indicates that CAP rules provided more information than the American Academy of Sleep Medicine (AASM) arousal criteria for the definition of respiratory events and sleep alterations in OSAS. Furthermore, the study by PUBMED:28891505 examined the relationship between the increment in CAPs and neurocognitive decline in OSAS patients. The findings suggested a significant increase in CAP rates among OSAS subjects compared to control subjects. 
The activation of CAP phase-A in frontoparietal cortices was associated with arousal instability, lower sleep efficiency, and possibly impaired cortical capacity to consolidate cognitive inputs during sleep, affecting cognitive domains such as verbal fluency, memory, and visuo-spatial skills. In conclusion, the analysis of sleep microstructure, particularly CAPs, can provide valuable information for the diagnosis of OSAS, offering insights into the cerebral response to respiratory events and the potential impact on cognitive functions.
Instruction: Does the estradiol level on the day of human chorionic gonadotrophin administration have an impact on pregnancy rates in patients treated with rec-FSH/GnRH antagonist? Abstracts: abstract_id: PUBMED:11006188 Addition of GnRH antagonist in cycles of poor responders undergoing IVF. Concern about the use of gonadotrophin-releasing hormone (GnRH) agonists in ovarian stimulation of poor responder IVF patients has arisen from the claim that GnRH agonists might have a direct deleterious effect through their receptors on the ovary. In this study, we compared two ovarian stimulation protocols in which no GnRH agonists were used. In all, 40 patients with a poor response in previous treatment cycles were included. They were divided into two groups: group I (n = 20) received ovarian stimulation for 20 cycles, without the addition of either GnRH agonist or antagonist; while group II (n = 20) patients received ovarian stimulation for 20 cycles, including the administration of a GnRH antagonist (Cetrorelix, 0.25 mg daily) during the late follicular phase. There was no statistically significant difference between the groups for mean age, duration of infertility, baseline FSH concentration, cancellation rate, number of ampoules of gonadotrophin used, number of mature oocytes retrieved, oestradiol concentrations on the day of injection of human chorionic gonadotrophin (HCG), fertilization rate and number of embryos transferred. The clinical pregnancy and implantation rates in group II appeared higher than in group I, but were not significantly different (20 and 13.33% compared with 6.25 and 3.44% respectively). The addition of GnRH antagonists to ovarian stimulation protocols might be a new hope for poor responder IVF patients, but this report is preliminary and further controlled randomized prospective studies with larger sample sizes are required. abstract_id: PUBMED:11679500 Comparison of luteal phase profile in gonadotrophin stimulated cycles with or without a gonadotrophin-releasing hormone antagonist. Background: The aim of our study was to explore luteal phase hormone profiles in gonadotrophin-stimulated cycles with or without gonadotrophin-releasing hormone (GnRH) antagonist therapy during intrauterine insemination (IUI). Forty-one infertile couples were recruited in this randomized clinical study. Methods: The 19 patients included in group A were treated for 21 cycles with recombinant FSH 150 IU/day starting from day 3 of the cycle and with the GnRH antagonist cetrorelix at the dose of 0.25 mg/day starting from the day in which a follicle with a mean diameter of ≥14 mm was seen at ultrasound scan. Cetrorelix was administered until human chorionic gonadotrophin (HCG) administration. The 22 patients included in group B were administered recombinant FSH alone at the same dosage for 27 cycles. Results: The two treatment groups showed a similar increase in progesterone concentration during the luteal phase. In the mid-luteal phase (day 6 after HCG), oestradiol concentrations in group B were significantly higher compared with group A (P < 0.05) but the oestradiol:progesterone ratio was similar in the two groups. Serum LH was completely suppressed during the follicular phase only in group A, concomitantly with GnRH antagonist administration. A total of six pregnancies, all ongoing, were achieved (14.3% per patient and 12.2% per cycle), equally distributed in group A and in group B.
Conclusion: GnRH antagonists can be safely administered in gonadotrophin-stimulated IUI cycles without luteal phase supplementation because no deleterious effects of GnRH antagonist administration were noted on luteal progesterone concentration or on the duration of the luteal phase. abstract_id: PUBMED:19909587 Use of aromatase inhibitors in poor-responder patients receiving GnRH antagonist protocols. The efficacy of aromatase inhibitors incorporated in the ovarian stimulation protocols of poor-responder patients undergoing intracytoplasmic sperm injection-embryo transfer cycles was investigated. A total of 70 poor-responder patients were randomized into two groups on day 3 of their menstrual cycle. In Group A, an aromatase inhibitor (letrozole, 5 mg/day) was administered along with a fixed dosage (450 IU/day) of recombinant FSH (rFSH), whereas Group B were treated with the same rFSH dosage alone. A flexible regimen of gonadotrophin-releasing hormone antagonist was administered in both groups. The mean total dose of rFSH (2980 ± 435 IU versus 3850 ± 580 IU, P < 0.05) and serum concentrations of oestradiol on the day of human chorionic gonadotrophin administration (1870 ± 159 pg/ml versus 2015 ± 175 pg/ml, P < 0.05) were significantly lower in Group A compared with Group B, respectively. The rate of cycle cancellation due to poor ovarian response was lower in Group A (8.6%) than in Group B (28.6%), (P < 0.05). The costs of achieving a clinical pregnancy were US$11560 and US$17584, and the clinical pregnancy rates per embryo transfer were 25.8% and 20%, in groups A and B, respectively. In conclusion, adjunctive letrozole administration seems to restore an IVF cycle by decreasing the rate of cycle cancellation and seems to reduce the cost by reducing the total gonadotrophin dosage. abstract_id: PUBMED:17174489 The effects of GnRH analogue (buserelin) or hCG (Chorulon) on Day 12 of pregnancy on ovarian function, plasma hormone concentrations, conceptus growth and placentation in ewes and ewe lambs. The objectives of this study were to determine the effect of GnRH analogue (buserelin) or human chorionic gonadotrophin (hCG, Chorulon) treatment on Day 12 of pregnancy on ovarian function, plasma hormone concentrations, conceptus growth and placentation in ewes and ewe lambs. After oestrus synchronization with progestagen sponges and eCG, all the animals were mated with fertile rams. Both ewes and ewe lambs (20 per treatment group) were given either normal saline or 4 μg GnRH or 200 IU hCG on Day 12 post-mating. Pre- and post-treatment plasma hormone concentrations were determined in seven pregnant animals per treatment group in samples collected 1 h before and 0, 2, 4, 6, 8, 24, 48 and 72 h after treatment. Overall mean progesterone concentrations were higher (P<0.001) in ewes as compared with ewe lambs in saline-treated controls. GnRH or hCG treatment increased (P<0.001) mean plasma progesterone concentrations in both age groups, however, post-treatment concentrations were significantly (P<0.05) higher in ewes than in ewe lambs. Oestradiol concentrations were similar in the two control groups. In ewes, but not in ewe lambs, both GnRH and hCG treatments significantly (P<0.05) increased the mean oestradiol concentrations above pre-treatment levels. Moreover, post-treatment oestradiol concentrations in GnRH- and hCG-treated animals were significantly (P<0.05) higher than those in the saline-treated controls.
LH release in response to GnRH treatment was greater (P<0.05) in ewes than in ewe lambs, whereas FSH release in ewes was less (P<0.05) than that of ewe lambs. The effects of GnRH or hCG on conceptus growth and placentation were determined at slaughter on Day 25. In ewes, GnRH treatment increased (P<0.05) luteal weight, amniotic sac width and length, and crown-rump length compared with controls, but had no effect on these parameters in ewe lambs. In ewes, hCG treatment also enhanced (P<0.05) luteal weight, amniotic sac width and length, crown-rump length, embryo weight and number of placentomes as compared with controls. In ewe lambs, there was no difference (P<0.05) between hCG and control groups in luteal weight, embryo weight and amniotic sac width but crown-rump length, amniotic sac length and the number of placentomes forming the placenta were greater (P<0.05). In conclusion, GnRH or hCG treatment on Day 12 of pregnancy can increase ovarian function, conceptus growth and placental attachment in ewes. However, these treatments were less effective in ewe lambs. abstract_id: PUBMED:17207326 Recombinant LH supplementation to recombinant FSH during induced ovarian stimulation in the GnRH-antagonist protocol: a meta-analysis. This study aims to compare the efficacy of recombinant LH (rLH) supplementation for ovarian stimulation in gonadotrophin-releasing hormone-antagonist protocol for IVF/intracytoplasmic sperm injection cycles. Search strategies included online surveys of databases. The fixed effects model was used for odds ratio (OR) and effect size (weighted mean difference, WMD). Five trials fulfilled the inclusion criteria. When the meta-analysis was carried out, advantages were observed for the LH supplementation protocol with respect to higher serum oestradiol concentrations on the day of human chorionic gonadotrophin administration (P < 0.0001; WMD: 514, 95% CI 368, 660) and higher number of mature oocytes (P = 0.0098; WMD: 0.88, 95% CI 0.21, 1.54). However, these differences were not observed in the total amount of recombinant FSH (rFSH) administered, days of stimulation, number of oocytes retrieved, the clinical pregnancy rate per oocyte retrieval, the implantation rate and miscarriage rate. This result demonstrates that the association of rLH with rFSH may prevent any decrease in oestradiol after antagonist administration and that a significantly higher number of mature oocytes was available for laboratory work. Nevertheless, it failed to show any statistically significant difference in clinically significant end-points in IVF (implantation and pregnancy rates). Additional randomized controlled trials are needed to confirm these results further. abstract_id: PUBMED:2971082 Ovarian hyperstimulation for in-vitro fertilization controlled by GnRH agonist administered in combination with human menopausal gonadotrophins. Surges of luteinizing hormone (LH) in serum that result in luteinization, but occur prematurely with respect to the diameter of the leading follicle, frustrate attempts to induce multiple follicular maturation for in-vitro fertilization (IVF) in a number of women. We examined the possibility of blocking premature LH surges by the administration of D-TRP6-LH-RH, a potent agonistic analogue of gonadotrophin-releasing hormone (GnRH). Six patients who had repeatedly shown premature LH surges were treated for 10 days, beginning between days 1 and 3 of the cycle with daily s.c.
injections of 500 micrograms D-Trp6-LH-RH followed by a daily injection of 100 micrograms of the analogue until the day of administration of human chorionic gonadotrophin (HCG). When pituitary and ovarian suppression had occurred, ovarian stimulation with human menopausal gonadotrophin was started and adjusted in dose according to the ovarian response. HCG was injected when the dominant follicle had reached a diameter of at least 18 mm and oestradiol levels were above 300 pg for each follicle greater than 15 mm. Oocyte collection was performed 36 h later via laparoscopy, followed by IVF and embryo transfer. The six patients studied to date responded to therapy and treatment could be completed up to embryo transfer. Two patients became pregnant; one of the pregnancies, however, resulted in abortion. Combined treatment with GnRH analogue for suppression of pituitary gonadotrophin secretion followed by the administration of gonadotrophins thus seems to be a promising method for ovarian stimulation in patients who frequently exhibit premature LH discharges and therefore fail to complete treatment. abstract_id: PUBMED:10221695 Revival of the natural cycles in in-vitro fertilization with the use of a new gonadotrophin-releasing hormone antagonist (Cetrorelix): a pilot study with minimal stimulation. Natural cycles were abandoned in in-vitro fertilization (IVF) embryo transfer, due to premature luteinizing hormone (LH) surges, and subsequent high cancellation rates. In this study, we investigated the administration of a new gonadotrophin-releasing hormone antagonist (Cetrorelix) in the late follicular phase of natural cycles in patients undergoing IVF and intracytoplasmic sperm injection (ICSI). A total of 44 cycles from 33 healthy women [mean age 34.1 ± 1.4 (range 26-36) years] were monitored, starting on day 8 by daily ultrasound and measurement of serum concentrations of oestradiol, LH, follicle stimulating hormone (FSH) and progesterone. When plasma oestradiol concentrations reached 100-150 pg/ml, with a lead follicle between 12-14 mm diameter, a single injection (s.c.) of 0.5 mg (19 cycles) or 1 mg (25 cycles) Cetrorelix was administered. Human menopausal gonadotrophin (HMG; 150 IU) was administered daily at the time of the first injection of Cetrorelix, and repeated thereafter until human chorionic gonadotrophin (HCG) administration. Four out of 44 cycles were cancelled (9.0%). No decline in follicular growth or oestradiol secretion was observed after Cetrorelix administration. A total of 40 oocyte retrievals leading to 22 transfers (55%) was performed. In 10 cycles (25%), no oocyte was obtained. Fertilization failure despite ICSI occurred in six cycles (15%). In two patients the embryo was arrested at the 2 pronuclear (PN) stage. The stimulation was minimal (4.7 ± 1.4 HMG ampoules). A total of seven clinical pregnancies was obtained (32.0% per transfer, 17.5% per retrieval), of which five are ongoing. Thus, a spontaneous cycle and the GnRH antagonist Cetrorelix in single dose administration could represent a first-choice IVF treatment with none of the complications and risks of current controlled ovarian hyperstimulation protocols, and an acceptable success rate. abstract_id: PUBMED:9565144 Ovarian stimulation with low-dose pure follicle-stimulating hormone in polycystic ovarian syndrome anovulatory patients: effect of long-term pretreatment with gonadotrophin-releasing hormone analogue.
A randomised clinical trial was performed to evaluate the effect of a 3-month gonadotrophin-releasing-hormone analogue (GnRH-a) in one cycle of ovulation induction with low-dose pure follicle-stimulating hormone (pFSH) in patients with polycystic ovarian syndrome (PCOS) anovulation. Twenty patients with chronic anovulation due to PCOS were randomised to ovulation induction with pFSH administered in a low-dose schedule with (10 patients) and without (10 patients) a 3-month pretreatment with GnRH-a. Ultrasound scan only monitoring of follicular growth, evaluation of plasmatic oestradiol at the day of triggering of ovulation with human chorionic gonadotrophin 5,000 IU and evaluation of plasmatic progesterone 8 days after were the main outcome measures. Ovulation occurred in 9 patients treated with pFSH and in 2 patients treated with GnRH-a plus pFSH. Five pregnancies in the pFSH group and no pregnancy in the GnRH-a group were obtained. Five cycles were stopped due to multifollicular growth in the GnRH-a group and 1 in the pFSH group. Pretreatment with a 3-month administration of a GnRH-a did not improve the ovulation rate and pregnancy rate in PCOS patient ovulation induction with low-dose pFSH. abstract_id: PUBMED:1587939 The synergistic effects of clomiphene citrate and human menopausal gonadotrophin in the folliculogenesis of stimulated cycles as assessed by the gonadotrophin-releasing hormone antagonist Nal-Glu. Clomiphene citrate (CC), alone or in combination with exogenous gonadotrophins, has been widely used in ovulation induction. CC promotes endogenous release of gonadotrophins, yet when used in combination with exogenous gonadotrophins, its contribution to folliculogenesis is difficult to assess. In order to determine the contribution of CC-induced endogenous gonadotrophin production to the overall ovarian stimulation in cycles treated with CC/human menopausal gonadotrophin (HMG), Nal-Glu, a gonadotrophin-releasing hormone (GnRH) antagonist was administered. Fertile women (n = 10) undergoing ovarian stimulation and oocyte aspiration for the sole purpose of gamete donation were studied. Five women received CC (100 mg daily for 5 days) in conjunction with pure follicle stimulating hormone (FSH) 150 IU daily. Five women received HMG alone. Nal-Glu (50 micrograms/kg/day) was administered intramuscularly to both groups when the leading follicles reached a mean diameter of 16 mm. Human chorionic gonadotrophin (HCG) 10,000 IU was given when the largest follicles reached a mean diameter of 20-22 mm. A significant fall in serum oestradiol levels was observed in women given CC/FSH (37.9 +/- 7.3%) within the first 24 h of Nal-Glu administration. Serum luteinizing hormone (LH) decreased greater than 20% within 24 h of Nal-Glu administration and remained low throughout the rest of the treatment. No decrease in oestradiol levels was noted in cycles receiving HMG alone. With supplemental FSH, falling oestradiol levels in CC/FSH cycles rebounded and continued to rise until the day after HCG administration. Despite a drop in oestradiol in CC/FSH cycles, the aspirated oocytes exhibited no untoward effects. The fertilization and cleavage rates were similar, and pregnancies occurred in both groups.(ABSTRACT TRUNCATED AT 250 WORDS) abstract_id: PUBMED:24012150 Recurrent IVF failure is associated with elevated progesterone on the day of hCG administration. 
Objective: During in vitro fertilization (IVF) treatment, elevated progesterone on the day of human chorionic gonadotrophin (hCG) administration has been reported to be associated with a reduced chance of live birth. It is not known, however, if the relationship is casual or causal. In the latter situation, one would expect the incidence of elevated progesterone on the day of hCG administration to increase with the number of IVF/embryo transfer (ET) failures. The aim of this study was to investigate if the frequency of elevated progesterone on the day of hCG administration is related to the number of IVF failures. Study Design: This retrospective, observational, cohort study included a consecutive series of 6673 IVF cycles. Subjects were categorized into one of three groups: Group I, no previous IVF/ET treatment; Group II, one previous IVF/ET treatment failure; or Group III, two or more previous IVF/ET treatment failures. The main outcome measure was the proportion of cycles with elevated progesterone (>6 nmol/l) on the day of hCG administration. Results: After adjusting for age, oestradiol level on the day of hCG administration and number of oocytes retrieved, the proportion of women with elevated progesterone on the day of hCG administration remained significantly different between the three groups: Group I, 16.8%; Group II, 31.7%; and Group III, 39.7% (p < 0.001). Conclusion: Elevated progesterone on the day of hCG administration is more likely in women with recurrent IVF failure. Women with two or more IVF failures are twice as likely to have elevated progesterone on the day of hCG administration as women undergoing their first IVF cycle. Answer: The estradiol level on the day of human chorionic gonadotrophin (hCG) administration is an important parameter in monitoring ovarian stimulation during IVF treatment. However, based on the abstracts provided, there is no direct statement that explicitly correlates estradiol levels on the day of hCG administration with pregnancy rates in patients treated with recombinant FSH and GnRH antagonist. One study did find that recombinant LH supplementation to recombinant FSH during induced ovarian stimulation in the GnRH-antagonist protocol resulted in higher serum estradiol concentrations on the day of hCG administration (PUBMED:17207326). This suggests that estradiol levels can be influenced by the specifics of the ovarian stimulation protocol. However, the study also noted that while there were higher estradiol concentrations and a higher number of mature oocytes, there was no statistically significant difference in clinically significant endpoints such as implantation and pregnancy rates. Another study compared the use of GnRH antagonist cetrorelix in gonadotrophin-stimulated cycles with or without the antagonist during intrauterine insemination (IUI) and found no deleterious effects on luteal progesterone concentration or on the duration of the luteal phase (PUBMED:11679500). This study did not specifically address the impact of estradiol levels on pregnancy rates. In summary, while estradiol levels are a key factor monitored during IVF treatment, the abstracts provided do not offer conclusive evidence on the direct impact of estradiol levels on the day of hCG administration on pregnancy rates in patients treated with recombinant FSH and GnRH antagonist. Additional research and controlled studies would be required to establish a definitive relationship between these variables.
Instruction: Is operative conversion necessary for patients diagnosed with dense adhesions during an elective laparoscopic colectomy? Abstracts: abstract_id: PUBMED:18757381 Is operative conversion necessary for patients diagnosed with dense adhesions during an elective laparoscopic colectomy? Unlabelled: Laparoscopic colectomy is often complicated by adhesions. Studies examining the morbidity of laparoscopic lysis of adhesions (LOA) combined with colectomy are sparse. Objectives: The goal of this study was to prospectively evaluate the results of using the harmonic scalpel to lyse adhesions in patients undergoing laparoscopic colectomy. Methods: Laparoscopic colectomy was performed in 83 patients between November 2003 and April 2007. A total of 20 patients underwent laparoscopic colectomy with LOA and 53 patients underwent laparoscopic colectomy alone. Patients were prospectively followed to determine operative time (OT), blood loss, operative conversion, length of stay (LOS), and 30-day morbidity. Results: Operative conversion was 2%, mean estimated blood loss (EBL) was 95±84 mL, and mean OT was 220±64 minutes. There were no anastomotic leaks or perioperative mortalities. There were 9 major complications (11%). Patients undergoing laparoscopic colectomy with LOA (n=20) compared with patients undergoing laparoscopic colectomy alone (n=63) had similar conversion rates (5% vs 2%), EBL (115±108 vs 88±74 mL), and major complications (15% vs 10%), but prolonged minor complications (25% vs 6%) and LOS (6.0±3.0 vs 4.6±1.5 days). Conclusions: Laparoscopic LOA combined with colectomy leads to similar conversion rates and major morbidity compared to laparoscopic colectomy alone. abstract_id: PUBMED:25773308 The impact of delaying elective resection of diverticulitis on laparoscopic conversion rate. Background: Guideline-concordant delay in elective laparoscopic colectomy for diverticulitis may result in repeated bouts of inflammation. We aimed to determine whether conversion rates from elective laparoscopic colectomy are higher after multiple episodes of diverticulitis. Methods: Prospective cohort study evaluating laparoscopic colectomy conversion rates for diverticulitis from 42 hospitals was conducted. Results: Between 2010 and 2013, 1,790 laparoscopic colectomies for diverticulitis (mean age 57.8 ± 13; 47% male) resulted in 295 (16.5%) conversions. Conversion occurred more frequently in nonelective operations (P < .001) and with fistula indications (P = .012). Conversion rates decreased with surgeon case volume (P = .028). Elective colectomy exclusively for episode-based indications (n = 784) had a conversion rate of 12.9%. Increasing episodes of diverticulitis were not associated with higher conversion rates, even among surgeons with similar experience levels. Conclusions: Conversion from laparoscopic colectomy for diverticulitis did not increase after multiple episodes of diverticulitis. Delaying elective resection appears to not prevent patients from the benefits of laparoscopy. abstract_id: PUBMED:26005319 Effectiveness of elective laparoscopic treatment for colonic diverticulitis. Background And Objectives: To analyze the short- and long-term outcomes of laparoscopic sigmoid colectomy for the elective treatment of diverticular disease. Methods: A consecutive unselected series of 94 patients undergoing elective laparoscopic sigmoid colectomy for diverticular disease from 2008 to 2012 was analyzed.
We collected patients-, surgery- and hospital stay-related data, as well as the short- and long-term outcomes. Operative steps, instrumentation, and postoperative cares were standardized. Comorbidity was assessed by Charlson comorbidity index. Complications were classified using the Clavien-Dindo classification system. The qualitative long-term assessment was carried out by subjecting patients to the validated gastrointestinal quality of life index questionnaire before and after surgery. Results: The mean age of our cohort was 61.3 ± 11.0 years with a Charlson comorbidity index of 1.2 ± 1.5. Mean operative time was 213.5 ± 60.8 minutes and estimated blood loss was 67.2 ± 94.3 mL. We had 3 cases (3.2%) of conversion to open laparotomy. The rates of postoperative complications were 35.1%, 6.3%, 2.1%, and 1.06%, respectively, for grades 1, 2, 3b, and 5 according to the Clavien-Dindo system. Length of hospital stay was 8.1 ± 1.9 days, and we have not recorded readmissions in patients discharged within 60 days after surgery. Median follow-up was of 9.6 ± 2.7 months. We observed no recurrence of diverticular disease, but there was evidence of 3 cases of incisional hernia (3.19%). The difference between preoperative and late gastrointestinal quality of life index score was statistically significant (97.1 ± 5.8 vs 129.6 ± 8.0). Conclusions: Elective laparoscopic treatment of colonic diverticular disease represents an effective option that produces adequate postoperative results and ensures a satisfactory functional outcome. abstract_id: PUBMED:27456018 Hand-Assisted Laparoscopic Versus Standard Laparoscopic Colectomy: Are Outcomes and Operative Time Different? Background: HAL colectomy is a technique perceived to provide the benefits of laparoscopic surgery while improving tactile feedback and operative time. Published data are largely limited to small, single-institution studies. Methods: The 2012-2013 National Surgical Quality Improvement Program Participant Data Use File was queried for patients undergoing elective SL or HAL colectomy. Patients underwent 1:1 propensity matching and had outcomes compared. An additional subgroup analysis was performed for patients undergoing segmental resections only. Results: 13,949 patients were identified, of whom 6084 (43.6 %) underwent HAL colectomy. Patients undergoing HAL versus SL colectomy had higher rates of postoperative ileus (8.7 vs. 6.3 %, p < 0.001), wound complication (8.8 vs. 6.8 %, p = 0.006), and 30-day readmission (7.5 vs. 6.0 %, p = 0.002), without any differences in operative time (156 vs. 157 min, p = 0.713). Amongst segmental colectomies, HAL remained associated with higher rates of wound complications (8.6 vs. 6.5 %, p = 0.016), postoperative ileus (8.9 vs. 6.3 %, p < 0.001), and 30-day readmission (7.1 vs. 5.9 %, p = 0.041) with no difference in operative time between HAL and SL (145 vs. 145 min, p = 0.334). Conclusions: Use of HAL colectomy is associated with increased risk of wound complications, postoperative ileus, and readmissions. Importantly, this technique is not associated with any decrease in operative time. abstract_id: PUBMED:28636471 Laparoscopic colectomy in a district hospital: the single surgeon can be safe. Background: Several outcome measures have been identified for colorectal surgery and published in the literature. This study sought to compare outcomes of high volume laparoscopic colectomy by a single surgeon in a district hospital with outcomes from tertiary referral centres.
Methods: This was a retrospective review of elective laparoscopic colectomy by a single laparoscopic general surgeon in a district hospital over a 51-month period using a prospectively maintained database. The key outcome measures studied were length of hospital stay, conversion to open, anastomotic leak, wound infection, re-admission and 30-day mortality. Results: 187 elective laparoscopic colectomies were performed at the Kent and Canterbury Hospital between July 2008 and October 2012. The median patient age was 69 years (range 22-90 years). Median length of hospital stay was 4 days (range 1-48 days). Anastomotic leak occurred in 4 (2.1%) patients. Seven (3.7%) patients underwent conversion to open surgery. Re-admission occurred in 4 (2.1%) patients for small bowel obstruction (1), wound infection (1), anastomotic leak (1) and colo-vaginal fistula (1). There was one post-operative death from severe chest infection (0.5%). These results are similar to those published by tertiary referral centres. Conclusions: This study of outcomes at a district hospital shows that the outcome reported from laparoscopic colorectal surgery in tertiary referral centres is reproducible at the district hospital level by a single surgeon with a high operative volume. abstract_id: PUBMED:25860588 Laparoscopic right colectomy in patients treated with previous gastrectomy. Purpose: Laparoscopic colorectal surgery is increasingly being performed in patients treated with previous abdominal surgery. This is a retrospective study designed to evaluate the feasibility of laparoscopic right colectomy in patients with a previous history of gastrectomy. Methods: Of 838 consecutive patients who underwent elective laparoscopic right colectomy, 23 had previously undergone gastrectomy (PG group) and 516 had no history of previous abdominal surgery (NS group). The short-term surgical outcomes were retrospectively investigated in the PG and NS groups. Results: The median patient age was 75 years in the PG group and 67 years in the NS group (p = 0.0026), and the median body mass index in both groups was 19.2 and 22.6 kg/m(2), respectively (p = 0.0006). The mean operative time, amount of blood loss and postoperative hospital stay were similar. One patient in the PG group and five patients in the NS group required conversion to laparotomy (p = 0.1307). Three patients in the PG group experienced postoperative complications, one each with an intraperitoneal abscess, wound infection and enterocolitis; however, none of these complications were directly attributable to adhesiolysis. The rates of intraoperative and postoperative complications were similar. Conclusions: Laparoscopic right colectomy is feasible in patients treated with previous gastrectomy. abstract_id: PUBMED:24819102 Longer operative time: deterioration of clinical outcomes of laparoscopic colectomy versus open colectomy. Background: As laparoscopic surgery is applied to colorectal surgery procedures, it becomes imperative to delineate whether there is an operative duration where benefits diminish. Objective: The purpose of this work was to determine whether benefits of a laparoscopic right colectomy compared with an open right colectomy are diminished by prolonged operative times. Design: We performed a retrospective analysis comparing outcomes of patients undergoing laparoscopic right and open right colectomy for colon cancer with operative duration of less than and greater than 3 hours. 
Settings: This study was based on data in the American College of Surgeons National Surgical Quality Improvement Program database. Patients: We queried the database for patients with laparoscopic and open right colectomy with a diagnosis of colorectal cancer between 2005 and 2010. Main Outcome Measures: Patients were stratified by operative technique and duration. Forward multivariable logistic regression analysis was performed for mortality, cerebrovascular/cardiovascular complications, and infectious complications. Predictors of operative time >3 hours in the laparoscopic cohort were identified by logistic regression. Results: Of 4273 patients, operative duration was >3 hours for 18.4% of patients with a laparoscopic right colectomy and 11.3% with an open right colectomy. There was no benefit of the laparoscopic right colectomy with an operative duration >3 hours over open right colectomy with respect to mortality and cardiopulmonary and cerebrovascular complications. An operative duration >3 hours was an independent risk factor for infectious complications in patients undergoing a laparoscopic right colectomy. Limitations: This was a retrospective study and not an intention-to-treat analysis. Conclusions: At an operative duration of ≥3 hours, laparoscopic right colectomy has higher infectious complications than open right colectomy. Reduced mortality and less cardiopulmonary and cerebrovascular complications seen in the laparoscopic cohort with shorter operative duration were lost with an operative duration >3 hours. In patients at risk for prolonged laparoscopic right colectomy, early conversion to an open technique may be warranted. abstract_id: PUBMED:27109780 Comparison of 30-Day Postoperative Outcomes after Laparoscopic vs Robotic Colectomy. Background: The robotic platform has been used increasingly to perform colorectal surgery. The benefits of robotic colectomy when compared with laparoscopic colectomy have not been definitively established. Study Design: A retrospective review of the NSQIP database was performed on patients undergoing elective laparoscopic or robotic colectomy in 2013. Demographic characteristics, length of stay, operative time, and 30-day postoperative complications were compared between the 2 groups. Results: Of the 17,774 colectomies performed during the study period, 11,267 (63.4%) were performed laparoscopically and 653 (3.7%) were performed robotically. In comparison with laparoscopic colectomy, robotic colectomy was associated with increased operative time (233 vs 180 minutes; p < 0.01) and decreased length of stay (5.04 vs 6.06 days; p < 0.01). There was no significant difference with respect to mortality (0.2% vs 0.4%; p < 0.312), anastomotic leak (3.4% vs 3.1%; p = 0.715), reoperation (4.9% vs 4.0%; p = 0.27), conversion (10.3% vs 12.2%; p = 0.13), or readmission (9.3% vs 8.7%; p = 0.593) rates. The differences in length of stay and operative time persisted in the right colectomy and left colectomy/sigmoid resection subgroup analysis with no difference in conversion rates. Conclusions: In this head-to-head comparison of laparoscopic colectomy and robotic colectomy, the majority of postoperative outcomes were equivalent, except for an increase in operative time and shorter length of stay in the robotic group. Robotic colectomy appears to be a safe option for minimally invasive colectomy, but additional studies are needed to elucidate whether it is cost-effective when compared with laparoscopic colectomy.
abstract_id: PUBMED:27655383 Laparoscopic versus robotic colectomy: a national surgical quality improvement project analysis. Introduction: Robotic colorectal surgery is being increasingly adopted. Our objective was to compare early postoperative outcomes between robotic and laparoscopic colectomy in a nationally representative sample. Methods: The American College of Surgeons National Surgical Quality Improvement Project Colectomy Targeted Dataset from 2012 to 2014 was used for this study. Adult patients undergoing elective colectomy with an anastomosis were included. Patients were stratified based on location of colorectal resection (low anterior resection (LAR), left-sided resection, or right-sided resection). Bivariate data analysis was performed, and logistic regression modeling was conducted to calculate risk-adjusted 30-day outcomes. Results: There were a total of 25,998 laparoscopic colectomies (30 % LAR's, 45 % left-sided, and 25 % right-sided) and 1484 robotic colectomies (54 % LAR's, 28 % left-sided, and 18 % right-sided). The risk-adjusted overall morbidity, serious morbidity, and mortality were similar between laparoscopic and robotic approaches in all anastomotic groups. Patients undergoing robotic LAR had a lower conversion rate (OR 0.47, 95 % CI 1.20-1.76) and postoperative sepsis rate (OR 0.49, 95 % CI 0.29-0.85) but a higher rate of diverting ostomies (OR 1.45, 95 % CI 1.20-1.76). Robotic right-sided colectomies had significantly lower conversion rates (OR 0.58, 95 % CI 0.34-0.96). Robotic colectomy in all groups was associated with a longer operative time (by 40 min) and a decreased length of stay (by 0.5 days). Conclusions: In a nationally representative sample comparing laparoscopic and robotic colectomies, the overall morbidity, serious morbidity, and mortality between groups are similar while length of stay was shorter by 0.5 days in the robotic colectomy group. Robotic LAR was associated with lower conversion rates and lower septic complications. However, robotic LAR is also associated with a significantly higher rate of diverting ostomy. The reason for this relationship is unclear. Surgeon factors, patient factors, and technical factors should be considered in future studies. abstract_id: PUBMED:28070033 Analysis of indication for laparoscopic right colectomy and conversion risks. Laparoscopic surgery developed continuously over the past years becoming the gold standard for some surgical interventions. Laparoscopic colorectal surgery is well established as a safe and feasible procedure to treat benign and malignant pathologies. In this paper we studied in deep the role of laparoscopic right colectomy analysing the indications to this surgical procedure and the factors related to the conversion from laparoscopy to open surgery. We described the different surgical techniques of laparoscopic right colectomy comparing extra to intracorporeal anastomosis and we pointed out the different ways to access to the abdomen (multiport VS single incision). The indications for laparoscopic right colectomy are benign (inflammatory bowel disease and rare right colonic diverticulitis) and malignant diseases (right colon cancer and appendiceal neuroendocrine neoplasm): we described the good outcomes of laparoscopic right colectomy in all these illnesses. 
Laparoscopic conversion rates in right colectomy are reported as 12-16%; we described the different types of risk factors related to open conversion: patient-related, disease-related and surgeon-related factors, procedural factors and intraoperative complications. We conclude that laparoscopic right colectomy is considered superior to open surgery in the short-term outcomes without difference in long-term outcomes. Key Words: Conversion risks, Indication to treatment, Laparoscopy, Post-operative pain, Right colectomy. Answer: Operative conversion is not necessarily required for patients diagnosed with dense adhesions during an elective laparoscopic colectomy. According to the study by PUBMED:18757381, laparoscopic lysis of adhesions (LOA) combined with colectomy leads to similar conversion rates and major morbidity compared to laparoscopic colectomy alone. The study found that patients undergoing laparoscopic colectomy with LOA had similar conversion rates (5% vs 2%), estimated blood loss, and major complications when compared with patients undergoing laparoscopic colectomy alone. However, there was a noted increase in minor complications and length of stay for the LOA group. This suggests that while dense adhesions can complicate the procedure, they do not necessarily mandate a conversion to open surgery, and laparoscopic management of adhesions is feasible with comparable outcomes to cases without adhesions.
Instruction: Are long-term bisphosphonate users a reality? Abstracts: abstract_id: PUBMED:25166043 Long-term risks of bisphosphonate therapy. The objective of this study was to summarize long-term risks associated with bisphosphonate therapy. Search of relevant medical publications for data from clinical trials, trial extensions, observational studies and post-marketing reports. Trial extensions and modifications did not reveal significant long-term safety issues. Observational data suggest at least as many benefits as risks. Post-marketing reports of musculoskeletal pain, osteonecrosis of the jaw and atypical femur fractures have been widely circulated in the lay press. Most focus on long-term risks has been on osteonecrosis of the jaw and atypical femur fractures which occur in patients who have not received bisphosphonate therapy but may be more frequent (though still uncommon) in patients who have been on treatment for 5 years or longer. Lower-risk patients may be able to stop treatment after 3-5 years for a "drug holiday," which mitigates these long-term risks; for higher risk patients, therapy through 6-10 years appears to be advisable and offers more benefits than risks. abstract_id: PUBMED:22572963 Are long-term bisphosphonate users a reality? Dose years for current bisphosphonate users assessed using the Danish National Prescription Database. Unlabelled: The prevalence of long-term bisphosphonate use may be low due to low refill compliance and gaps in treatment. An analysis of the prescription history of 58,674 bisphosphonate users in Denmark found that only 2.8 % had received ten dose years of treatment or above. Introduction: This study aims to describe the demographics of present bisphosphonate (BP) users, to determine the prevalence of long-term BP use, and to establish if long-term use (a 10-year history of osteoporosis treatment) translated to ten dose years of bisphosphonate prescriptions filled, given the propensity for treatment gaps and low refill compliance with bisphosphonates. Methods: The study population was all persons aged 35 and above, who had filled at least one prescription for an oral bisphosphonate in the year 2008. Past use of osteoporosis medications (bisphosphonates, raloxifene, strontium ranelate, or PTH analogs) for the period January 1, 1995 to December 31, 2007 was retrieved from the Danish National Prescription Database for descriptive analysis. Results: Of the 58,674 BP users, 46 % were above the age of 75, and 13 % were men. Though 5.9 % had at least a 10-year history of treatment, only half (2.8 %) had received more than ten dose years of a BP. For any osteoporosis drug, 3.0 % had received ten dose years or more, while 23.2 % had received between 5 and 10 years of treatment. Conclusion: Long-term users with ten dose years or more of a BP are rare due to periods of low compliance and gaps, with a discrepancy between the length of treatment and doses taken. The study also highlights the great number of patients who have used BP for more than five dose years and should be advised on length of treatment, a decision process that will be difficult due to the paucity of long-term safety and efficacy data. abstract_id: PUBMED:36356548 Effects of virtual reality-based interventions on the physical and mental health of older residents in long-term care facilities: A systematic review. Background: Older residents in long-term care facilities often experience physical and mental health problems along with vulnerability to psychological and social stressors.
Due to the shortage of human resources, the application of virtual reality technology may be a new way to promote healthy aging for older residents in long-term care facilities. Objectives: To identify the effects of virtual reality-based interventions for improving physical and mental health outcomes among older residents in long-term care facilities. Design: Systematic review. Methods: A systematic review was reported in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses 2020 Statement. Four databases (the Cochrane Library, PubMed, EMBASE, and Web of Science) were searched from their inception until May 2022 for original articles. Randomized controlled trials (RCTs) and quasi-experimental trials reporting the effects of virtual reality-based interventions on physical and mental health outcomes were included. Two independent reviewers extracted trial data and assessed the risk of bias using the risk of bias tool recommended by the Cochrane Back Review Group and the Methodological Index for Non-Randomized Studies. Results: Of the 8511 articles that were screened, 30 articles including 1057 participants met the inclusion criteria. Many studies have investigated multiple outcomes. Eighteen of the twenty-five studies that assessed physical health-related outcomes (n = 18/25, 72%) reported positive intervention effects, and fourteen (n = 14/25, 56%) reported negative postintervention effects. Eight of the sixteen studies that assessed mental health-related outcomes (n = 8/16, 50%) reported positive intervention effects, while twelve (n = 12/16, 75%) reported negative intervention effects. Meta-analysis was not conducted due to the heterogeneity of the study designs and outcome measures. Conclusions: This study demonstrated that virtual reality-based interventions could be a safe and effective approach to improve the physical outcomes of older residents in long-term care facilities, but this evidence lacks certainty. The effectiveness of virtual reality-based interventions for mental health outcomes is uncertain. Virtual reality-based interventions have the potential to provide individualized, error-free self-training for older residents in long-term care facilities, but the usability for these residents requires further study. Future studies should use RCT designs and aim to include larger sample sizes and have longer intervention durations. Registration Number: PROSPERO registration ID: CRD42022331917. Tweetable Abstract: Virtual reality-based interventions are possibly effective for improving physical health of older residents in long-term care facilities. abstract_id: PUBMED:28555811 Long-Term Oral Bisphosphonate Therapy and Fractures in Older Women: The Women's Health Initiative. Objectives: To examine the association between long-term bisphosphonate use and fracture in older women at high risk of fracture. Design: Retrospective cohort. Setting: Women's Health Initiative. Participants: Older women who reported at least 2 years of bisphosphonate use in 2008-09 (N = 5,120). Measurements: Exposure data were from a current medications inventory. Outcomes (hip, clinical vertebral, wrist or forearm, any clinical fracture) were ascertained annually. Using multivariate Cox proportional hazards models, the association between duration of bisphosphonate use (3-5, 6-9, 10-13 years) and fracture was estimated, using 2 years as the referent group. Results: On average participants were 80 years old and were followed for 3.7 ± 1.2 years. 
There were 127 hip, 159 wrist or forearm, 235 clinical vertebral, and 1,313 clinical fractures. In multivariate-adjusted analysis, 10 to 13 years of bisphosphonate use was associated with higher risk of any clinical fracture than 2 years of use (hazard ratio (HR) = 1.29, 95% confidence interval (CI) = 1.07-1.57). This association persisted in analyses limited to women with a prior fracture (HR = 1.30, 95% CI = 1.01-1.67) and women with no history of cancer (HR = 1.36, 95% CI = 1.10-1.68). The association of 10 to 13 years of use, compared with 2 years of use, was not statistically significant for hip (HR = 1.66, 95% CI = 0.81-3.40), clinical vertebral (HR = 1.65, 95% CI = 0.99-2.76), or wrist fracture (HR = 1.16, 95% CI = 0.67-2.00). Conclusion: In older women at high risk of fracture, 10 to 13 years of bisphosphonate use was associated with higher risk of any clinical fracture than 2 years of use. These results add to concerns about the benefit of very long-term bisphosphonate use. abstract_id: PUBMED:23558826 The role of young users in determining long-term care expenditure in Norway. Aims: In Norway, it is the responsibility of the country's 429 municipalities to provide long term care (LTC) services to their residents. Recent years have seen a sharp rise in the number of LTC users under the age of 65. This article aims to explore the effect of this rise on LTC expenditure. Methods: Panel data models are used on data from municipalities from 1986 to 2011. An instrumental variable approach is also utilized to account for possible endogeneity related to the number of young users. Results: The number of young users appears to have a strong effect on LTC expenditure. There is also evidence of municipalities exercising discretion in defining eligibility criteria for young users in order to limit expenditure. Conclusions: The rise in the number of young LTC users presents a long-term challenge to the sustainability of LTC financing. The current budgeting system appears to compensate municipalities for expenditure on young LTC users. abstract_id: PUBMED:35414201 Long-term safety and efficacy of bisphosphonate therapy in advanced lung cancer with bone metastasis. Aim: This retrospective, observational study evaluated the long-term (>12 months) safety and effectiveness of bisphosphonate. Methods: Data collected for 359 patients included quantity and proportion of adverse events (AEs) and skeletal-related events (SREs), and times to first AE and first SRE. Results: Patients in the ≤24-month group experienced significantly fewer AEs compared with the >24-month treatment group (p = 0.008), and treatment for >24 months was a potential risk factor for AEs (p = 0.05). Neither the proportion nor the risk of SRE was significantly associated with therapy duration (p = 0.525 and 0.084, respectively). Conclusion: Bisphosphonate treatment beyond 2 years may increase the risk of AEs, but may prolong SRE-free survival early after 24 months, compared with medication administered for ≤24 months. abstract_id: PUBMED:25708052 The skeletal muscle cross sectional area in long-term bisphosphonate users is smaller than that of bone mineral density-matched controls with increased serum pentosidine concentrations. Bisphosphonates are effective in increasing bone mineral density (BMD), but fragility fractures can still occur despite bisphosphonate treatment.
The purpose of this study was to determine if long-term bisphosphonate users have characteristic findings in the musculoskeletal system, which could put them at risk of developing typical or atypical femoral fractures. We recruited 40 female patients who had taken bisphosphonates for more than 3 years. The control group included 60 volunteers who were matched by age, body mass index, and dual-energy X-ray absorptiometry-derived BMDs. We measured the skeletal muscle cross sectional area around the proximal thigh and buckling ratio of the femoral neck using quantitative computed tomography (qCT) and several biochemical markers of bone metabolism. Those parameters were compared between the groups. While no significant differences of buckling ratio derived from qCT were detected, the skeletal muscle cross sectional area was significantly smaller in the long-term bisphosphonate users than in the controls. Furthermore, the serum pentosidine level was significantly higher in the bisphosphonate users than in the controls. To determine if those differences were attributable to bisphosphonate treatment, we further compared those parameters between before and after 3 years of bisphosphonate treatment in 32 patients. After 3 years of bisphosphonate treatment, the BMD of the femoral neck and serum pentosidine level increased but not the skeletal muscle cross sectional area. In the present study, the skeletal muscle mass did not match the bone mass in long-term bisphosphonate users, thus suggesting that increases in BMD by bisphosphonates are unlikely to have secondary positive effects on the surrounding skeletal muscles. Also, serum pentosidine levels were greater in the long-term bisphosphonate users. Further study is necessary to test if such patients are prone to develop typical or atypical femoral fractures. abstract_id: PUBMED:29955629 Denosumab significantly improves lumbar spine bone mineral density more in treatment-naïve than in long-term bisphosphonate-treated patients. The purpose of our study was to compare the skeletal responses to 3-year denosumab treatment in bisphosphonate (BP)-naïve and long-term BP-treated patients with postmenopausal osteoporosis. Female patients who were BP treatment-naïve (treatment-naïve group: 25 cases) or who received long-term BPs (BP pre-treated group: 24 cases) were compared for serum bone alkaline phosphatase (BAP), tartrate-resistant acid phosphatase (TRACP)-5b, and urinary N-terminal telopeptide of type I collagen (NTX) at baseline and at 4, 8, 12, 15, 18, 21, 24, 27, 30, 33, and 36 months of denosumab therapy. Lumbar 1-4 (L) spine bone mineral density (BMD), total hip (H)-BMD, and femoral neck (FN)-BMD values were measured at baseline and at 4, 8, 12, 18, 24, 30, and 36 months. The percentage changes of bone turnover markers were significantly decreased throughout the study period by a larger margin in the treatment-naïve group than in the BP pre-treated group. L-BMD, H-BMD, and FN-BMD were all significantly increased in the treatment-naïve and BP pre-treated groups at 36 months (12.9% and 7.5%, 5.9% and 6.0%, and 7.6% and 4.5%, respectively), compared with pre-treatment levels. There were significant differences for L-BMD at 12, 24, 30, and 36 months between the groups. Our findings suggest that the BMD response to denosumab, especially that of L-BMD, was diminished following BP therapy relative to treatment-naïve patients, thus providing evidence supporting the use of denosumab as a first-line therapy. 
abstract_id: PUBMED:34199801 Virtual Reality Therapy for Depression and Mood in Long-Term Care Facilities. Virtual reality (VR) describes a family of technologies which immerse users in sensorily-stimulating virtual environments. Such technologies have increasingly found applications in the treatment of neurological and mental health disorders. Depression, anxiety, and other mood abnormalities are of concern in the growing older population, especially those who reside in long-term care facilities (LTCFs). The transition from the familiar home environment to the foreign LTCF introduces a number of stressors that can precipitate depression. However, recent studies reveal that VR therapy (VRT) can promote positive emotionality and improve cognitive abilities in older people, both at home and in LTCFs. VR thus holds potential in allowing older individuals to gradually adapt to their new environments, thereby mitigating the detrimental effects of place attachment and social exclusion. Nevertheless, while the current psychological literature is promising, the implementation of VR in LTCFs faces many challenges. LTCF residents must gain trust in VR technologies, care providers require training to maximize the positive effects of VRT, and decision makers must evaluate both the opportunities and obstacles in adopting VR. In this review article, we concisely discuss the implications of depression related to place attachment in LTCFs, and explore the potential therapeutic applications of VR. abstract_id: PUBMED:24115250 Assessment of bone microarchitecture in postmenopausal women on long-term bisphosphonate therapy with atypical fractures of the femur. Reports of atypical femoral fractures (AFFs) in patients receiving long-term bisphosphonate therapy have raised concerns regarding the genesis of this rare event. Using high-resolution peripheral quantitative computed tomography (HR-pQCT), we conducted a study to evaluate bone microarchitecture in patients who had suffered an AFF during long-term bisphosphonate treatment. The aim of our study was to evaluate if bone microarchitecture assessment could help explain the pathophysiology of these fractures. We compared bone volumetric density and microarchitectural parameters measured by HR-pQCT in the radius and tibia in 20 patients with AFFs with 35 postmenopausal women who had also received long-term bisphosphonate treatment but had not experienced AFFs, and with 54 treatment-naive postmenopausal women. Control groups were similar in age, body mass index (BMI), and bone mineral density (BMD). Mean age of the 20 patients with AFFs was 71 years, mean lumbar spine T-score was -2.2, and mean femoral neck T-score was -2. Mean time on bisphosphonate treatment was 10.9 years (range, 5-20 years). None of the patients had other conditions associated with AFFs such as rheumatoid arthritis, diabetes or glucocorticoid use. There were no statistically significant differences in any of the parameters measured by HR-pQCT between postmenopausal women with or without treatment history and with or without history of atypical fractures. We could not find any distinctive microarchitecture features in the peripheral skeleton of women who had suffered an atypical fracture of the femur while receiving bisphosphonate treatment. This suggests that risk of developing an atypical fracture is not related to bone microarchitecture deterioration.
Our results indicate that there may be other individual factors predisposing to atypical fractures in patients treated with bisphosphonates, and that those are independent of bone microarchitecture. In the future, identification of those factors could help prevent these rare events and improve understanding of their complex physiopathology. Answer: Yes, long-term bisphosphonate users are a reality, although the prevalence of long-term use may be low. A study using the Danish National Prescription Database found that only 2.8% of bisphosphonate users had received ten dose years of treatment or above, indicating that long-term users with ten dose years or more of a bisphosphonate are rare. This low prevalence is attributed to periods of low compliance and gaps in treatment, resulting in a discrepancy between the length of treatment and the actual doses taken (PUBMED:22572963). Despite the low prevalence, there are indeed patients who use bisphosphonates for extended periods, and studies have been conducted to assess the long-term risks and benefits associated with such use (PUBMED:25166043).
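To make the notion of "dose years" concrete, the sketch below shows one way such a measure could be derived from prescription-fill records. It is a minimal illustration in Python with invented data and column names, not the actual Danish National Prescription Database schema or the published analysis code; the point is simply that a long treatment history with refill gaps can translate into far fewer dose years than calendar years.

```python
import pandas as pd

# Hypothetical prescription claims: one row per filled prescription.
# Column names are illustrative, not the registry's real schema.
claims = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 2],
    "fill_date": pd.to_datetime([
        "2001-03-01", "2001-06-01", "2002-01-15",
        "2005-05-10", "2005-08-10",
    ]),
    "days_supplied": [90, 90, 90, 90, 90],  # days of drug dispensed per fill
})

# Dose years: total days of drug supplied divided by 365.25, ignoring gaps.
dose_years = (
    claims.groupby("patient_id")["days_supplied"].sum() / 365.25
).rename("dose_years")

# Treatment history: span from first to last fill, in years.
history_years = (
    claims.groupby("patient_id")["fill_date"]
    .agg(lambda d: (d.max() - d.min()).days / 365.25)
    .rename("history_years")
)

summary = pd.concat([dose_years, history_years], axis=1)
print(summary)
# Patient 1 has roughly a one-year "history" but under 0.8 dose years,
# which is the kind of discrepancy the Danish analysis highlights.
```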
Instruction: Comparison of resource utilization in the treatment of open-angle glaucoma between two cities in Finland: is more better? Abstracts: abstract_id: PUBMED:23621767 Comparison of resource utilization in the treatment of open-angle glaucoma between two cities in Finland: is more better? Introduction: Glaucoma is a progressive optic neuropathy associated with neural rim loss of the optic disc and the retinal nerve fibre layer, typically causing visual field (VF) deterioration. Generally, glaucomatous lesions in the eye and in the visual field progress slowly over the years. In population-based cross-sectional studies, the percentage of unilateral or bilateral visual impairment varied between 3-12%. In screening studies, 0.03-2.4% of patients have been found to suffer visual impairment. Glaucoma has previously been associated with substantial healthcare costs and resource consumption attributable to the treatment of the disease. The disease also causes reduction in health-related quality of life (HRQoL) in patients with glaucoma. Objective And Methods: This study compares patients with diagnosed open-angle glaucoma from two geographically different regions in Finland. A total of 168 patients were examined, 85 subjects from an area with higher per patient treatment costs (Oulu) and 83 patients from a region with lower per patient treatment costs (Turku). All patients had a history of continuous glaucoma medication use for a period of 11 years. For each patient, the total direct costs from glaucoma treatment were calculated and the total amount of resource consumption was determined from registries and patient records. Each patient underwent a clinical examination with visual field assessment and fundus photography. These data were used to determine the current stage of disease for each patient. The health-related quality of life questionnaire (15D) was used in determining each patient's subjective HRQoL score. Results: When applying the current diagnostic criteria for open-angle glaucoma, a total of 40% of patients did not display any structural or functional damage suggesting glaucoma after 11 years of continuous medical treatment and follow-up. Patients with higher glaucoma stage (worse disease) were found to have statistically higher treatment costs compared with those at lower disease stages. Resource consumption was also greater among patients at a higher glaucoma stage. Patients in the Oulu district consumed more resources, and glaucoma treatment was more expensive than in the Turku area. The total treatment cost in Oulu and Turku was 6010 € and 4452 €, respectively, for the whole 11-year period. There was no statistically significant difference in quality-of-life scores between the two areas. No difference was noted between the higher-spending and lower-spending areas in this respect. However, when the population was analysed as a whole, patients with higher glaucoma stage were found to have lower vision-based 15D scores compared with those at lower disease stages. This observation was also made in both districts independently. Conclusions: The major cost source in open-angle glaucoma treatment is medication, accounting for up to 74% of annual costs. In addition, it seems that higher resource consumption and higher treatment costs do not increase the patients' HRQoL as assessed by the 15D instrument. abstract_id: PUBMED:18631334 Trabeculectomy with an active postoperative regimen: results and resource utilization.
Purpose: To evaluate intraocular pressure (IOP) and resource utilization after trabeculectomy, using an active postoperative regimen, in a Swedish population. Methods: A retrospective analysis was performed on the patient charts of all patients who underwent trabeculectomy in a Swedish university hospital during 1 year (November 2000-December 2001). Trabeculectomy was performed in 34 eyes and trabeculectomy in combination with phacoemulsification in 10 eyes. Intraocular pressure, visual acuity, complications and numbers of injections of 5-fluorouracil, suture removal, needling procedures and visits to ophthalmologists were recorded for 2 years. Results: Mean IOP before surgery was 30.4 mmHg (standard deviation [SD] 9.5) in eyes with capsular glaucoma and 28.7 mmHg (SD 9.3) in eyes with primary open-angle glaucoma. Mean IOP after 2 years was 15.3 mmHg (SD 3.9) in all eyes. Intraocular pressure of < 18 mmHg was achieved in 65% of the eyes after 2 years, as was IOP ≤ 13 mmHg in 37%. Flap or suture manipulation was performed in 41 of 44 eyes. Needling procedures were carried out a mean of 2.3 times in 31 eyes. The mean number of visits to an ophthalmologist was 14.1 during the first postoperative year and 4.4 during the second. Conclusions: The results were encouraging, with few complications and modest resource utilization. abstract_id: PUBMED:16445443 A multicentre, retrospective study of resource utilization and costs associated with glaucoma management in France and Sweden. Purpose: To assess resource utilization and costs associated with glaucoma management in France and Sweden. Methods: A total of 267 patient records (121 in France, 146 in Sweden) with diagnoses of primary open-angle glaucoma (POAG) and ocular hypertension (OH), treated medically, were reviewed for a 2-year period (beginning during 1997-99) for relevant clinical and resource utilization data. Economic data were applied to estimate treatment costs. Results: The annual cost of treating glaucoma was estimated at SEK 5305 (531 euro)/patient in Sweden and 390 euro/patient in France. In both countries, medication costs comprised about half of the total costs. Surgical procedures and hospitalizations represented greater proportions of total cost in France (7.0% and 9.6%, respectively) than in Sweden (3.7% and 0.6%, respectively). Conclusion: Medication costs represent a high proportion of total treatment costs. These findings highlight the relative importance of medical therapy and of assessing the cost-effectiveness of medications in glaucoma. abstract_id: PUBMED:25642815 Incremental Healthcare Service Utilization for Open-angle Glaucoma: A Population-Based Study. Purpose: To investigate differences in the utilization of healthcare services between subjects with open-angle glaucoma (OAG) and comparison subjects without OAG using Taiwan's National Health Insurance population-based database. Patients And Methods: The study comprised 2204 subjects with OAG and 2204 sex-matched and age-matched subjects without OAG. We individually followed each subject for a 1-year period to evaluate their healthcare resource utilization. Outcome variables of the healthcare resource utilization were as follows: numbers of outpatient visits and inpatient days and the mean costs of outpatient and inpatient treatment. In addition, we divided healthcare resource utilization into ophthalmologic and nonophthalmologic services.
Results: As for the utilization of ophthalmologic services, OAG subjects had significantly more outpatient visits (7.4 vs. 1.3, P<0.001) and significantly higher outpatient costs (US$272 vs. US$39, P<0.001) than comparison subjects. For nonophthalmologic services, OAG subjects also had significantly more outpatient visits (29.4 vs. 21.8, P<0.001) and significantly higher outpatient costs (US$1263 vs. US$847, P<0.001) than comparison subjects. Furthermore, OAG subjects incurred significantly higher inpatient costs compared with comparison subjects (US$434 vs. US$234, P<0.001). For all healthcare services, OAG subjects had significantly more outpatient visits (36.8 vs. 23.1, P<0.001) and significantly higher outpatient (US$1535 vs. US$887, P<0.001) and total (US$2245 vs. US$1122, P<0.001) costs than comparison subjects. In other words, the total cost was about 2-fold greater for OAG subjects than comparison subjects. Conclusions: We concluded that subjects with OAG had significantly higher utilization of all healthcare services than comparison subjects. abstract_id: PUBMED:36003802 Clinical Practice Management of Primary Open-Angle Glaucoma in the United States: An Analysis of Real-World Evidence. Purpose: To investigate clinical management of primary open-angle glaucoma (POAG) in the United States using real-world evidence and to examine healthcare resource utilization (HCRU), medication adherence/persistence, and procedure use. Design: A cross-sectional, retrospective analysis of Optum's de-identified Market Clarity Dataset (July 1, 2013-December 31, 2019). Patients And Methods: Patients ≥18 years with POAG diagnosis and continuous enrollment for 1-year pre- and post-index were eligible and categorized into four mutually exclusive cohorts: CH1, treated with antiglaucoma medication(s) only; CH2, underwent glaucoma procedure(s) only; CH3, treated with antiglaucoma medication(s) and underwent procedure(s); and CH4, received no treatment for POAG. Adherence and persistence with antiglaucoma medications, and disease-specific HCRU were analyzed. Pairwise two-sample comparisons and multivariate regressions were conducted. Results: A total of 232,572 eligible patients were examined (CH1=60,895; CH2=4330; CH3=6027; CH4=161,320). Prostaglandin analogs were the most prescribed antiglaucoma medications (CH1: 69.7%; CH3: 62.7%), of which latanoprost was most common (CH1: 51.3%; CH3: 46.1%). Disease-specific office visits occurred in 26.3%, 78.2%, 75.0%, and 23.8%, and surgical services visits occurred in 3.8%, 36.3%, 42.5%, and 3.3%, in CH1-CH4, respectively. Adherence was higher (medication possession ratio: 47.1% vs 39.4%; P<0.0001), and more patients remained persistent across the 1-year post-index period, in CH1 vs CH3 (25.4% vs 16.1%; P<0.0001). Positive predictors of medication persistence included being female, ≥55 years, and history of dyslipidemia or thyroid disease (all P≤0.0003). Conclusion: Overall, 70% of POAG patients might not have received antiglaucoma treatment. Since POAG is a slowly progressive blinding disease, the lack of antiglaucoma treatment and suboptimal adherence/persistence with medications are of major concern. Targeted screening and educational approaches are needed to improve POAG management.
Purpose: To characterize differences in resource utilization and cost of managing enrollees with exfoliation glaucoma (XFG) compared to primary open-angle glaucoma (POAG). Design: Retrospective utilization and cost comparison using Medicare claims data. Methods: We identified Medicare beneficiaries with XFG or POAG and ≥5 years of continuous enrollment from January 2008 to December 2014. We distinguished newly diagnosed cases from those with preexisting disease. We compared ophthalmic resource utilization and costs over 2 years of follow-up for persons with newly diagnosed and preexisting XFG vs those with POAG. Main outcome measures were number of clinic visits, diagnostic procedures, medication fills, laser and incisional surgery, and mean eye care costs per beneficiary. Results: Among 192 eligible enrollees (median age 77.6 years) with newly diagnosed XFG and 7339 enrollees (median age 77.3 years) with newly diagnosed POAG, those with XFG had more office visits (mean, 9.1 vs 7.9; P = .001), cataract surgery (34.9% vs 19.0%; P < .0001), and glaucoma surgery (28.7% vs 19.7%, P = .002). They also experienced 27% higher mean total eye care costs ($3260 vs $2562, P = .0001) over 2 years of follow-up. Among 2745 enrollees (median age 80.5 years) with preexisting XFG and 89,036 persons (median age 79.5 years) with preexisting POAG, persons with XFG had more office visits (mean 9.3 vs 7.3; P < .0001), perimetry (85.3% vs 79.8%; P < .0001), cataract surgery (23.4% vs 12.3%; P < .0001), laser trabeculoplasty (18.6% vs 9.6%; P < .0001), and trabeculectomy (8.1% vs 1.8%; P < .0001) and experienced 37% higher total mean eye care costs ($3764 vs $2739; P < .0001). Conclusions: Healthcare resource utilization and costs are substantially higher for managing patients with XFG compared to POAG. abstract_id: PUBMED:31485731 Trabeculectomy offers better intraocular pressure reduction in intrapatient comparison to transscleral cyclophotocoagulation. Purpose: Both trabeculectomy (TE) and transscleral endodiode laser cyclophotocoagulation (CPC) are common approaches in glaucoma surgery. The purpose of this study was to perform an intraindividual comparison of these procedures carried out by the same surgeon in the same patient on the same day. Methods: An observational monocentric retrospective cohort study was conducted. Patients with a bilateral refractory open-angle glaucoma who underwent trabeculectomy in one eye and transscleral endodiode laser cyclophotocoagulation in the fellow eye simultaneously were included and followed up over the course of 1 year. Results: Eighty-two eyes of 41 patients were included. Seventeen patients (41.5%) were men and 24 (58.5%) women. The mean age was 68.7 ± 9.5 years. The diagnosis comprised 33 (80.5%) patients with a primary open-angle glaucoma, five (12.2%) patients with pseudoexfoliation glaucoma, and three (7.3%) patients with pigment dispersion glaucoma. A reduction in intraocular pressure (IOP) was seen both after TE (from 26.2 ± 13.2 to 10.6 ± 4.1 mmHg, 52 weeks post-treatment) and after CPC (from 24.2 ± 9.9 to 15.0 ± 5.4 mmHg, 52 weeks post-treatment). In comparison to each other, TE was significantly more effective in lowering the IOP (10.6 ± 4.1 vs. 13.4 ± 5.0; p = 0.0030, 52 weeks post-treatment) and required fewer antiglaucomatous medications (0.45 ± 0.80 vs. 1.24 ± 1.13; p = 0.0009, 52 weeks post-treatment).
Consistently, the achievement rate of an IOP ≤ 16 mmHg without antiglaucomatous medications was significantly higher in TE-treated eyes (65.8% vs. 31.6%; p = 0.0019). Re-interventions, including 10 secondary TEs, were commonly required in those eyes undergoing CPC, especially in younger patients. Conclusions: Trabeculectomy was demonstrated to be more effective in reducing IOP in comparison to fellow eyes receiving CPC. In particular, in younger patients, an additional TE in the CPC-treated eyes was necessary. The outcome of those secondary TEs, however, was comparable to that of the primarily performed TEs. Our study thus supports the use of CPC as a tool to control IOP, especially in the context of bilateral refractory glaucoma. abstract_id: PUBMED:28854348 Changing Initial Glaucoma Medical Therapy Increases Healthcare Resource Utilization. Purpose: To determine the frequency and economic impact of changing initial glaucoma therapy for patients with newly diagnosed open-angle glaucoma (OAG) or ocular hypertension (OHT). Methods: This retrospective longitudinal cohort study identified individuals within a large managed care network in the United States who were newly diagnosed with OAG or OHT from 2001 to 2012 and were prescribed either a topical beta blocker (BB) or a prostaglandin analog (PGA). Claims data were analyzed over the 12-month period following their index prescription to determine physician prescribing habits, healthcare resource utilization patterns, and sociodemographic factors which may have contributed to changing the initial treatment strategy. Results: A total of 15,019 beneficiaries were identified with newly diagnosed OAG or OHT and whose index therapy was either a topical BB or PGA. Among these enrollees, 80.9% were started on PGAs, while 19.1% were started on BBs. Of these beneficiaries, 29.2% of those started on PGAs and 39.5% of those started on BBs underwent a change in therapy within 12 months of their index prescription. Those in the topical BB treatment group had a 38% increased odds of changing glaucoma therapy relative to those started on PGAs (odds ratio [OR] 0.61, 95% CI: 0.56-0.68). Patients who changed therapy required more frequent office visits (P < 0.0001) and incurred higher median eye care-related charges (P < 0.0001) compared to those who remained on the index therapy unchanged. Conclusions: Changing initial ocular hypotensive therapy is common. Individuals who underwent a change in therapy required more frequent face-to-face monitoring and incurred higher healthcare-related costs. Strategies capable of optimizing the process of initiating ocular hypotensive therapy are appealing and possess the potential to improve patient outcomes and reduce healthcare costs. abstract_id: PUBMED:18321581 Rates of glaucoma medication utilization among persons with primary open-angle glaucoma, 1992 to 2002. Objective: To determine the percentage of Medicare beneficiaries with primary open-angle glaucoma (POAG) treated medically or surgically, utilization rates for each major class of glaucoma medication, and factors influencing treatment. Design: Longitudinal observational study using data from the Medicare Current Beneficiary Survey (MCBS). Participants: Persons age 65 and older with POAG, 1992 to 2002 (N = 6446). Methods: By using MCBS data merged with Medicare claims, rates of medical and surgical treatment for participants with POAG were determined. Logistic analysis was used to assess factors associated with use of care.
Main Outcome Measures: Receipt/nonreceipt of medical or surgical therapy in a year and rates of drug utilization by class and of surgery by type among persons who did not receive medical therapy in a year. Results: On average from 1992 to 2002, 27.4% of persons diagnosed with POAG received no medical or surgical treatment. Rates of nonuse increased by 3% annually (odds ratio [OR], 1.03; 95% confidence interval [CI], 1.02-1.05). Beneficiaries with Medicaid were 43% more likely not to receive care for POAG in a year (OR, 1.43; 95% CI, 1.20-1.70). Hispanic and Asian beneficiaries and those of other race/ethnicity were less likely to receive treatment than were whites. Use of beta-blockers and miotics decreased, but utilization rates increased substantially for alpha-agonists, combination beta-blocker-carbonic anhydrase inhibitors, and especially prostaglandin analogs. Conclusions: Despite availability of more efficacious glaucoma medication classes with few side effects and findings of clinical trials underscoring the importance of intraocular pressure reduction in POAG patients, many patients with POAG continue to go untreated. abstract_id: PUBMED:21728183 Mixed treatment comparison of repeated measurements of a continuous endpoint: an example using topical treatments for primary open-angle glaucoma and ocular hypertension. Mixed treatment comparison (MTC) meta-analyses estimate relative treatment effects from networks of evidence while preserving randomisation. We extend the MTC framework to allow for repeated measurements of a continuous endpoint that varies over time. We used, as a case study, a systematic review and meta-analysis of intraocular pressure (IOP) measurements from randomised controlled trials evaluating topical ocular hypotensives in primary open-angle glaucoma or ocular hypertension because IOP varies over the day and over the treatment course, and repeated measurements are frequently reported. We adopted models for conducting MTC in WinBUGS (The BUGS Project, Cambridge, UK) to allow for repeated IOP measurements and to impute missing standard deviations of the raw data using the predictive distribution from observations with standard deviations. A flexible model with an unconstrained baseline for IOP variations over time and time-invariant random treatment effects fitted the data well. We also adopted repeated measures models to allow for class effects; assuming treatment effects to be exchangeable within classes slightly improved model fit but could bias estimated treatment effects if exchangeability assumptions were not valid. We enabled all timepoints to be included in the analysis, allowing for repeated measures to increase precision around treatment effects and avoid bias associated with selecting timepoints for meta-analysis. The methods we developed for modelling repeated measures and allowing for missing data may be adapted for use in other MTC meta-analyses. Answer: The study comparing resource utilization in the treatment of open-angle glaucoma between two cities in Finland found that patients in the Oulu district consumed more resources and had more expensive glaucoma treatment than those in the Turku area. The total treatment cost over an 11-year period was 6010 € in Oulu and 4452 € in Turku. Despite the higher costs and resource consumption in Oulu, there was no statistically significant difference in health-related quality of life (HRQoL) scores between the two areas as assessed by the 15D instrument.
Additionally, 40% of patients did not display any structural or functional damage suggesting glaucoma after 11 years of continuous medical treatment and follow-up. Patients with a higher glaucoma stage (worse disease) had statistically higher treatment costs and resource consumption, yet this greater spending did not translate into better HRQoL. The major cost source in open-angle glaucoma treatment was medication, accounting for up to 74% of annual costs. The study concluded that higher resource consumption and treatment costs do not necessarily increase patients' HRQoL (PUBMED:23621767).
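The answer turns on two computed quantities, per-patient treatment cost and a 15D quality-of-life score, compared between two centres. The snippet below is a generic, hypothetical illustration of that kind of comparison; the numbers are simulated around the reported cost totals and the test choice is ours, not necessarily the one used in the study. It simply shows that higher average spending in one group need not show up as a difference in quality-of-life scores.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated 11-year per-patient treatment costs (EUR); the means loosely
# echo the reported totals (about 6010 vs 4452 EUR), everything else is invented.
costs_oulu = rng.normal(loc=6010, scale=1500, size=85)
costs_turku = rng.normal(loc=4452, scale=1200, size=83)

# Simulated 15D scores (0-1 scale) drawn from the same distribution,
# i.e. no true difference between the two areas.
hrqol_oulu = rng.normal(loc=0.85, scale=0.08, size=85)
hrqol_turku = rng.normal(loc=0.85, scale=0.08, size=83)

# Nonparametric two-sample comparisons.
cost_test = stats.mannwhitneyu(costs_oulu, costs_turku, alternative="two-sided")
hrqol_test = stats.mannwhitneyu(hrqol_oulu, hrqol_turku, alternative="two-sided")

print(f"cost difference: U={cost_test.statistic:.0f}, p={cost_test.pvalue:.3g}")
print(f"15D difference:  U={hrqol_test.statistic:.0f}, p={hrqol_test.pvalue:.3g}")
# With these inputs the cost comparison is typically highly significant
# while the 15D comparison is not, mirroring the pattern in the answer above.
```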
Instruction: Effect of temozolomide on male gametes: an epigenetic risk to the offspring? Abstracts: abstract_id: PUBMED:23652788 Effect of temozolomide on male gametes: an epigenetic risk to the offspring? Introduction: Temozolomide is an oral alkylating agent with proven efficacy in recurrent high-grade glioma. The antitumour activity of this molecule is attributed to the inhibition of replication through DNA methylation. However, this methylation may also perturb other DNA-dependent processes, such as spermatogenesis. The ability to father a child may be affected by having this treatment. Here we report a pregnancy and a baby born after 6 cycles of temozolomide. Methods: The quality of the gametes of the father was studied throughout these cycles and after the cessation of treatment. Sperm parameters, chromosomal content and epigenetic profiles of H19, MEST and MGMT were analysed. Results: Sperm counts decreased significantly, and hypomethylation of the H19 locus increased with time while still remaining within the normal range. Conclusion: This is the first report of an epigenetic modification in sperm after temozolomide treatment, suggesting a potential risk for the offspring. Sperm cryopreservation before the initiation of temozolomide treatment should be recommended. abstract_id: PUBMED:31085184 The humanin analogue (HNG) prevents temozolomide-induced male germ cell apoptosis and other adverse effects in severe combined immuno-deficiency (SCID) mice bearing human medulloblastoma. Subfertility is a major concern of long-term cancer survivors of reproductive age. We have previously demonstrated that a potent humanin analogue, HNG, protected germ cells, but not cancer cells, from chemotherapy-induced apoptosis in a metastatic melanoma allograft model. In this study, we utilized severe combined immuno-deficiency (SCID) mice bearing human medulloblastoma to study the effect of HNG on temozolomide (TMZ)-induced male germ cell apoptosis and white blood cell (WBC) suppression. Human medulloblastoma DAOY cells were injected subcutaneously into the right flank of male SCID mice. Three weeks later, groups of tumor-bearing mice received one of the following treatments: vehicle, HNG, TMZ, or TMZ + HNG. Twenty-four hours after the last injection, tumor weights, complete blood counts, liver and spleen weights, and male germ cell apoptosis were assessed. HNG did not affect TMZ's significant anti-tumor action. HNG significantly prevented TMZ-induced germ cell apoptosis and attenuated the suppression of total WBC and granulocyte counts in SCID mice with or without TMZ treatment. HNG also attenuated TMZ-induced body weight loss and decreases in spleen and liver weights. In conclusion, HNG ameliorated TMZ-induced germ cell apoptosis, WBC and granulocyte loss, and decreased body/organ weights without compromising TMZ's anti-cancer action on medulloblastoma xenografts in SCID mice. abstract_id: PUBMED:29485076 Risk Factors for Malignant Transformation of Low-Grade Glioma. Purpose: The incidence, risk factors, and outcomes of low-grade glioma patients who undergo malignant transformation (MT) in the era of temozolomide are not well known. This study evaluates these factors in a large group of World Health Organization grade 2 glioma patients treated at a tertiary-care institution. Methods And Materials: Patient, tumor, and treatment factors were analyzed using an institutional review board-approved low-grade glioma database. Characteristics were compared using χ² and Wilcoxon signed rank tests.
Time to event was summarized using proportional hazards models. Univariate and multivariate survival analyses were performed. Results: Of a total of 599 patients, 124 underwent MT; 76 (61.3%) had biopsy-proven MT. The MT incidence was 21%, and the median time to MT was 56.4 months. The 5- and 10-year progression-free survival rates were 30.6% ± 4.2% and 4.8% ± 1.9%, respectively, for MT patients and 60% ± 2.4% and 38% ± 2.7%, respectively, for non-MT patients. The 5- and 10-year overall survival rates were 75% ± 4.0% and 46% ± 5.0%, respectively, for MT patients and 87% ± 1.7% and 78% ± 2.3%, respectively, for non-MT patients. On multivariate analysis, older age (P = .001), male sex (P = .004), multiple tumor locations (P = .004), chemotherapy alone (P = .012), and extent of resection (P = .045) remained significant predictors of MT. Conclusions: MT affects survival. Risk factors include older age, male sex, multiple tumor locations, use of chemotherapy alone, and presence of residual disease. Our finding that initial interventions could affect the rate of MT is provocative, but these data should be validated using data from prospective trials. In addition to improving survival, future therapeutic efforts should focus on preventing MT. abstract_id: PUBMED:19179423 Risk analysis of severe myelotoxicity with temozolomide: the effects of clinical and genetic factors. A benefit of temozolomide (TMZ) is that myelotoxicity is uncommon. Recently, several small series have reported significant myelotoxicity resulting in treatment delays or death. The ability to predict risk of myelotoxicity may influence patient care. We retrospectively reviewed 680 malignant glioma patients and developed a clinical risk formula for myelotoxicity for each gender by logistic regression. The variables that remained were assigned a score of 1 and added together for a final risk score. Women experienced more myelotoxicity than did men (p = 0.015). For males, risk factors included body surface area (BSA) ≥ 2 m² (odds ratio [OR] = 2.712, p = 0.04), not on steroids (OR = 2.214, p = 0.06), and on bowel medication (OR = 3.955, p = 0.008). For females, final factors included no prior chemotherapy (OR = 3.727, p = 0.001), creatinine ≥ 1 mg/dl (OR = 6.08, p = 0.002), platelets < 270,000/mm³ (OR = 2.438, p = 0.03), BSA < 2 m² (OR = 4.178, p = 0.04), not on medication for gastroesophageal reflux disease (OR = 2.942, p = 0.01), and on analgesics (OR = 2.169, p = 0.05). Age was included because of observable trends. Risk of developing myelotoxicity ranged from 0% to 33% (male) and from 0% to 100% (females). Polymorphisms in NQO1 (NAD(P)H dehydrogenase, quinone 1), MGMT (O6-methylguanine-DNA methyltransferase), and GSTP1 (glutathione S-transferase pi 1) were related to risk of developing myelotoxicity in a subset of patients. Myelotoxicity with TMZ is a significant clinical issue for those at risk. Use of a clinical model to predict risk and evaluation of identified genetic polymorphisms related to myelotoxicity may allow for individualized dosing, optimizing patient management. abstract_id: PUBMED:32019288 Risk Factors for Cognitive Impairment in High-Grade Glioma Patients Treated with Postoperative Radiochemotherapy. Purpose: Fractionated radiotherapy as well as concomitant and adjuvant chemotherapy such as temozolomide for postoperative high-grade glioma (HGG) patients improves progression-free survival and overall survival.
Multiple factors such as chemotherapy, radiotherapy, tumor grade, residual tumor volume, and genetic modifications might play a role in the formation of cognitive impairment. The risk factors for cognitive impairment in postoperative patients with HGG receiving radiotherapy and chemotherapy remain a concern in this population. The purpose of this study was to identify risk factors for cognitive impairment in postoperative patients with HGG. Materials And Methods: A total of 229 patients with HGG who underwent surgery were analyzed. Cognitive impairment was defined as a decrease in the Montreal Cognitive Assessment (MoCA) score in at least two cognitive domains, or any MoCA score of less than 26 points, at the time of the study compared with the baseline level. Multiple potential risk factors including methylated status of the O6-methylguanine-DNA methyltransferase (MGMT) promoter, glioma World Health Organization (WHO) grade, residual tumor volume, education, and sex were analyzed. Cox univariate and multivariate regression analysis was used to detect the significant risk factors for cognitive impairment. Results: At the end of follow-up among the 229 patients, 147 patients (67%) developed cognitive impairment, and 82 patients (36%) remained in normal cognitive condition. In multivariate analysis, unmethylated MGMT promoter (hazard ratio [HR], 1.679; 95% confidence interval [CI], 1.212 to 2.326; p=0.002), glioblastoma (HR, 1.550; 95% CI, 1.117 to 2.149; p=0.009), and residual tumor volume > 5.58 cm³ (HR, 1.454; 95% CI, 1.047 to 2.020; p=0.026) were independent risk factors for cognitive impairment. Conclusion: Methylated status of the MGMT promoter, glioma WHO grade, and residual tumor volume might be risk factors for cognitive impairment in postoperative patients with HGG. abstract_id: PUBMED:24141059 Seroprevalence and risk factors of Chlamydia infection in dogs in Southwestern China. Chlamydia spp. are obligate intracellular bacteria distributed globally, known to cause various forms of diseases in animals and humans. To date, there is limited information about the seroprevalence of Chlamydia and the risk factors associated with Chlamydia infection in dogs worldwide. In the present study, a serological survey was undertaken to examine the seroprevalence and risk factors associated with dog chlamydiosis in Yunnan Province, southwestern China. A total of 591 dogs were sampled, and antibodies to Chlamydia were determined by indirect hemagglutination assay (IHA). The overall seroprevalence was estimated at 17.6%. The risk factors associated with seroprevalence were determined by a multivariate logistic regression analysis. Gender and age of dogs were not significant in the logistic regression analysis (P>0.05) and were left out of the final model. Type and geographical origin of dogs were considered the main risk factors associated with Chlamydia infection: stray dogs (31.37%) had more than 16 times the risk of acquiring the infection (OR=16.167, 95% CI=6.283-41.599, P<0.01) compared to police dogs (7.62%), while pet dogs (14.41%) had a 3 times higher risk (OR=2.968, 95% CI=1.349-6.529, P=0.007). Positive dogs were found in 5 districts of Yunnan Province, with prevalence ranging from 2.56% to 31.67%, except Diqing (0/56). Dogs in Kunming (20.21%) had a 9 times higher risk of being seropositive compared to dogs in Lijiang (2.56%) (OR=9.057, 95% CI=1.211-67.714, P=0.032), although no regional differences were found in the other 4 administrative divisions compared to Lijiang (P>0.05).
Our study revealed a widespread and high prevalence of Chlamydia infection in dogs in Yunnan Province, southwestern China, with higher exposure risk in stray dogs and a distinct geographical distribution. These findings suggest the potential importance of dogs in the transmission of zoonotic Chlamydia infection, and thus Chlamydia should be taken into consideration in diagnosing dog diseases. abstract_id: PUBMED:35061210 High-risk pituitary adenomas and strategies for predicting response to treatment. High-risk pituitary adenomas are aggressive. They show clinical and imaging features similar to those of carcinomas, including infiltration of the surrounding brain structures, but lack cerebrospinal or systemic metastases. In addition, they display distinct behavior, including a tendency for fast growth and frequent recurrences, which are difficult to control. The term "high-risk" adenoma was first introduced in the 4th edition of the World Health Organization Classification of Endocrine Tumors in 2017. Five defined adenoma types belong to this category, including sparsely granulated somatotroph, lactotroph in men, Crooke cell, silent corticotroph, and plurihormonal PIT-1 positive adenomas. The morphological and immunohistochemical characteristics of high-risk adenomas are herein described in detail. In addition, the clinical features and the treatment options are presented. This review focuses on predictive markers assessed by immunohistochemistry, which help clinicians to design the appropriate treatment strategies for high-risk adenomas. Somatostatin receptor status predicts effectiveness of postsurgical treatment with somatostatin analogs, and MGMT expression predicts response to treatment with temozolomide. This comprehensive review presents the clinical and pathological features of high-risk pituitary adenomas, underlines the contribution of immunohistochemistry, and emphasizes the leading role of pathology in the design of optimal clinical management. abstract_id: PUBMED:22180305 NK cell immunotherapy for high-risk neuroblastoma relapse after haploidentical HSCT. Little is known about strategies to prevent or to treat relapses occurring after haploidentical stem cell transplantation (haplo-HSCT) performed for high-risk neuroblastoma (NB). We describe a 6-year-old male with refractory NB who relapsed 22 months after haplo-HSCT. A complete remission was obtained with a combination of immuno-chemotherapy based on donor NK cell transplants, IL-2 infusions and temozolomide/topotecan. This case is an incentive to explore both the immediate therapeutic effect of the haplo-graft provided via haplo-NK cells and the immunogenic platform that haplo-HSCT offers for future treatment. Our post-relapse strategy shows that chemo- and bio-treatment should be viewed as complementary therapeutic options. abstract_id: PUBMED:30341098 LGALS3 Promotes Treatment Resistance in Glioblastoma and Is Associated with Tumor Risk and Prognosis. Background: LGALS3 promotes tumor progression in diverse cancers. However, the involvement of LGALS3 in glioblastoma has not yet been broadly illuminated. Methods: Microarray was performed to detect the gene expression profiles of radioresistance in T98G cells and identified a universally upregulated gene, LGALS3. The impact of LGALS3 on the survival of glioblastoma cells facing ionizing irradiation or temozolomide was investigated by the Cell Counting Kit-8 (CCK-8).
A total of 120 glioblastoma cases were collected to analyze the relationship between LGALS3 expression and patient prognosis. Another 961 patients with glioma and 1,351 healthy controls were recruited to study the association of SNPs across the LGALS3 gene with glioblastoma susceptibility. The functional SNP sites were also studied in cellular experiments. Results: LGALS3 was found to effectively protect T98G and U251 cells from ionizing irradiation- or temozolomide-induced cell death. In addition, high expression of LGALS3 could work as an independent risk factor for survival of patients with glioblastoma. Two SNP sites (rs4644 and rs4652) across the LGALS3 gene were associated with increased risk for glioblastoma, and the C allele of rs4652 and the A allele of rs4644 could enhance glioblastoma resistance to radio-chemotherapy, but not cell proliferation. Conclusions: Our results suggest that LGALS3 is an important biomarker influencing glioblastoma risk and prognosis and a potential target for treating the malignancy, especially tumors with resistance to standard therapy. Impact: LGALS3 promotes glioblastoma cells' resistance to ionizing irradiation and temozolomide and predicts poor prognosis. Targeting LGALS3 may limit therapeutic resistance in glioblastoma and increase patient survival. abstract_id: PUBMED:31549281 Oligodendroglioma confers higher risk of radiation necrosis. Background: Radiation therapy (RT) remains a mainstay for the treatment of lower grade gliomas. Radiation neurotoxicity is a serious complication, carrying high morbidity in the absence of tumor progression. The incidence remains poorly categorized, and the known risk factors identified are related to the radiation modality. We hypothesized that patients with oligodendroglioma have a higher risk of radiation necrosis (RN) as compared to patients with astrocytoma. Methods: We conducted a retrospective review of adults with lower grade diffuse gliomas over a 10-year span. The primary outcome was RN, either pathologically confirmed or clinically diagnosed. Cases without pathological confirmation must have been symptomatic, requiring administration of bevacizumab or high-dose steroids. Cox proportional hazard ratios were used for multivariate analyses. Results: In 319 patients, we identified RN in 41 patients (12.9%): 28 patients (21.3%) with oligodendroglioma and 13 (6.9%) with astrocytoma (HR 3.42, p < 0.001). Patients with oligodendroglioma who received > 54 Gy had a higher incidence (31.2%) than those receiving ≤ 54 Gy (14.3%, HR 6.9, p = 0.002). There was no similar correlation among patients with astrocytoma. There was no difference in incidence based on use of concomitant temozolomide. Radiation necrosis appeared within 24 months from radiation in 80.5% of patients. Conclusion: Our study suggests that patients with oligodendroglioma are at higher risk of developing RN. The incidence increases with increasing radiation dose in patients with oligodendroglioma but not with astrocytoma. RN usually appears within 24 months from RT. Patients with oligodendroglioma receiving > 54 Gy are at highest risk. Answer: The use of temozolomide, an oral alkylating agent, has been shown to have an impact on male gametes, suggesting a potential epigenetic risk to offspring. A study reported a pregnancy and a baby born after the father underwent six cycles of temozolomide treatment.
The quality of the father's gametes was studied throughout these cycles and after the cessation of treatment, including sperm parameters, chromosomal content, and epigenetic profiles of H19, MEST, and MGMT. The results indicated that sperm counts decreased significantly and hypomethylation of the H19 locus increased with time, although it remained within the normal range. This was the first report of an epigenetic modification in sperm after temozolomide treatment, suggesting a potential risk for the offspring. Consequently, it was recommended that sperm cryopreservation should be considered before the initiation of temozolomide treatment (PUBMED:23652788). In summary, temozolomide treatment has been associated with significant changes in sperm counts and epigenetic modifications, which may pose a risk to the offspring, highlighting the importance of considering sperm preservation before treatment.
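The case report's key observation is a trend over successive treatment cycles: sperm counts fell while H19 methylation drifted downward but stayed within the normal range. As a purely hypothetical sketch of how such a trend could be quantified, the values below are invented and the simple linear-trend test is our choice, not the method reported in the paper.

```python
import numpy as np
from scipy import stats

# Hypothetical longitudinal measurements across temozolomide cycles
# (cycle 0 = before treatment); values are invented for illustration only.
cycle = np.array([0, 1, 2, 3, 4, 5, 6])
sperm_count = np.array([60, 55, 41, 33, 28, 22, 18])      # million/mL
h19_methylation = np.array([92, 90, 89, 87, 86, 85, 84])  # % methylated CpGs

# Simple linear trend over cycles for each measure.
count_trend = stats.linregress(cycle, sperm_count)
meth_trend = stats.linregress(cycle, h19_methylation)

print(f"sperm count slope: {count_trend.slope:.1f} million/mL per cycle "
      f"(p={count_trend.pvalue:.3g})")
print(f"H19 methylation slope: {meth_trend.slope:.2f} percentage points per cycle "
      f"(p={meth_trend.pvalue:.3g})")
# A negative methylation slope that nonetheless stays within the normal
# range would mirror the pattern described in the report.
```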
Instruction: Increased mortality at low-volume orthotopic heart transplantation centers: should current standards change? Abstracts: abstract_id: PUBMED:18805171 Increased mortality at low-volume orthotopic heart transplantation centers: should current standards change? Background: The Centers for Medicare and Medicaid Services (CMS) mandate that orthotopic heart transplantation (OHT) centers perform 10 transplants per year to qualify for funding. We sought to determine whether this cutoff is meaningful and establish recommendations for optimal center volume using the United Network for Organ Sharing (UNOS) registry. Methods: We reviewed UNOS data (years 1999 to 2006) identifying 14,401 first-time adult OHTs conducted at 143 centers. Stratification was by mean annual institution volume. Primary outcomes of 30-day and 1-year mortality were assessed by multivariable logistic regression (adjusted for comorbidities and risk factors for death). Sequential volume cutoffs were examined to determine if current CMS standards are optimal. Pseudo-R² and the area under the receiver operating characteristic curve assessed goodness of fit. Results: Mean annual volume ranged from 1 to 90. One-year mortality was 12.6% (n = 1,800). Increased center volume was associated with decreased 30-day mortality (p < 0.001). Decreased center volume was associated with increases in 30-day (odds ratio [OR] 1.03, 95% confidence interval [CI]: 1.02 to 1.03, p < 0.001) and 1-year mortality (OR 1.01, 95% CI: 1.01 to 1.02, p = 0.03; censored for 30-day death). The greatest mortality risk occurred at very low volume centers (≤ 2 cases: 2.15 times increase in death, p = 0.03). Annual institutional volume of fewer than 10 cases per year increased 30-day mortality by more than 100% (OR 2.02, 95% CI: 1.46 to 2.80, p < 0.001), and each decrease in mean center volume by one case per year increased the odds of 30-day mortality by 2% (OR 1.02, 95% CI: 1.01 to 1.03, p < 0.001). Additionally, centers performing fewer than 10 OHTs per year had increased cumulative mortality by Cox proportional hazards regression (hazard ratio 1.35, 95% CI: 1.14 to 1.60, p < 0.001). Sequential multivariable analyses suggested that current CMS standards may not be optimal, as all centers performing more than 40 transplants per year demonstrated less than 5% 30-day mortality. Conclusions: Annual center volume is an independent predictor of short-term mortality in OHT. These data support reevaluation of the current CMS volume cutoff for OHT, as high-volume centers achieve lower mortality. abstract_id: PUBMED:26298167 The Effect of Institutional Volume on Complications and Their Impact on Mortality After Pediatric Heart Transplantation. Background: This study evaluated the potential association of institutional volume with survival and mortality subsequent to major complications in a modern cohort of pediatric patients after orthotopic heart transplantation (OHT). Methods: The United Network for Organ Sharing database was queried for pediatric patients (aged ≤18 years) undergoing OHT between 2000 and 2010. Institutional volume was defined as the average number of transplants completed annually during each institution's active period and was evaluated as a categoric and as a continuous variable. Logistic regression models were used to determine the effect of institutional volumes on postoperative outcomes, which included renal failure, stroke, rejection, reoperation, infection, and a composite complication outcome.
Cox modeling was used to analyze the risk-adjusted effect of institutional volume on 30-day, 1-year, and 5-year mortality. Kaplan-Meier estimates were used to compare differences in unconditional survival. Results: A total of 3,562 patients (111 institutions) were included and stratified into low-volume (<6.5 transplants/year, 91 institutions), intermediate-volume (6.5 to 12.5 transplants/year, 12 institutions), and high-volume (>12.5 transplants/year, 8 institutions) tertiles. Unadjusted survival was significantly different at 30 days (p = 0.0087) in the low-volume tertile (94.2%; 95% confidence interval, 92.7% to 95.4%) compared with the high-volume tertile (96.8%; 95% confidence interval, 95.7% to 97.7%). No difference was observed at 1 or 5 years. Risk-adjusted Cox modeling demonstrated that low-volume institutions had an increased rate of mortality at 30 days (hazard ratio, 1.91; 95% confidence interval, 1.02 to 3.59; p = 0.044), but not at 1 or 5 years. High-volume institutions had lower incidences of postoperative complications than low-volume institutions (30.3% vs 38.4%, p < 0.001). Despite this difference in the rate of complications, survival in patients with a postoperative complication was similar across the volume tertiles. Conclusions: No association was observed between institutional volume and adjusted or unadjusted long-term survival. High-volume institutions have a significantly lower rate of postoperative complications after pediatric OHT. This association does not correlate with increased subsequent mortality in low-volume institutions. Given these findings, strategies integral to the allocation of allografts in adult transplantation, such as regionalization of care, may not be as relevant to pediatric OHT. abstract_id: PUBMED:22964734 Institutional factors beyond procedural volume significantly impact center variability in outcomes after orthotopic heart transplantation. Objective: To evaluate the contribution of institutional volume and other unmeasured institutional factors beyond volume to the between-center variability in outcomes after orthotopic heart transplantation (OHT). Background: It is unclear if institutional factors beyond volume have a significant impact on OHT outcomes. Methods: The United Network for Organ Sharing registry was used to identify OHTs performed between 2000 and 2010. Separate mixed-effect logistic regression models were constructed, with the primary endpoint being post-OHT mortality. Model A included only individual centers, model B added validated recipient and donor risk indices as well as the year of transplantation, and model C added institutional volume as a continuous variable to model B. The reduction in between-center variability in mortality between models B and C was used to define the contribution of institutional volume. Kaplan-Meier survival curves were also compared after stratifying patients into equal-size tertiles based on center volume. Results: A total of 119 centers performed OHT in 19,156 patients. After adjusting for transplantation year and differences in recipient and donor risk, decreasing center volume was associated with an increased risk of 1-year mortality (P < 0.001). However, procedural volume only accounted for 16.7% of the variability in mortality between centers, and significant between-center variability persisted after adjusting for institutional volume (P < 0.001).
In Kaplan-Meier analysis, there was significant variability in 1-year survival between centers within each volume category: low-volume (66.7%-96.6%), intermediate-volume (80.7%-97.3%), and high-volume (83.8%-93.9%). These trends were also observed with 5-year mortality. Conclusions: This large-cohort analysis demonstrates that although institutional volume is a significant predictor of post-OHT outcomes, there are other unmeasured institutional factors that contribute substantially to the between-center variability in outcomes. Institutional volume should therefore not be the sole indicator of "center quality" in OHT. abstract_id: PUBMED:26296959 The influence of institutional volume on the incidence of complications and their effect on mortality after heart transplantation. Background: The aim of this study was to determine whether institutional volume influenced the effect of postoperative complications on short-term and long-term survival after orthotopic heart transplantation (OHT). Methods: The United Network for Organ Sharing database was queried for adult patients (aged ≥18 years) undergoing OHT between 2000 and 2010. Average institutional volume was calculated during the study period and modeled as a categoric and as a continuous variable. Postoperative complications included rejection, dialysis dependence, infection, stroke, reoperation, and a composite event. Kaplan-Meier estimates and Cox regression modeling were performed for each complication to categorize the unadjusted and adjusted influence of institutional volume on survival. Results: The analysis included 19,849 OHT recipients who were stratified into low-volume (≤14.5 per year), intermediate-volume (14.5-26.5 per year), and high-volume (>26.5 per year) tertiles. The overall incidences of postoperative complications were 10.2% for rejection, 7.8% for dialysis dependence, 12.0% for reoperation, 24.1% for infection, and 2.3% for stroke. Recipients in low-volume institutions experienced more complications after OHT than high-volume institutions (43.4% vs 36.2%; p < 0.001). Survival after the composite complication outcome was significantly worse at 90 days, 1 year, and 5 years in the low-volume cohort. After risk adjustment, low institutional volume (when modeled as a continuous and as a categoric variable) was also independently predictive of mortality at each time point. As expected, survival at 5 years in patients without a postoperative complication (81%; 95% confidence interval [CI], 80.0%-82.8%) was statistically greater (p < 0.001) than those with 1 (72.8%; 95% CI, 69.9%-75.5%), 2 (59.8%; 95% CI, 54.4%-64.8%), or 3 (39.9%; 95% CI, 31.6%-48.2%) complications. Conclusions: Postoperative complications after OHT have a greater incidence and effect on short-term and long-term survival at low-volume institutions. Accordingly, best practice guidelines established at high-volume institutions could better equip lower-volume hospitals to manage these events in hopes of optimizing transplant outcomes. abstract_id: PUBMED:35012779 Redo orthotopic heart transplantation in the current era. Objective: This study aims to investigate the trends, outcomes, and risk factors for mortality after redo orthotopic heart transplantation. Methods: The United Network for Organ Sharing registry was used to identify adult orthotopic heart transplantation recipients from 2000 to 2020 and stratify into primary and redo cohorts. Five-year post-transplant survival was compared between 2 propensity-matched cohorts.
Multivariable modeling was performed to identify risk-adjusted predictors of redo post-transplant mortality, both conditional and nonconditional on shorter-term survival. Results: A total of 40,711 recipients were analyzed, 39,657 (97.4%) primary and 1054 (2.6%) redo. Redo recipients had a lower median age and were more frequently bridged with intravenous inotropes, intra-aortic balloon pump, or extracorporeal membrane oxygenation (all P < .05). One- and 5-year survivals were lower after redo orthotopic heart transplantation (90.0% vs 83.4% and 77.6% vs 68.6%, respectively) and remained lower after comparing 2 propensity-matched cohorts. Multivariable modeling found factors such as increasing donor age and graft ischemic times, along with pretransplant mechanical ventilation and blood transfusion, to negatively affect 90-day survival. Contingent on 1-year survival, donor factors such as hypertension (hazard ratio, 1.51; 95% confidence interval, 1.15-2.00; P = .004) and left ventricular ejection fraction less than 50% (hazard ratio, 2.22; 95% confidence interval, 1.16-4.24; P = .016) negatively affected survival at 5 years. Conclusions: Although infrequently performed, redo orthotopic heart transplantation remains associated with worse post-transplant outcomes compared with primary orthotopic heart transplantation. Although several high-risk features were identified to affect post-retransplant outcomes in the acute perioperative period, donor characteristics such as hypertension and decreased ejection fraction continue to have lasting negative impacts in the longer term. abstract_id: PUBMED:31019175 Institutional Case-Volume and Mortality After Heart Transplantation. The relationship between lower institutional case-volume and higher mortality after complex high-risk procedures has been shown. The aim of this study is to examine the effect of institutional volume on patient outcome after heart transplantation (HT) in the entire Korean population. We analyzed all adult HTs performed in Korea between 2007 and 2016 using data from the National Health Insurance Service. The association between case-volume and in-hospital mortality after HT was analyzed after categorizing hospitals performing HT into low-, medium-, or high-volume centers depending on the number of HTs performed. The effect of case-volume on long-term mortality was also assessed. A total of 833 adult HTs were performed in 17 centers. In-hospital mortality was 3.7% (13/356), 10.1% (38/375), and 18.6% (19/102) in high-, medium-, and low-volume centers, respectively. Medium- and low-volume centers showed increased risk of in-hospital mortality (odds ratio [95% confidence interval]: 2.11 [1.42-3.13] and 3.68 [2.16-2.27], respectively). Long-term survival of up to 10 years was worse in lower-volume centers compared to high-volume centers (P < 0.001). In conclusion, lower case-volume was associated with increased in-hospital mortality and long-term mortality after HT. A minimum case-volume mandate may be required for hospitals performing HT to ensure the best patient outcome and effective resource allocation. abstract_id: PUBMED:28331331 The influence of surgical technique on early posttransplant atrial fibrillation - comparison of biatrial, bicaval, and total orthotopic heart transplantation. Purpose: Early posttransplant atrial fibrillation (AF) has been associated with worse clinical outcomes after heart transplantation (HTX). The type of surgical technique may constitute a relevant risk factor for AF.
Patients And Methods: This retrospective single-center study included 530 adult patients. Patients were stratified by surgical technique (biatrial, bicaval, or total orthotopic HTX) and early posttransplant heart rhythm (AF or sinus rhythm). Univariate and multivariate analyses were performed to evaluate risk factors for AF. Results: A total of 161 patients received biatrial HTX (30.4%), 115 bicaval HTX (21.7%), and 254 total orthotopic HTX (47.9%). Sixty-one of 530 patients developed early posttransplant AF (11.5%). Patients with AF showed a statistically inferior 5-year survival compared to those with sinus rhythm (P<0.0001). Total orthotopic HTX had the lowest rate of AF (total orthotopic HTX [6.3%], bicaval HTX [14.8%], biatrial HTX [17.4%], P=0.0012). Multivariate analysis showed pretransplant valvular heart disease (P=0.0372), posttransplant enlarged left atrium (P=0.0066), posttransplant mitral regurgitation (P=0.0370), and non-total orthotopic HTX (P=0.0112) as risk factors for AF. Conclusion: Early posttransplant AF was associated with increased mortality (P<0.0001). Total orthotopic HTX showed the lowest rate of AF compared to biatrial or bicaval HTX (P=0.0012). abstract_id: PUBMED:37709206 Higher-risk SIPAT score predicts increased risk of long-term mortality in orthotopic heart transplant recipients. Background: Orthotopic heart transplantation (OHT) improves survival in eligible patients. Organ scarcity necessitates extensive clinical and psychosocial evaluations before listing. The Stanford Integrated Psychosocial Assessment for Transplant (SIPAT) predicts risk for poor psychosocial outcomes and morbidity in the first year post-transplant, yet it is unknown whether it predicts long-term outcomes. Methods: Blinded examiners obtained data from a retrospective cohort of 51 OHT recipients from a high-volume center. Patients with "Excellent" or "Good" SIPAT score indicating low psychosocial risk for transplant (E/G) were compared with those who met "Minimum Acceptable Criteria" or were "High Risk" (MAC/HR). Associations were examined between SIPAT group and outcomes. Results: MAC/HR versus E/G recipients had significantly reduced survival in the 10 years post-OHT (mean 6.7 vs 8.8 years, p = 0.027; 55% vs 82% survival proportions, p = 0.037). MAC/HR patients were more likely to live in a county with greater income inequality (p = 0.025) and have psychiatric history pre-OHT (p = 0.046). Both groups had otherwise similar demographics and medical history. A lower proportion of MAC/HR patients adhered to medications post-OHT and a greater proportion had psychiatric illness, though differences were not significant. Conclusions: Higher-risk SIPAT scores predict reduced long-term survival post-OHT. Further efforts are crucial to improve outcomes in higher-risk patients. abstract_id: PUBMED:22172752 Institutional volume and the effect of recipient risk on short-term mortality after orthotopic heart transplant. Objective: We developed a validated 50-point recipient risk index predicting short-term mortality after orthotopic heart transplant (OHT). This study examined the relationship between institutional volume and recipient risk on post-OHT mortality. Methods: We used United Network for Organ Sharing (UNOS) data to identify primary OHT recipients between January 2000 and April 2010. Centers were stratified by mean annual volume. Preoperative Index for Mortality Prediction After Cardiac Transplantation risk scores were calculated for each patient with our validated 50-point system.
Primary outcomes were 30-day and 1-year survivals. Multivariable logistic regression analysis included interaction terms to examine effect modification of risk and volume on mortality. Results: In all, 18,226 patients underwent transplant at 141 centers: 1173 (6.4%) recipients at low-volume centers (<7 procedures/y), 5353 (29.4%) at medium-volume centers (7-15 procedures/y), and 11,700 (64.2%) at high-volume centers (>15 procedures/y). Low center volume was associated with worse 1-year mortality (odds ratio, 1.58; 95% confidence interval, 1.30-1.92; P < .001). For 1-year survival, there was significant positive interaction between center volume and recipient risk score (odds ratio, 1.04; 95% confidence interval, 1.01-1.07; P = .02), indicating an effect of risk on mortality at low-volume centers greater than that of either variable analyzed individually. Among high-risk recipients (score ≥10), 1-year survival was improved at high-volume centers (high, 79%; medium, 75%; low, 64%). Conclusions: In analysis of UNOS data with our validated recipient risk index, institutional volume acted as an effect modifier on the association between risk and mortality. High-risk patients had higher mortality at low-volume centers; differences dissipated among lower-risk recipients. These data support a mandate for high-risk transplants at higher-volume centers. abstract_id: PUBMED:3301030 Effect of cardiac rejection on blood volume The variations of blood volume after autotransplantation have been documented recently by our group. Since, in the homotransplanted heart, acute rejection affects cardiac function, we felt that the blood volume variations might be different from those of the autotransplants. In thirty-one immunosuppressed dogs with an orthotopic homotransplanted heart, daily assessments of blood volume with iodine-131-tagged albumin were carried out from the preoperative period (control) until death from acute rejection. In the immediate postoperative period there was a significant fall of 21% in the blood volume, caused by a 20% fall in plasmatic volume and a 23% fall in erythrocyte volume. From the second to the fifth postoperative day, in spite of persistence of a low erythrocyte volume, the blood volume progressed to normal due to a compensatory increase of the plasmatic volume. In the terminal phase of rejection there was a 32% increase of the plasmatic volume leading to an 8% increase of the total blood volume. Answer: The question of whether current standards should change in light of increased mortality at low-volume orthotopic heart transplantation (OHT) centers is addressed by several studies that have found a correlation between lower institutional volume and higher mortality rates post-transplantation. One study found that annual center volume is an independent predictor of short-term mortality in OHT, with increased mortality risk occurring at very low volume centers (≤2 cases per year). Centers performing fewer than 10 OHTs per year had increased 30-day mortality by more than 100%, suggesting that the current Centers for Medicare and Medicaid Services (CMS) volume cutoff of 10 transplants per year may not be optimal, as high-volume centers achieve lower mortality (PUBMED:18805171). Another study focusing on pediatric OHT found that high-volume institutions had a significantly lower rate of postoperative complications, although this did not correlate with increased subsequent mortality in low-volume institutions.
This suggests that strategies such as regionalization of care, which are integral to adult transplantation, may not be as relevant to pediatric OHT (PUBMED:26298167). Further research indicated that while institutional volume is a significant predictor of post-OHT outcomes, there are other unmeasured institutional factors that contribute substantially to the between-center variability in outcomes. This suggests that institutional volume should not be the sole indicator of "center quality" in OHT (PUBMED:22964734). An additional study found that postoperative complications after OHT have a greater incidence and effect on short-term and long-term survival at low-volume institutions. This suggests that best practice guidelines established at high-volume institutions could better equip lower-volume hospitals to manage these events in hopes of optimizing transplant outcomes (PUBMED:26296959). In the context of redo OHT, it was found that redo recipients had lower survival rates compared to primary OHT recipients, with several high-risk features identified that affect post-retransplant outcomes in the acute perioperative period (PUBMED:35012779). A study in Korea also demonstrated that lower case-volume was associated with increased in-hospital mortality and long-term mortality after HT, suggesting that a minimum case-volume mandate may be required for hospitals performing HT to ensure the best patient outcome and effective resource allocation (PUBMED:31019175). In summary, the evidence suggests that current standards may need to be reevaluated, with consideration given to not only the volume of transplants performed at a center but also other institutional factors that impact outcomes. This could lead to changes in policy and practice to improve patient survival rates following OHT.
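Several of the studies summarized in this answer rely on multivariable logistic regression, and PUBMED:22172752 in particular adds a risk-by-volume interaction term to test effect modification. The sketch below is purely illustrative of that general modeling pattern and is not the authors' code: the dataset is simulated, and the column names, volume cut points, and coefficients are hypothetical assumptions.

```python
# Illustrative sketch only: a risk-by-volume interaction (effect modification)
# logistic regression of the general kind described in PUBMED:22172752.
# All data are simulated; names and cut points are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
risk = rng.integers(0, 20, n)                     # hypothetical recipient risk score
volume = rng.choice([4, 10, 25], size=n)          # hypothetical mean annual center volume
# Simulate 30-day mortality so that recipient risk matters more at low-volume centers.
logit_p = -4 + 0.12 * risk + 0.05 * risk * (volume < 7)
death = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

df = pd.DataFrame({"death_30d": death, "risk_score": risk, "annual_volume": volume})
df["volume_group"] = pd.cut(df["annual_volume"], bins=[0, 7, 15, 100],
                            labels=["low", "medium", "high"])

# The interaction terms let the slope of risk_score differ across volume strata.
model = smf.logit("death_30d ~ risk_score * C(volume_group)", data=df).fit(disp=False)
print(model.summary())
```

Under this hypothetical setup, negative interaction coefficients for the medium- and high-volume strata (relative to the low-volume reference) would indicate that recipient risk has less impact on mortality at higher-volume centers; the published analyses additionally adjusted for donor and recipient covariates that are omitted here.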
Instruction: Optimal (≤1 cm) but visible residual disease: is extensive debulking warranted? Abstracts: abstract_id: PUBMED:23672929 Optimal (≤1 cm) but visible residual disease: is extensive debulking warranted? Objectives: To determine if extensive upper abdominal surgery (UAS) affected overall survival (OS) in patients left with ≤ 1 cm but visible residual disease after undergoing primary cytoreductive surgery for ovarian cancer. Our secondary objective was to determine if leaving ≤ 1 cm but visible residual disease throughout the small bowel (SB) conferred a worse prognosis. Methods: All stage IIIB-IV ovarian cancer patients who had visible but ≤ 1 cm residual disease at the time of primary cytoreductive surgery from 2001 to 2010 were identified. Extensive UAS procedures and residual SB involvement were recorded. Results: The 219 patients identified with ≤1 cm but visible residual disease had a median OS of 51 months. In this cohort, 127 had extensive UAS performed, and 87 had residual disease involving the SB. Univariate OS analysis was performed. There was no significant difference in OS between patients who did or did not have extensive UAS (45 vs. 52 months, P=0.56), or between patients with or without residual SB disease (45 vs. 51 months, P=0.84). Factors that were significantly associated with OS were age, ASA score, family history, and stage. Conclusions: Patients cytoreduced to ≤ 1 cm but visible residual disease who required UAS did not have a worse OS than those who did not require UAS. OS was similar whether or not residual disease involved the SB. For ovarian cancer patients with disease not amenable to complete gross resection, extensive surgery should still be considered to achieve ≤ 1 cm but visible residual disease status, including cases where the residual disease involves the SB. abstract_id: PUBMED:33842263 Prediction of optimal debulking surgery in ovarian cancer. The mainstay of management for advanced ovarian cancer is maximal cytoreductive surgery followed by chemotherapy. Neoadjuvant chemotherapy (NACT) and interval debulking surgery (IDS) are alternative treatments for patients with comorbidity, poor performance status, or a prediction of suboptimal debulking surgery. The invariable principle in any situation is that leaving no residual disease at the completion of surgery benefits patients with ovarian cancer. Therefore, the prediction of optimal debulking before the treatment of ovarian cancer is of utmost importance. Many studies have reported on the use of serum biomarkers, such as cancer antigen 125 (CA125) or human epididymis protein 4 (HE4), and imaging studies, such as computed tomography (CT), diffusion-weighted magnetic resonance imaging (DW-MRI), and positron emission tomography (PET)/CT, to identify adequate surgical candidates for primary debulking surgery (PDS). Laparoscopy has also been studied as a reliable tool for the prediction of optimal debulking. Here, we review the related literature. abstract_id: PUBMED:29933927 Moving beyond "complete surgical resection" and "optimal": Is low-volume residual disease another option for primary debulking surgery? Objectives: To examine the relationship between volume of residual disease and oncologic outcomes among patients with advanced-stage epithelial ovarian/fallopian tube/primary peritoneal carcinoma undergoing primary debulking surgery (PDS). For patients who did not undergo a complete surgical resection (CSR), a surrogate for volume of residual disease was used to assess oncologic outcomes.
Methods: Medical records of patients with FIGO stage IIIC and IV epithelial ovarian/fallopian tube/primary peritoneal carcinoma undergoing PDS between January 2010 and November 2014 were reviewed. Patient demographics, operative characteristics, residual disease, anatomic site of residual disease and outcome data were collected. Among patients who did not undergo CSR, but had ≤1 cm of residual disease, the number of anatomic sites (single location vs. multiple locations) with residual disease was utilized as a surrogate for volume of residual disease. The effect of residual disease volume on progression-free survival (PFS) and overall survival (OS) was evaluated. Results: Of 240 patients undergoing PDS, 94 (39.2%) had CSR, 41 (17.1%) had ≤1 cm of residual disease confined to a single anatomic location (≤1 cm-SL), 67 (27.9%) had ≤1 cm of residual disease in multiple anatomic locations (≤1 cm-ML) and 38 (15.8%) were sub-optimally (SO) debulked. Median PFS for CSR, ≤1 cm-SL, ≤1 cm-ML and SO-debulked were: 23, 19, 13 and 10 months, respectively (p < 0.001). Median OS for CSR, ≤1 cm-SL, ≤1 cm-ML and SO-debulked were: not yet reached, 64, 50 and 49 months, respectively (p = 0.001). Conclusions: Following PDS, CSR and ≤ 1 cm-SL patients have the best prognosis. In contrast, despite being considered "optimally debulked", ≤1 cm-ML patients have survival similar to those SO-debulked. abstract_id: PUBMED:36761962 Predictors of residual disease after debulking surgery in advanced stage ovarian cancer. Objective: Optimal debulking with no macroscopic residual disease strongly predicts ovarian cancer survival. The ability to predict likelihood of optimal debulking, which may be partially dependent on tumor biology, could inform clinical decision-making regarding use of neoadjuvant chemotherapy. Thus, we developed a prediction model including epidemiological factors and tumor markers of residual disease after primary debulking surgery. Methods: Univariate analyses examined associations of 11 pre-diagnosis epidemiologic factors (n=593) and 24 tumor markers (n=204) with debulking status among incident, high-stage, epithelial ovarian cancer cases from the Nurses' Health Studies and New England Case Control study. We used Bayesian model averaging (BMA) to develop prediction models of optimal debulking with 5x5-fold cross-validation and calculated the area under the curve (AUC). Results: Current aspirin use was associated with lower odds of optimal debulking compared to never use (OR=0.52, 95%CI=0.31-0.86) and two tissue markers, ADRB2 (OR=2.21, 95%CI=1.23-4.41) and FAP (OR=1.91, 95%CI=1.24-3.05) were associated with increased odds of optimal debulking. The BMA selected aspirin, parity, and menopausal status as the epidemiologic/clinical predictors with the posterior effect probability ≥20%. While the prediction model with epidemiologic/clinical predictors had low performance (average AUC=0.49), the model adding tissue biomarkers showed improved, but weak, performance (average AUC=0.62). Conclusions: Addition of ovarian tumor tissue markers to our multivariable prediction models based on epidemiologic/clinical data slightly improved the model performance, suggesting debulking status may be in part driven by tumor characteristics. Larger studies are warranted to identify those at high risk of poor surgical outcomes informing personalized treatment.
abstract_id: PUBMED:31082382 A novel classification of residual disease after interval debulking surgery for advanced-stage ovarian cancer to better distinguish oncologic outcome. Background: Complete surgical resection affords the best prognosis at the time of interval debulking surgery. When complete surgical resection is unachievable, optimal residual disease is considered the next best alternative. Despite contradicting evidence on the survival benefit of interval debulking surgery if macroscopic residual disease remains, the current definition of "optimal" in patients undergoing interval debulking surgery is a largest diameter of disease measuring ≤1.0 cm, independent of the total volume of disease. Objective: To examine the relationship between volume and anatomic distribution of residual disease and oncologic outcomes among patients with advanced-stage epithelial ovarian/fallopian tube/primary peritoneal carcinoma undergoing neoadjuvant chemotherapy then interval debulking surgery. For patients who did not undergo a complete surgical resection, a surrogate for volume of residual disease was used to assess oncologic outcomes. Study Design: Patient demographics, operative characteristics, anatomic site of residual disease, and outcome data were collected from medical records of patients with International Federation of Gynecology and Obstetrics stage IIIC and IV epithelial ovarian cancer undergoing interval debulking surgery from January 2010 to July 2015. Among patients who did not undergo complete surgical resection but had ≤1 cm of residual disease, the number of anatomic sites (single location vs multiple locations) with residual disease was used as a surrogate for volume of residual disease. The effect of residual disease volume on progression-free survival and overall survival was evaluated. Results: Of 270 patients undergoing interval debulking surgery, 173 (64.1%) had complete surgical resection, 34 (12.6%) had ≤1 cm of residual disease in a single anatomic location, 47 (17.4%) had ≤1 cm of residual disease in multiple anatomic locations, and 16 (5.9%) were suboptimally debulked. Median progression-free survival for each group was 14, 12, 10, and 6 months, respectively (P<.001). Median overall survival for each group was: 58, 37, 26, and 33 months, respectively (P<.001). Conclusion: Following interval debulking surgery, patients with complete surgical resection have the best prognosis, followed by patients with ≤1 cm single-anatomic location disease. In contrast, despite being considered "optimally debulked," patients with ≤1 cm multiple-anatomic location disease have a survival similar to suboptimally debulked patients. abstract_id: PUBMED:28546700 Advanced Primary Epithelial Ovarian and Peritoneal Carcinoma-Does Diagnostic Accuracy of Preoperative CT Scan for Detection of Peritoneal Metastatic Sites Reflect into Prediction of Suboptimal Debulking? A Prospective Study. Ovarian cancer is the seventh most common cancer in females worldwide. Optimal debulking is the standard treatment but possible only in 30-85% of advanced stages. Knowing exactly the disease extent preoperatively may predict suboptimal debulking. We analyzed diagnostic accuracy of preoperative CT scan in disease mapping and prediction of suboptimal debulking in a prospective observational study from March 2013 to May 2015 in a tertiary hospital.
Adults below the age of 75 years with ECOG PS-0, 1, 2, clinically/radiologically newly diagnosed stage IIIc epithelial ovarian cancer (EOC), and primary peritoneal carcinoma (PPC) were included. Neoadjuvant chemotherapy recipients were excluded. Preoperative multidetector CT (MDCT) scans showing deposits at 19 predetermined abdominopelvic sites were compared with the same sites seen at laparotomy, and the corresponding accuracies of CT scan were calculated. Primary debulking surgery was done to achieve debulking to nil or less than 1-cm residual disease. Stepwise logistic regression models were used to determine the frequent suboptimal debulking sites and the predictive performance of the clinical and CT scan findings. A total of 36 patients were enrolled. The optimal debulking rate was 50%. The CT scan could detect the disease-bearing sites with overall sensitivity of 68.29%, specificity of 89%, accuracy of 78.07%, and positive and negative predictive values of 99 and 50.1%, respectively. Upon multivariate analysis, bowel mesentery (p 0.011) and omental extension (p 0.025) were associated with suboptimal debulking. CT scan accuracy at these sites (predictive performance) was 86.1%. We identified small bowel mesentery and omental extension (to spleen/stomach/colon) as sites associated with suboptimal debulking. MDCT accurately depicts peritoneal metastases, although sensitivity is reduced in certain areas of significance for optimal debulking. Further validation with a larger number of patients is warranted. abstract_id: PUBMED:33842358 Role of Lymphadenectomy During Interval Debulking Surgery Performed After Neoadjuvant Chemotherapy in Patients With Advanced Ovarian Cancer. Objective: The role of lymphadenectomy in interval debulking surgery (IDS) performed after neoadjuvant chemotherapy (NACT) in advanced ovarian cancer remains unclear. We aimed to investigate the clinical significance of lymphadenectomy in IDS. Methods: We retrospectively reviewed and analyzed the data of patients with advanced ovarian cancer who underwent NACT followed by IDS. Results: In 303 patients receiving NACT-IDS, lymphadenectomy was performed in 127 (41.9%) patients. One hundred and sixty-three (53.8%) patients achieved no gross residual disease (NGRD), and 69 (22.8%) had residual disease < 1 cm, whereas 71 (23.4%) had residual disease ≥ 1 cm. No significant difference in progression-free survival (PFS) and overall survival (OS) was observed between the lymphadenectomy group and the no-lymphadenectomy group in patients with NGRD, residual disease < 1 cm, and residual disease ≥ 1 cm, respectively. The proportions of pelvic, para-aortic and distant lymph node recurrence were 7.9% (10/127), 4.7% (6/127) and 5.5% (7/127) in the lymphadenectomy group, compared with 5.7% (10/176, P = 0.448), 4.5% (8/176, P = 0.942) and 5.1% (9/176, P = 0.878), respectively, in the no-lymphadenectomy group. Multivariate analysis identified residual disease ≥ 1 cm [hazard ratio (HR), 4.094; P = 0.008] and elevated CA125 levels after 3 cycles of adjuvant chemotherapy (HR, 2.883; P = 0.004) as negative predictors for OS. Conclusion: Lymphadenectomy may have no therapeutic value in patients with advanced ovarian cancer who underwent NACT-IDS. Our findings may help to improve the therapeutic strategy for advanced ovarian cancer. More clinical trials are warranted to further clarify the real role of lymphadenectomy in IDS.
abstract_id: PUBMED:29078115 Only complete tumour resection after neoadjuvant chemotherapy offers benefit over suboptimal debulking in advanced ovarian cancer. Objective: The aim of this study was to compare surgical results and survival outcome of advanced ovarian cancer patients who were treated with primary versus interval debulking surgery. Study Design: In this retrospective study stage III and IV ovarian cancer patients who received debulking surgery from 2006 to 2015 were included. Surgical results were described as complete, optimal or suboptimal debulking and chi-square test was used to assess significant differences. Overall survival was measured using Kaplan-Meier curves, the log-rank test and uni- and multivariable Cox regression analyses. Results: Of 146 patients included in the study, 55 patients were treated with primary debulking surgery (PDS) followed by adjuvant chemotherapy and 91 patients received neoadjuvant chemotherapy (NAC) followed by interval debulking surgery (IDS). Complete or optimal debulking (0-10mm of residual disease) was achieved in 76.4% (n=42) of the PDS group and in 79.1% (n=72) of the IDS group. Overall median survival was 38 months for PDS and 31 months for IDS, which was not significantly different (p=0.181). In the IDS group, a significant difference was found in OS between complete and optimal resection (p=0.013). Besides that, no difference in survival outcome was found in the IDS group between patients with optimal or suboptimal debulking (median survival were 20 and 19 months respectively). Conclusion: Complete debulking surgery is of utmost importance, both in case of PDS and IDS. Achieving optimal interval debulking of 1-10mm residual disease did not show any survival benefit over suboptimal interval debulking. abstract_id: PUBMED:1890837 Five-year survival for cisplatin-based chemotherapy versus single-agent melphalan in patients with advanced ovarian cancer and optimal debulking surgery. The purpose of this study was to evaluate 5-year survival and 5-year progression-free survival in previously untreated patients with advanced ovarian cancer treated with single-agent melphalan in which very few patients underwent optimal debulking surgery (less than 2 cm residual) as compared with the patients treated with Cisplatin-based chemotherapy in which most patients underwent optimal debulking surgery. Significant increases in 5-year survival and 5-year progression-free survival were noted as we changed from the melphalan trial, in which only 14% underwent optimal debulking surgery, to PAC-H, in which 57% and the PAC trial in which 90%, respectively, underwent optimal debulking surgery. However, for those patients whose tumors were optimally debulked in the three trials, there were no statistically significant differences in median survival, median progression-free survival, 5-year survival, or 5-year progression-free survival in those patients treated with melphalan, PAC-H, or PAC. Without optimal debulking surgery, Cisplatin-based multiagent chemotherapy offered a small survival advantage. 
These results are similar to those reported by Gruppo Interregionale Cooperativo Oncologico Ginecologia, in which survival curves were identical for all the subgroups of chemotherapy regimens for those patients with residual disease less than 2 cm at the onset of chemotherapy whether they received (1) cyclophosphamide; (2) cyclophosphamide and Adriamycin; (3) cyclophosphamide, Adriamycin, and Cisplatin; (4) cyclophosphamide, Adriamycin, and hexamethylmelamine; (5) Cisplatin and cyclophosphamide; (6) low-dose Cisplatin; (7) high-dose Cisplatin; or (8) carboplatin. abstract_id: PUBMED:29843747 A predictive score for optimal cytoreduction at interval debulking surgery in epithelial ovarian cancer: a two-center experience. Background: Optimal cytoreduction (macroscopic Residual Tumor, RT = 0) is the best predictor of survival in epithelial ovarian cancer (EOC). No consolidated criteria exist to predict optimal surgical resection at interval debulking surgery (IDS). The aim of this study is to develop a predictive model of complete cytoreduction at IDS. Methods: We retrospectively analyzed 93 out of 432 patients with advanced EOC who underwent neoadjuvant chemotherapy (NACT) and IDS from January 2010 to December 2016 in two referral cancer centers. The correlation between clinical-pathological variables and residual disease at IDS has been investigated with univariate and multivariate analysis. A predictive score of cytoreduction (PSC) has been created by combining all significant variables. The performance of each single variable and PSC has been reported and the correlation of all significant variables with progression-free survival (PFS) has been assessed. Results: At IDS, 65 patients (69.8%) had complete cytoreduction with no residual disease (R = 0). Three criteria independently predicted R > 0: age ≥ 60 years (p = 0.014), CA-125 before NACT > 550 IU/dl (p = 0.044), and Peritoneal Cancer Index (PCI) > 16 (p < 0.001). A PSC ≥ 3 has been associated with a better accuracy (85.8%), limiting the number of incomplete surgeries to 16.5%. Moreover, a PCI > 16, a PSC ≥ 3 and the presence of R > 0 after IDS were all significantly associated with shorter PFS (p < 0.001, p < 0.001 and p = 0.004 respectively). Conclusions: Our PSC predicts, in a large number of patients, complete cytoreduction at IDS, limiting the rate of futile extensive surgeries in cases where residual tumor (R > 0) is likely. The PSC should be prospectively validated in a larger series of EOC patients undergoing NACT-IDS.
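For readers who want the arithmetic of the PSC reported in PUBMED:29843747 spelled out: the abstract gives the three dichotomized criteria (age ≥ 60 years, CA-125 before NACT > 550 IU/dl, and PCI > 16) and the PSC ≥ 3 decision threshold, but not the individual point weights. The sketch below therefore assigns one hypothetical point per criterion purely for illustration; it is not the validated score.

```python
# Illustrative sketch of an additive predictive score in the spirit of the PSC
# from PUBMED:29843747. Point weights are hypothetical (the abstract reports the
# criteria and the >=3 threshold, not the weights), so with 1 point per criterion
# the threshold is only met when all three criteria are present.
from dataclasses import dataclass

@dataclass
class Patient:
    age: int                # years
    ca125_pre_nact: float   # CA-125 before neoadjuvant chemotherapy, IU/dl
    pci: int                # Peritoneal Cancer Index

def predictive_score(p: Patient) -> int:
    """Sum one hypothetical point for each risk criterion named in the abstract."""
    return sum([
        1 if p.age >= 60 else 0,
        1 if p.ca125_pre_nact > 550 else 0,
        1 if p.pci > 16 else 0,
    ])

def flags_incomplete_cytoreduction(p: Patient, threshold: int = 3) -> bool:
    """Scores at or above the threshold flag a higher risk of residual tumor (R > 0) at IDS."""
    return predictive_score(p) >= threshold

print(flags_incomplete_cytoreduction(Patient(age=67, ca125_pre_nact=812.0, pci=19)))  # True
print(flags_incomplete_cytoreduction(Patient(age=55, ca125_pre_nact=300.0, pci=10)))  # False
```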
Another study examined the relationship between the volume of residual disease and oncologic outcomes among patients with advanced-stage epithelial ovarian/fallopian tube/primary peritoneal carcinoma undergoing primary debulking surgery (PDS). The study found that patients who did not undergo complete surgical resection (CSR) but had ≤1 cm of residual disease confined to a single anatomic location had better prognosis compared to those with ≤1 cm of residual disease in multiple anatomic locations. This suggests that the volume and distribution of residual disease are important factors to consider, and achieving low-volume residual disease in a single location may be a viable option for PDS (PUBMED:29933927). In the context of interval debulking surgery (IDS) after neoadjuvant chemotherapy, a novel classification of residual disease was proposed to better distinguish oncologic outcomes. The study concluded that patients with complete surgical resection have the best prognosis, followed by those with ≤1 cm single-anatomic location disease. However, patients with ≤1 cm multiple-anatomic location disease had survival similar to suboptimally debulked patients, indicating that the total volume and distribution of residual disease are critical factors in determining outcomes (PUBMED:31082382). Overall, these studies suggest that while achieving no residual disease is the ideal goal, extensive debulking to achieve minimal residual disease (≤1 cm) may still be beneficial and should be considered, taking into account the patient's specific disease characteristics and distribution.
Instruction: Do active surveillance and contact precautions reduce MRSA acquisition? Abstracts: abstract_id: PUBMED:31637429 The Effectiveness of Contact Precautions on Methicillin-Resistant Staphylococcus aureus in Long-term Care Across the United States. Background: Methicillin-resistant Staphylococcus aureus (MRSA) is a common cause of health care-associated infections in long-term care facilities (LTCFs). The Centers for Disease Control and Prevention recommends contact precautions for the prevention of MRSA within acute care facilities, which are being used within the United States Department of Veterans Affairs (VA) for LTCFs in a modified fashion. The impact of contact precautions in long-term care is unknown. Methods: To evaluate whether contact precautions decreased MRSA acquisition in LTCFs, compared to standard precautions, we performed a retrospective effectiveness study (pre-post, with concurrent controls) using data from the VA health-care system from 1 January 2011 until 31 December 2015, 2 years before and after a 2013 policy recommending a more aggressive form of contact precautions. Results: Across 75 414 patient admissions from 74 long-term care facilities in the United States, the overall unadjusted rate of MRSA acquisition was 2.6/1000 patient days. Patients were no more likely to acquire MRSA if they were cared for using standard precautions versus contact precautions in a multivariable, discrete time survival analysis, controlling for patient demographics, risk factors, and year of admission (odds ratio, 0.97; 95% confidence interval, .85-1.12; P = .71). Conclusions: MRSA acquisition and infections were not impacted by the use of active surveillance and contact precautions in LTCFs in the VA. abstract_id: PUBMED:23555568 Do active surveillance and contact precautions reduce MRSA acquisition? A prospective interrupted time series. Background: Consensus for methicillin-resistant Staphylococcus aureus (MRSA) control has still not been reached. We hypothesised that use of rapid MRSA detection followed by contact precautions and single room isolation would reduce MRSA acquisition. Methods: This study was a pre-planned prospective interrupted time series comparing rapid PCR detection and use of long sleeved gowns and gloves (contact precautions) plus single room isolation or cohorting of MRSA colonised patients with a control group. The study took place in a medical-surgical intensive care unit of a tertiary adult hospital between May 21(st) 2007 and September 21(st) 2009. The primary outcome was the rate of MRSA acquisition. A segmented regression analysis was performed to determine the trend in MRSA acquisition rates before and after the intervention. Findings: The rate of MRSA acquisition was 18.5 per 1000 at risk patient days in the control phase and 7.9 per 1000 at-risk patient days in the intervention phase, with an adjusted hazard ratio 0.39 (95% CI 0.24 to 0.62). Segmented regression analysis showed a decline in MRSA acquisition of 7% per month in the intervention phase, (95%CI 1.9% to 12.8% reduction) which was a significant change in slope compared with the control phase. Secondary analysis found prior exposure to anaerobically active antibiotics and colonization pressure were associated with increased acquisition risk. Conclusion: Contact precautions with single room isolation or cohorting were associated with a 60% reduction in MRSA acquisition. 
While this study was a quasi-experimental design, many measures were taken to strengthen the study, such as accounting for differences in colonisation pressure, hand hygiene compliance and individual risk factors across the groups, and confining the study to one centre to reduce variation in transmission. Use of two research nurses may limit its generalisability to units in which this level of support is available. abstract_id: PUBMED:28597809 Discontinuation of contact precautions with the introduction of universal daily chlorhexidine bathing. Contact precautions are a traditional strategy to prevent transmission of methicillin-resistant Staphylococcus aureus (MRSA). Chlorhexidine bathing is increasingly used to decrease MRSA burden and transmission in intensive care units (ICUs). We sought to evaluate a hospital policy change from routine contact precautions for MRSA to universal chlorhexidine bathing without contact precautions. We measured new MRSA acquisition in ICU patients and surveyed for MRSA environmental contamination in common areas and non-MRSA patient rooms before and after the policy change. During the baseline and chlorhexidine bathing periods, the number of patients (453 vs. 417), ICU days (1999 vs. 1703) and MRSA days/1000 ICU days (109 vs. 102) were similar. MRSA acquisition (2/453 vs. 2/457, P = 0·93) and environmental MRSA contamination (9/474 vs. 7/500, P = 0·53) were not significantly different between time periods. There were 58% fewer contact precaution days in the ICU during the chlorhexidine period (241/1993 vs. 102/1730, P < 0·01). We found no evidence that discontinuation of contact precautions for patients with MRSA in conjunction with adoption of daily chlorhexidine bathing in ICUs is associated with increased MRSA acquisition among ICU patients or increased MRSA contamination of ICU fomites. Although underpowered, our findings suggest this strategy, which has the potential to reduce costs and improve patient safety, should be assessed in similar but larger studies. abstract_id: PUBMED:37738565 Are contact precautions "essential" for the prevention of healthcare-associated methicillin-resistant Staphylococcus aureus? The recently updated SHEA/IDSA/APIC practice recommendations for MRSA prevention in acute care facilities list contact precautions (CP) for patients known to be infected or colonized with MRSA as an "essential practice", meaning that it should be adopted in all acute care facilities. We argue that existing evidence on the benefits and harms associated with CP does not justify this recommendation. There are no controlled trials that support broad use of CP for MRSA prevention. Data from hospitals that have discontinued CP for MRSA have found no impact on MRSA acquisition or infection. The burden and harms of CP remain concerning, including the environmental impact of increased gown and glove use. We suggest that CP be included among other "additional approaches" to MRSA prevention that can be implemented under specific circumstances (e.g. outbreaks, evidence of ongoing transmission despite application of essential practices). abstract_id: PUBMED:37390613 Active Surveillance and Contact Precautions for Preventing Methicillin-Resistant Staphylococcus aureus Healthcare-Associated Infections During the COVID-19 Pandemic.
Background: Statistically significant decreases in methicillin-resistant Staphylococcus aureus (MRSA) healthcare-associated infections (HAIs) occurred in Veterans Affairs (VA) hospitals from 2007 to 2019 using a national policy of active surveillance (AS) for facility admissions and contact precautions for MRSA colonized (CPC) or infected (CPI) patients, but the impact of suspending these measures to free up laboratory resources for testing and conserve personal protective equipment for coronavirus disease 2019 (COVID-19) on MRSA HAI rates is not known. Methods: From July 2020 to June 2022, all 123 acute care VA hospitals nationwide were given the rolling option to suspend (or re-initiate) any combination of AS, CPC, or CPI each month, and MRSA HAIs in intensive care units (ICUs) and non-ICUs were tracked. Results: There were 917 591 admissions, 5 225 174 patient-days, and 568 MRSA HAIs. The MRSA HAI rate/1000 patient-days in ICUs was 0.20 (95% confidence interval [CI], .15-.26) for facilities practicing "AS + CPC + CPI" compared to 0.65 (95% CI, .41-.98; P < .001) for those not practicing any of these strategies, and in non-ICUs was 0.07 (95% CI, .05-.08) and 0.12 (95% CI, .08-.19; P = .01) for the respective policies. Accounting for monthly COVID-19 facility admissions using a negative binomial regression model did not change the relationships between facility policy and MRSA HAI rates. There was no significant difference in monthly facility urinary catheter-associated infection rates, a non-equivalent dependent variable, in the policy categories in either ICUs or non-ICUs. Conclusions: Facility removal of MRSA prevention practices was associated with higher rates of MRSA HAIs in ICUs and non-ICUs. abstract_id: PUBMED:25799481 Effect of contact precautions for MRSA on patient satisfaction scores. Contact precautions may have an adverse effect on a patient's hospital experience and the delivery of care. This case-control study compared patient satisfaction scores between 70 patients isolated for MRSA and 139 non-isolated patients. Based on an adjusted analysis, there was no difference in patient satisfaction between the two groups. Age and educational status were found to affect patient satisfaction. abstract_id: PUBMED:33512528 Effectiveness of Contact Precautions to Prevent Transmission of Methicillin-Resistant Staphylococcus aureus and Vancomycin-Resistant Enterococci in Intensive Care Units. Background: Contact precautions for endemic methicillin-resistant Staphylococcus aureus (MRSA) and vancomycin-resistant enterococci (VRE) are under increasing scrutiny, in part due to limited clinical trial evidence. Methods: We retrospectively analyzed data from the Strategies to Reduce Transmission of Antimicrobial Resistant Bacteria in Intensive Care Units (STAR*ICU) trial to model the use of contact precautions in individual intensive care units (ICUs). Data included admission and discharge times and surveillance test results. We used a transmission model to estimate key epidemiological parameters, including the effect of contact precautions on transmission. Finally, we performed multivariate meta-regression to identify ICU-level factors associated with contact precaution effects. Results: We found that 21% of admissions (n = 2194) were placed on contact precautions, with most for MRSA and VRE. We found little evidence that contact precautions reduced MRSA transmission. The estimated change in transmission attributed to contact precautions was -16% (95% credible interval, -38% to 15%).
VRE transmission was higher than MRSA transmission due to contact precautions, but not significantly. In our meta-regression, we did not identify associations between ICU-level factors and estimated contact precaution effects. Importation and transmission were higher for VRE than for MRSA, but clearance rates were lower for VRE than for MRSA. Conclusions: We found little evidence that contact precautions implemented during the STAR*ICU trial reduced transmission of MRSA or VRE. We did find important differences in the transmission dynamics between MRSA and VRE. Differences in organism and healthcare setting may impact the efficacy of contact precautions. abstract_id: PUBMED:32285336 Justice in control of methicillin-resistant Staphylococcus aureus transmission: a fair question to ask? Active surveillance cultures and contact precautions is a strategy to control the transmission of methicillin-resistant Staphylococcus aureus (MRSA) within healthcare facilities. Whether to implement this strategy to routinely screen and isolate inpatients with MRSA in non-outbreak (endemic) settings, or to remove it and use standard infection control precautions only is scientifically and ethically controversial, in view of the potential adverse effects of contact precautions on patients. To support the use of standard precautions only, it has been argued that active surveillance to identify patients who are asymptomatically colonised with MRSA to place them in contact precautions is unjust or unfair to these patients in various ways. This paper will unpack and examine four distinct arguments, which are advanced from a medical ethics or quality improvement ethical framework, for why this is so. Our analysis shows that while these arguments highlight the injustice of current practices, they do not provide strong ethical reasons for justifying the removal of active surveillance and contact precautions to control MRSA transmission and infection. An implication of our arguments is that the ethical frame for evaluating prevention and control strategies for MRSA, a multi-drug resistant bacteria, should shift from healthcare to primarily public health. From a public health ethics perspective, whether a strategy is unjust, or how ethically significant its lack of fairness is, depends on assessing the evidence for its public health effectiveness and necessity in a given setting, and the extent of the harms and burdens patients with MRSA bear when they are on contact precautions, which remain matters of scientific debate or uncertainty. As an ethical consideration in the debate, the chief normative implication of justice is to provide us further reasons to revise current active surveillance-contact precautions practices, and for the need for research and interventions to minimise their potential adverse effects on patients. abstract_id: PUBMED:36869371 Active surveillance testing to reduce transmission of carbapenem-resistant, gram-negative bacteria in intensive care units: a pragmatic, randomized cross-over trial. Background: In intensive care unit (ICU) settings, the transmission risk of carbapenem-resistant, gram-negative bacteria (CRGNB) is high. There is a paucity of data regarding the effectiveness of interventions, including active screening, preemptive isolation, and contact precautions, to reduce transmission of CRGNB. Methods: We conducted a pragmatic, cluster-randomized, non-blinded cross-over study in 6 adult ICUs in a tertiary care center in Seoul, South Korea. 
ICUs were randomly assigned to perform active surveillance testing with preemptive isolation and contact precautions (intervention) or standard precautions (control) during the initial 6-month study period, followed by a 1-month washout period. During a subsequent 6-month period, departments that used standard precautions switched to using interventional precautions and vice versa. The incidence rates of CRGNB were compared between the two periods using Poisson regression analysis. Results: During the study period, there were 2268 and 2224 ICU admissions during the intervention and control periods, respectively. Because a carbapenemase-producing Enterobacterales outbreak occurred in a surgical ICU (SICU), we excluded admissions to the SICU during both the intervention and control periods and performed a modified intention-to-treat (mITT) analysis. In mITT analysis, a total of 1314 patients were included. The acquisition rate of CRGNB was 1.75 cases per 1000 person-days during the intervention period versus 3.33 cases per 1000 person-days during the control period (IRR, 0.53 [95% confidence interval (CI) 0.23-1.11]; P = 0.07). Conclusions: Although this study was underpowered and showed borderline significance, active surveillance testing and preemptive isolation could be considered in settings with high baseline prevalence of CRGNB. Trial registration Clinicaltrials.gov Identifier: NCT03980197. abstract_id: PUBMED:25458743 Value of an active surveillance policy to document clearance of meticillin-resistant Staphylococcus aureus and vancomycin-resistant enterococci amongst inpatients with prolonged admissions. This article reports the impact of an active surveillance policy to identify clearance of meticillin-resistant Staphylococcus aureus (MRSA) or vancomycin-resistant enterococci (VRE) in known colonized inpatients with prolonged admissions in order to discontinue isolation precautions. Amongst 365 colonized patients with hospital admissions exceeding 30 days, clearance rates of 11% for MRSA and 18% for VRE were found after a median of 23 days and 26.5 days, respectively, resulting in a saving of 2152 patient-days of contact precautions over one year. This has proven to be a cost-beneficial policy. Answer: The effectiveness of active surveillance and contact precautions in reducing MRSA acquisition is a subject of ongoing debate, with studies showing mixed results. One study found that the use of active surveillance and contact precautions did not significantly impact MRSA acquisition and infections in long-term care facilities within the VA healthcare system (PUBMED:31637429). Similarly, another study analyzing data from the STAR*ICU trial found little evidence that contact precautions reduced MRSA transmission in intensive care units (PUBMED:33512528). In contrast, a prospective interrupted time series study reported a 60% reduction in MRSA acquisition with the use of contact precautions and single room isolation or cohorting (PUBMED:23555568). Additionally, during the COVID-19 pandemic, Veterans Affairs hospitals that suspended active surveillance and contact precautions for MRSA saw higher rates of MRSA healthcare-associated infections, suggesting that these measures were effective in preventing MRSA transmission (PUBMED:37390613). Another study found no significant difference in MRSA acquisition or environmental MRSA contamination when contact precautions were discontinued in ICUs with the introduction of universal daily chlorhexidine bathing (PUBMED:28597809). 
Moreover, a study on the impact of contact precautions on patient satisfaction scores found no difference between isolated and non-isolated patients, indicating that contact precautions may not adversely affect patient experience (PUBMED:25799481). The debate extends to ethical considerations, with some arguing that the use of active surveillance and contact precautions may be unjust or unfair to patients who are asymptomatically colonized with MRSA (PUBMED:32285336). However, the ethical frame for evaluating prevention and control strategies for MRSA should consider public health effectiveness and necessity, as well as the potential adverse effects on patients. In summary, the evidence on the effectiveness of active surveillance and contact precautions in reducing MRSA acquisition is mixed, with some studies showing benefits and others showing no significant impact. The decision to implement these measures may depend on the specific healthcare setting, the prevalence of MRSA, and the potential benefits and harms associated with their use.
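The interrupted time series in PUBMED:23555568 estimates a change in the monthly trend of MRSA acquisition using segmented regression. As a minimal, self-contained illustration of that general approach, and not the published analysis (which was patient-level and adjusted for colonisation pressure and other covariates), the following sketch fits a Poisson segmented regression to simulated monthly counts with an offset for at-risk patient-days; all data and variable names are hypothetical.

```python
# Illustrative sketch of a segmented (interrupted time series) Poisson regression,
# in the spirit of the analysis described in PUBMED:23555568. Simulated data only.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
months = np.arange(24)                        # 12 control months, then 12 intervention months
post = (months >= 12).astype(int)             # indicator for the intervention phase
months_since = np.where(post == 1, months - 12, 0)

# Simulated acquisitions per month with an extra downward slope after the intervention.
expected = np.exp(3.0 - 0.01 * months - 0.07 * months_since)
acquisitions = rng.poisson(expected)
patient_days = np.full(24, 1000)

df = pd.DataFrame({"acquisitions": acquisitions, "month": months, "post": post,
                   "months_since": months_since, "patient_days": patient_days})

# Segmented regression: baseline trend ("month"), level change ("post"),
# and slope change ("months_since"), with an offset for at-risk patient-days.
model = smf.glm("acquisitions ~ month + post + months_since", data=df,
                family=sm.families.Poisson(),
                offset=np.log(df["patient_days"])).fit()
print(model.params)
```

In this hypothetical setup, the exponentiated coefficient on months_since corresponds to the additional monthly change in the acquisition rate during the intervention phase, analogous to the 7% per month decline reported in the abstract.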
Instruction: Saudi women in academic medicine. Are they succeeding? Abstracts: abstract_id: PUBMED:15573179 Saudi women in academic medicine. Are they succeeding? Objective: The main purpose of this study is to assess the achievements and barriers to advancement for Saudi women in a medical academic setting. Methods: We studied the career progression of female medical graduates who were appointed to an academic position at King Faisal University, Dammam, Kingdom of Saudi Arabia (KSA) between 1982 and 2003 and compared it to that of their male counterparts. The information was collected from the Deanship of admission and registration, the employees and faculty affairs administration office, and self-completed and telephone surveys. Results: The percentage of medical graduates who were appointed to an academic post in the University was 4.4% for females and 4% for males. The females specialized in various fields and progressed equitably with the males in their postgraduate studies. Academic promotion to higher ranks was slower for females in comparison to males. This was related to various factors, including family responsibilities, social strains, lack of family-friendly policies in the institutions, lack of mentoring relationships, and bias against females. Conclusion: Saudi women in academic medicine have succeeded at the junior level. They specialized in various fields and excelled. Their further academic progression needs the support of senior academic staff, the chairs and the institution administration. abstract_id: PUBMED:23670692 International women physicians' perspectives on choosing an academic medicine career. Unlabelled: Concern about recruiting physicians into academic careers is an international issue. A qualitative study with United States (US) women physicians revealed insights into how, when, and why physicians choose an academic career in medicine. The current study explored international women physicians' perspectives on their career choice of academic medicine and determined if different themes emerged. We expanded the 2012 study of US women physicians by interviewing women physicians in Canada, Pakistan, Mexico, and Sweden to gain an international perspective on choosing an academic career. Interviews were thematically analyzed against themes identified in the previous study. Based on themes identified in the study of US physicians, qualitative analysis of 7 international women physicians revealed parallel themes for the following areas: Why academic medicine? Fit; People; Aspects of academic health centre environment. How the decision to enter academic medicine was made? Decision-making style; Emotionality. When the decision to enter academic medicine was made? Practising physician; Fellowship; Medical student. Work-life balance, choosing academic medicine by default, serendipity, intellectual stimulation, mentors, research and teaching were among the areas specifically highlighted. Conclusion: Parallel themes exist regarding how, why, and when US and international women physicians choose academic medicine as a career path. abstract_id: PUBMED:31037545 Mentorship of Women in Academic Medicine: a Systematic Review. Background: Women remain underrepresented in academic medicine, particularly in leadership positions. This lack of women in leadership has been shown to have negative implications for both patient care and educational outcomes.
Similarly, the literature demonstrates that female physicians are less likely to have mentors, despite the proven benefits of mentorship for career advancement. The objective of this review is to identify and describe models of mentorship for women in academic medicine. Methods: We searched PubMed, PsycINFO, Education Resources Information Center, and Cochrane Databases of Systematic Reviews following PRISMA guidelines in June 2017. We included original English language studies that described a mentorship program in the USA that involved academic medical doctorates and that were created for women or provided results stratified by gender. Results: Our search returned 3625 results; 3309 references remained after removal of duplicates. Twenty articles met inclusion criteria. The majority of the programs were designed for junior faculty and used the dyad model of mentoring (i.e., one mentor/one mentee). Frequently cited objectives of these programs were to improve scholarship, promotion, and retention of female faculty. Program evaluations were primarily survey-based, with participant-reported satisfaction being the most frequent measured outcome. Most results showed very high satisfaction. Gender concordance between mentor and mentee did not impact satisfaction. Eight articles reported objective outcomes, including publications, retention, and promotion, and each of these demonstrated an improvement after program implementation. Discussion: Our review suggests that mentorship programs designed for women, regardless of the model, are met with high satisfaction and can help promote and retain women in academic medicine. No clear best practices for mentorship emerged in the literature. Institutions, therefore, can individualize their mentorship programs and models to available resources and goals. These results demonstrate the importance of more widespread implementation of mentorship programs to more effectively facilitate professional development and success of women in academic medicine. abstract_id: PUBMED:38061049 Satisfaction of Women Faculty in Academic Medicine. Purpose: Research about academic medicine women faculty has focused on comparisons of men and women or specific groups who achieved leadership. To better understand the low percentages of women in academic medicine leadership, attention should be paid to the career continuum within genders. Study findings will inform policies and programs to support women in building careers and acquiring leadership positions. Materials and Methods: Association of American Medical Colleges (AAMC) StandPoint Faculty Engagement Survey data are used to describe and compare women assistant, associate and full professors' perceptions of (1) career development and advancement opportunities, and (2) a culture and climate that fosters diversity, equity, and inclusion. Specific similarities and differences with men are highlighted. Results: Fifty-nine percent of women respondents were assistant, 25% associate, and 16% full professors. Associate professors of both genders were the least satisfied on the main measures. Women were less satisfied than men at each career stage across the majority of variables. Among women, fewer than half of full and associate professors, and 52% of assistant professors believe they can express their opinions without fear of retribution. 
While the majority at all ranks (69%-75%) report feeling respected in the workplace, among those who did not, the highest percentage of disrespect based on gender was among associate professors. Conclusions: The perceptions of >7,500 academic medicine women faculty, representing different generations and ranks, underscore the need to broadly address gender inequity and sexism throughout the career continuum. It identifies the mid-career stage as a challenging experience for both men and women. Women, especially at the associate professor rank, remain a critically dissatisfied and underresourced group that is at risk for underutilization and potential exit from academic medicine. All ranks of women need career development and equitable policies to support their sense of belonging and career advancement. abstract_id: PUBMED:34077145 Women empowerment among academic and administrative staff in Saudi universities: A cross-sectional study. Women in Saudi Arabia constitute nearly 50% of the population, but their participation in economic and social activities is far below the kingdom's potential. According to the 2030 vision, women empowerment is an essential requirement for community transformation and development. The study aims to explore women empowerment among academic and administrative staff in Saudi Universities. A cross-sectional research design was conducted at 15 Saudi governmental universities. A multistage cluster sampling technique was followed to select 5587 participants. Data collection took place from April to September 2020. The current study results illustrate statistically significant differences between academic and administrative staff in the total women empowerment score and all of its dimensions (p < 0.05). The majority of academic staff (84.4%) have high personal empowerment compared to 73.7% of the administrative staff. The study concluded that women empowerment is higher among academics compared to administrative staff in Saudi Universities. abstract_id: PUBMED:37227763 Six Practical Strategies to Mentor and Sponsor Women in Academic Medicine. This article focuses on the importance of mentorship and sponsorship for women in academic medicine, including trainees and faculty, and emphasizes the need for flexible and expanded definitions. Both the benefits and potential harms associated with sponsorship are described. There are 6 actionable strategies illustrated that may be added to a multidimensional mentoring model in order to better support women in medicine. abstract_id: PUBMED:10680399 Women in academic medicine. Women now constitute 43% of US medical students, 37% of residents, and 27% of full-time medical school faculty. Less than 11% of women faculty are full professors, however, compared to 31% of men, and these proportions have not changed in more than 15 years. Since the proportion of women reaching the top ranks remains relatively low, the pool of women available for leadership positions in academic medicine is still small. This review article first summarizes recent data on women's representation in academic medicine and then discusses why they are not succeeding at the same pace as men. Reasons include a complex combination of women's choices, sexism, cultural stereotypes, constraints in combining family responsibilities with professional opportunities, and lack of effective mentoring.
Multiple approaches are required to overcome these "cumulative disadvantages," among them improving the gender climate at academic medical centers; the mentoring of women faculty, residents, and students; and skill-building opportunities for women. abstract_id: PUBMED:11043995 Women in academic emergency medicine. Objective: To evaluate the achievement of women in academic emergency medicine (EM) relative to men. Methods: This study was a cross-sectional mail survey of all emergency physicians who were employed at three-fourths full-time equivalent or greater at the 105 EM residency programs in the United States from August 1997 to December 1997. The following information was obtained: demographics, training and practice issues, roles and responsibilities in academic EM, percentage of time spent per week in clinical practice, teaching, administrative and research activities, academic productivity, and funding. Results: Of the 1,575 self-administered questionnaires distributed by the office of the chairs, 1,197 (76%) were returned. Two hundred seventy-four (23%) of the respondents were women, and 923 (77%) were men. There was a significant difference noted between men and women in all demographic categories. The numbers of respondents who were nonwhite were extremely small in the sample and, therefore, the authors are hesitant to draw any conclusions based on race/ethnicity. There was no difference in training in EM between men and women (82% vs 82%, p = 0.288), but a significantly higher proportion of male respondents were board-certified in EM (84% vs 76%, p &lt; 0.002). Women in academic EM were less likely to hold major leadership positions, spent a greater percentage of time in clinical and teaching activities, published less in peer-reviewed journals, and were less likely to achieve senior academic rank in their medical schools. Conclusions: These findings mirror those of most medical specialties: academic achievement of women in academic EM lags behind that of men. The paucity of minority physicians in academic EM didn't permit analysis of their academic achievements. abstract_id: PUBMED:18783488 Recommendations from the Society for Academic Emergency Medicine (SAEM) Taskforce on women in academic emergency medicine. The Society for Academic Emergency Medicine (SAEM) convened a taskforce to study issues pertaining to women in academic emergency medicine (EM). The charge to the Taskforce was to "Create a document for the SAEM Board of Directors that defines and describes the unique recruitment, retention, and advancement needs for women in academic emergency medicine." To this end, the Taskforce and authors reviewed the literature to highlight key data points in understanding this issue and made recommendations for individuals at four levels of leadership and accountability: leadership of national EM organizations, medical school deans, department chairs, and individual women faculty members. The broad range of individuals targeted for recommendations reflects the interdependent and shared responsibility required to address changes in the culture of academic EM. 
The following method was used to determine the recommendations: 1) Taskforce members discussed career barriers and potential solutions that could improve the recruitment, retention, and advancement of women in academic EM; 2) the authors reviewed recommendations in the literature by national consensus groups and experts in the field to validate the recommendations of Taskforce members and the authors; and 3) final recommendations were sent to all Taskforce members to obtain and incorporate additional comments and ensure a consensus. This article contains those recommendations and cites the relevant literature addressing this topic. abstract_id: PUBMED:37192448 Career Advancement Among Women Physicians in Nine Academic Medicine Specialties. Objective: Gender parity lags in academic medicine. We applied the Rank Equity Index (REI) to compare the longitudinal progress of women's academic medicine careers. We hypothesized that women have different rank parity in promotion by specialty based on the proportion of women in the specialty. Materials and Methods: Aggregate data by sex for medical students, residents, assistant professors, associate professors, and professors in nine specialties were obtained from the Association of American Medical Colleges for 2019-2020. Specialties were clustered into terciles based on the proportion of women in the field: upper (obstetrics and gynecology, pediatrics, psychiatry), middle (internal medicine, emergency medicine, anesthesia), and lower (surgery, urology, and orthopedic surgery). We calculated the percentage representation by sex by specialty and rank to calculate REI. Specialty-specific REI comparisons between each rank were performed to assess parity in advancement. Results: Only specialties in the upper tercile recruited proportionally more women medical students to residency training. All specialties advanced women for the resident-to-assistant professor with psychiatry, internal medicine, emergency medicine, anesthesia, urology, and orthopedic surgery that promoted women faculty at rates above parity. No specialty demonstrated parity in advancement based on sex for the assistant professor-to-associate professor or associate professor-to-professor transitions. Conclusion: Gender inequity in advancement is evident in academic medicine starting at the assistant professor-to-associate professor stage, regardless of overall proportion of women in the specialty. This suggests a common set of barriers to career advancement of women faculty in academic medicine that must be addressed starting at the early career stage. Answer: Saudi women in academic medicine have made significant strides, particularly at the junior level. They have specialized in various fields and have shown excellence in their postgraduate studies. However, their progression to higher academic ranks has been slower compared to their male counterparts. This slower advancement has been attributed to a combination of factors, including family responsibilities, social strains, lack of family-friendly policies in institutions, absence of mentoring relationships, and bias against females (PUBMED:15573179). Despite these challenges, Saudi women in academic medicine have achieved higher empowerment compared to administrative staff in Saudi universities, indicating progress in the academic domain (PUBMED:34077145). 
However, to further their academic progression, Saudi women in academic medicine require support from senior academic staff, chairs, and institution administration (PUBMED:15573179). The importance of mentorship and sponsorship is highlighted as a crucial factor for the career advancement of women in academic medicine. Mentorship programs designed for women, regardless of the model, have been met with high satisfaction and can help promote and retain women in academic medicine (PUBMED:31037545). Practical strategies to mentor and sponsor women in academic medicine have been suggested to support women in medicine more effectively (PUBMED:37227763). In conclusion, while Saudi women in academic medicine have succeeded at the junior level and have shown potential for excellence, their further advancement to senior positions remains a challenge. Addressing the barriers to their progression and implementing supportive mentorship programs are essential steps towards ensuring their success in academic medicine.
Instruction: Anatomic predictors of formation of lower caliceal calculi: is it the time for three-dimensional computed tomography urography? Abstracts: abstract_id: PUBMED:18811576 Anatomic predictors of formation of lower caliceal calculi: is it the time for three-dimensional computed tomography urography? Objective: To evaluate the various anatomic factors, using computed tomography urography (CTU), that predispose to lower-pole kidney stone formation on one side compared with the other. Materials And Methods: The study included 51 patients with a solitary lower-pole stone. Lower-pole infundibulopelvic angle (IPA), infundibular width (IW), infundibular length (IL), caliceal volume (CV), and number of minor calices of the affected lower calyx and normal contralateral kidney were measured based on CTU. The IPA was measured according to the methods of Sampaio and Elbahnasy. A comparison was made using multivariate analysis to determine whether any of these measurements predisposed one side to form stones. Results: The mean IPA was 80.9 degrees (Sampaio), and 52.3 degrees (Elbahnasy) on the affected kidney and 87.5 degrees (P = 0.39) and 54 degrees (P = 0.36) on the normal side. The mean IW was 2.6 mm on the affected side and 3.15 mm on the normal side (P = 0.03). The mean IL was 19.7 mm and 15.3 mm (P < 0.001) on the affected and normal kidneys, respectively. Mean CV was 1123 mm³ on the affected side and 286 mm³ on the normal side (P < 0.001). The mean number of minor calices was comparable (n = 3) on both sides with no statistical significance. Using multivariate analysis, only caliceal volume and IW sustained their significance in the final model. Conclusions: CTU is a very useful imaging modality in the diagnosis of intrarenal anatomic variation. Larger caliceal volume and narrower IW are associated with stone formation in the lower calyx. abstract_id: PUBMED:37064393 Dual-energy computed tomography in reducing the effective radiation dose of computed tomography urography in patients with urinary calculi. Background: To evaluate the diagnostic performance of split-bolus single-phase dual-energy computed tomography (DECT) with virtual non-contrast computed tomography (VNCT) compared to three-phase computed tomography (CT) urography in patients with urinary calculi, and to examine the performance of split-bolus single-phase DECT when reducing the effective dose. Methods: A total of 48 patients with abdominal pain or hematuria suggestive of unilateral urinary calculi were enrolled and randomly divided into the experimental and control groups, with 24 cases in each group. Patients in the experimental group underwent split-bolus single-phase DECT to obtain a mixed nephrographic excretory phase. Patients in the control group accepted a single-bolus three-phase CT urography scan (non-contrast, nephrographic phase, and excretory phase). The CT values and the contrast-to-noise ratio (CNR) of 7 segments of the urinary tract were measured and compared between the two groups by using the Mann-Whitney U test. The dose-length product (DLP) and effective dose of each patient were compared between the two groups using an independent t-test. Results: Among all 48 patients, 35 calculi were detected in the experimental group (n=24), and 47 calculi were detected in the control group (n=24). There was no significant difference between the two groups in either the CT value measurements or the CNR.
The mean DLP and mean effective dose of the experimental group were significantly lower than those of the control group, and the effective dose in the experimental group was decreased by 40% compared with the control group. Conclusions: The application of DECT combined with split-bolus nephrographic excretory phase CT urography can reveal the urinary calculi covered by a contrast medium and also reduce the effective dose exposure to patients. abstract_id: PUBMED:26750188 Role of computed tomography urography in the clinical evaluation of upper tract urothelial carcinoma. Intravenous urography has been widely used for the evaluation of upper tract urothelial carcinoma. However, computed tomography urography presently has a higher diagnostic accuracy for upper tract urothelial carcinoma (94.2-99.6%) than intravenous urography (80.8-84.9%), and has replaced intravenous urography as the first-line imaging test for investigating patients with a high risk of upper tract urothelial carcinoma. Although the detection rate for bladder tumors using standard computed tomography urography is not yet high enough to replace cystoscopy, the addition of a 60- to 80-s delayed scan after the administration of contrast material for the whole pelvis improves the detection rate. A drawback to computed tomography urography is the higher radiation dose of 15-35 mSv, compared with a mean effective dose of 5-10 mSv for intravenous urography. Among several approaches to reducing the radiation dose, the use of an iterative reconstruction algorithm is most likely to become an effective solution because of its simplicity. One advantage of computed tomography urography over intravenous urography is its ability to reliably differentiate between upper tract urothelial carcinoma and calculi or blood clots. Computed tomography urography also shows characteristic findings of other benign conditions. These findings, in combination with negative cytology, are very important diagnostic clues for avoiding an unnecessary nephroureterectomy. For the clinical staging, a recent study has reported the high diagnostic accuracy of computed tomography urography with respect to ≥pT3 tumors. The present review shows the current status of computed tomography urography for the evaluation of upper tract urothelial carcinoma. abstract_id: PUBMED:26792577 Predictors for urologic intervention and alternate diagnoses in people having computed tomography urography for suspected renal colic. Objective: The majority of ureteric calculi pass spontaneously and are uncomplicated, yet use of computed tomography urography (CTU) has increased in recent years. This study describes a cohort of ED patients undergoing CTU for renal colic and assesses the predictors of urologic intervention. Methods: A retrospective cohort study enrolled patients who underwent CTU at three Melbourne EDs. Demographic data, clinical assessments, laboratory and radiological findings and interventions were abstracted. Univariate analysis was performed and significant predictors were entered into a multivariate logistic regression model to calculate adjusted odds ratios for associations with urologic intervention. Results: Six hundred and seventeen patients underwent 626 CTUs; mean age was 48 and 67.7% were male. 58.2% of scans found calculi, of which median size was 4 mm. 9.2% of scans revealed an alternate diagnosis, of which 2.7% were acutely important. 14.6% of patients with calculi received an intervention. 
Multivariate analysis found the factors associated with intervention were female sex (OR 3.9, 95% CI 1.8-8.7), proximal calculus site (OR 4.1, 95% CI 1.5-11.7), single kidney (OR 9.0, 95% CI 1.7-49.0) and calculus size > 5 mm (OR 7.0, 95% CI 3.3-14.7). Conclusion: Factors associated with urologic intervention included female sex, single kidney, calculus size >5 mm and proximal calculus. Information on acute alternate diagnoses was uncommon. A prospective study is needed to further clarify clinical parameters that could predict intervention to allow targeting of CTU to those most likely to benefit. abstract_id: PUBMED:16526256 Diagnosis of ureteral calculi using ultrasonography, intravenous urography and unenhanced helical computed tomography. Introduction: The pain in the loins is usually the consequence of renal colic caused by small ureteral calculi. Ultrasonography: Ultrasonography is an efficient tool in the detection of calculi in the pyelocalyx system, but it may not be sufficient for the area of the renal pelvis and ureter, as well as for the ureter and the ureterovesical junction. Intravenous Urography: Radiolucent stones, dilatation of the ureter and of the pyelocalyx system were identified on the urogram, which are common signs of ureter calculosis. Defects in the contrast can be caused by blood clots and papillary or malignant tumors of the ureter, thus causing a differential diagnosis problem. Unenhanced Helical Computed Tomography: Noncontrast spiral CT is useful for detection of calculi as small as 3 mm. It can identify their localization and size without enlargements. The primary and secondary CT signs also confirm the established diagnosis. Conclusion: The noncontrast spiral CT is a highly precise method for assessment of kidney pain. It is a reliable and rapid diagnostic modality for the detection of urinary stones which differentiates them from blood clots, tumors, air and other factors which create a differential diagnosis problem. abstract_id: PUBMED:6385055 Can ultrasound and computed tomography replace high-dose urography in patients with impaired renal function? Ninety-one patients with unexplained impaired renal function were investigated by high-dose urography, ultrasound and computed tomography (CT) without contrast. The aim was to evaluate the role of ultrasound and CT in renal failure, in particular their ability to define renal length and to show collecting system dilatation. In the majority of patients, renal length could be measured accurately by ultrasound. Measurements were less than those at urography because of the absence of magnification. Renal measurement by CT was not a sufficiently accurate indicator of renal length to be of clinical use. Both ultrasound and CT were sensitive detectors of collecting system dilatation: neither technique missed any case diagnosed by urography. However, in the presence of staghorn calculi or multiple cysts, neither ultrasound nor CT could exclude collecting system dilatation. CT was the only technique which demonstrated retroperitoneal nodes or fibrosis causing obstruction. It is proposed that the first investigation when renal function is impaired should be ultrasound, with plain films and renal tomograms to show calculi. CT should be reserved for those patients in whom ultrasound is not diagnostic or in whom ultrasound shows collecting system dilatation but does not demonstrate the cause.
Using this scheme, ultrasound, plain radiography and CT would have demonstrated collecting system dilatation and, where appropriate, shown the cause of obstruction in 84 per cent of patients in this series. Only 16 per cent of patients would have required either high-dose urography or retrograde ureterograms. abstract_id: PUBMED:38092717 Essentials of Computed Tomography Imaging of Hematuria. Hematuria is defined usually as the presence of blood in the urine, either on voiding or in a catheterized specimen. Hematuria is broadly divided into microscopic and gross hematuria and may be symptomatic or asymptomatic. The causes of hematuria include a very wide spectrum of conditions. However, here, we have filtered the causes causing gross hematuria, including calculus, trauma, tumors, vascular, and miscellaneous causes. Plain X-rays of the kidney, ureter, and bladder; ultrasound; intravenous urography; computed tomography (CT); magnetic resonance imaging; retrograde ureterography and pyelography (RGP); cystoscopy; and ureteroscopy are techniques that are useful for diagnosis. In the past, one or a combination of several techniques was used to evaluate hematuria but recently, advances in CT urography mean that it can be used alone for this task. This article briefly reviews the common causes of gross hematuria in adults and their evaluation by CT-based urography. Gross hematuria is evaluated well with CT scan urography which includes an unenhanced scan, the nephrographic phase, and the excretory phase. Unenhanced scans are routinely performed to evaluate the basic parameters such as the size, shape, position, and outline of the kidneys and calculus disease, which is the most common cause of hematuria. Renal parenchymal diseases including masses are best visualized in the nephrographic phase along with other abdominal organs. Delayed excretory phases including the kidneys, ureters, and bladder are useful for detecting urothelial diseases. CT urography's protocol permits evaluations of hematuria through a single examination. abstract_id: PUBMED:9338725 Spiral computed tomography for staghorn calculi. Objectives: To assess the utility of spiral computed tomography (CT) with three-dimensional reconstruction for preoperative planning of percutaneous nephrostolithotomy in patients with complex branched calculi (full staghorns). Methods: Patients with complex branched stones were imaged with spiral CT with three-dimensional reconstruction. These images were compared with standard imaging modalities, including excretory urography and plain radiographs, for planning percutaneous access for nephrostolithotomy. The utility of the scan was evaluated. Results: Ten patients with branched calculi were studied. Anatomic abnormalities were present in 5 patients. Excellent three-dimensional images were obtained in all patients without any complications related to the study. In 1 patient with multiple calculi in a horseshoe kidney, the three-dimensional image indicated a branched stone. The spiral CT scan was not helpful in directing percutaneous access in any patient. In a single patient, residual fragments noted during nephrostolithotomy were located by reference to the spiral CT scan. Conclusions: Spiral CT scans with three-dimensional reconstruction provide three-dimensional imaging of branched renal calculi. This modality provides minimal additional information over that obtained from standard radiographic studies for guiding nephrostolithotomy and cannot be recommended as a routine preoperative study. 
It was helpful in 1 patient to locate a residual fragment. abstract_id: PUBMED:21739909 Multislice computed tomography urography in the diagnosis of urinary tract diseases. Background/aim: Multislice computed tomography (MSCT) has triggered considerable changes in uroradiological imaging. The aim of this study was to establish the place of MSCT urography (MSCTU) in comparison with intravenous urography (IVU) and to determine the sensitivity and specificity of MSCT in the evaluation of urothelial abnormalities. Methods: This prospective study included 120 patients with a high clinical suspicion of urinary tract diseases divided into two groups. Group I consisted of 60 patients with macroscopic hematuria, bladder carcinoma and malignant pelvic tumors after radiotherapy or operation. They underwent both IVU and MSCTU. Group II included 60 patients (≥40 years old) with retroperitoneal and malignant pelvic tumors, complicated pyelonephritis, microscopic hematuria, acute urinary tract obstruction (without visible calculi on unenhanced scans), and they were submitted to computed tomography with an additional scan phase enabling MSCTU. Results: Compared with IVU, MSCTU is more sensitive for the detection of urinary tract diseases (parenchymal changes, renal tumors, urolithiasis, fibrosis) and extraurinary processes. MSCTU is more specific than IVU for renal parenchymal abnormalities, tumors of the excretory system, urolithiasis, bladder tumors, fibrosis and extraurinary diseases. MSCTU is equally sensitive, but more specific for hydronephrosis compared to MSCT. The diagnoses made by the use of MSCTU in patients with macroscopic and microscopic hematuria and with obstruction not caused by stones complied perfectly with operative findings and histological diagnosis. Conclusion: The obtained results support MSCTU as the modality of choice in the diagnostic algorithm of patients with macroscopic hematuria and in the evaluation of microscopic hematuria and unexplained obstruction of the urinary tract. The only remaining role for IVU in our institution is imaging of the upper urinary tract in patients with hematuria under the age of 40. abstract_id: PUBMED:15892817 Planning percutaneous nephrolithotomy using multidetector computed tomography urography, multiplanar reconstruction and three-dimensional reformatting. Objective: To assess a modified technique of multidetector computed tomographic urography (CTU) which can reproducibly and accurately map the pelvicalyceal system (PCS) and complex renal calculi, as such information is essential in choosing the optimal percutaneous approach into the PCS for safe and successful percutaneous nephrolithotomy (PCNL). Patients And Methods: Ten consecutive patients with renal calculi underwent a modified four-detector multislice CTU with frusemide, abdominal compression and subsequent injection with contrast medium. After unenhanced CT of the abdomen, a high-resolution contrast-enhanced scan was taken through the kidneys in the pyelographic phase. Data were analysed using multiplanar reconstruction and three-dimensional (3D) reformatting. Results: In 10 CTUs there were three staghorn, two diverticular, 25 calyceal, two infundibular and two renal pelvic calculi; nine showed posterior calyces and good infundibular anatomy, and provided a good map of the PCS. Seven patients had PCNL, with the remaining three having either primary extracorporeal shock wave lithotripsy or conservative management.
CTU detected stones in all patients and accurately located their relation to the PCS. With reconstructed images, subjectively the 3D imaging provided an advantage over conventional imaging in optimizing nephrostomy placement. Conclusion: CTU with this protocol and post-processing techniques enables an accurate and confident, reproducible prediction of the site, number and size of stones in complex pelvicalyceal anatomy, optimal site(s) for placing the percutaneous track, and potential hazards when placing the track, with no significant increase in the patient's radiation burden. 3D CTU should become the standard imaging method for planning PCNL in selected patients. Answer: The study of anatomic predictors of lower caliceal calculi formation using computed tomography urography (CTU) suggests that it may indeed be time to consider three-dimensional CTU as a valuable tool in this context. The research included in abstract PUBMED:18811576 indicates that CTU is a very useful imaging modality for diagnosing intrarenal anatomic variations. The study found that larger caliceal volume and narrower infundibular width (IW) are associated with stone formation in the lower calyx. These findings were based on measurements such as the lower-pole infundibulopelvic angle (IPA), IW, infundibular length (IL), caliceal volume (CV), and the number of minor calices, which were compared between the affected lower calyx and the normal contralateral kidney. Multivariate analysis confirmed the significance of caliceal volume and IW in stone formation. The use of CTU, particularly with advancements such as dual-energy CT (DECT), has been shown to not only provide diagnostic performance comparable to traditional three-phase CT urography but also to reduce the effective radiation dose to patients (PUBMED:37064393). This suggests that modern CTU techniques, including three-dimensional imaging, can offer detailed anatomical information that is crucial for understanding the predisposition to stone formation while also addressing concerns related to radiation exposure. Moreover, CTU has been established as having higher diagnostic accuracy for various urological conditions, such as upper tract urothelial carcinoma, compared to other imaging modalities like intravenous urography (PUBMED:26750188). This further supports the potential utility of CTU in the evaluation of anatomic factors related to stone formation. In conclusion, the evidence suggests that three-dimensional CTU is a valuable imaging modality for identifying anatomic predictors of lower caliceal calculi formation, offering detailed anatomical insights that can inform clinical decision-making and potentially improve patient outcomes.
Instruction: Understanding the epidemiology of MRSA in Europe: do we need to think outside the box? Abstracts: abstract_id: PUBMED:22695171 Understanding the epidemiology of MRSA in Europe: do we need to think outside the box? Background: The epidemiology of meticillin-resistant Staphylococcus aureus (MRSA) infections, using bacteraemia as a marker, shows a striking geographical pattern in Europe. The prevalence of MRSA is low in Northern European countries, increases into central Europe and reaches its highest levels in the Mediterranean region. This has been attributed to varying levels of implementation of infection control and antibiotic stewardship (ICAS) programmes, but reasons for this variation have not been clearly established. Aim: To investigate the possible impact of national cultural dimensions on the epidemiology of MRSA in Europe. Methods: Median proportions of MRSA bacteraemia were sourced for countries participating in the EARS-Net surveillance network in 2010, and correlated with the national cultural dimension scores of Hofstede et al. Findings: Significant associations were identified between MRSA proportions and the cultural constructs of uncertainty avoidance (UAI), masculinity (MAS) and power distance. Multiple regression models found significant associations for UAI, MAS and short-term orientation (R(2) adjusted = 0.475; P &lt; 0.001). The model was found to be predictive of MRSA trends identified in several European countries between 2006 and 2010. Conclusion: Implementation of ICAS programmes often requires behavioural change. Cultural dimensions appear to be key factors affecting perceptions and values among healthcare workers, which in turn are critical for compliance and uptake. Customizing ICAS initiatives to reflect the local cultural background may improve their chances of success. abstract_id: PUBMED:36475822 Psychosis Outside the Box: A User-Led Project to Amplify the Diversity and Richness of Experiences Described as Psychosis. Despite some progress toward greater inclusion, individuals with personal experience of psychosis are rarely integrated into the training of clinicians or knowledge generation. Their exclusion increases the risk that dominant ways of conceptualizing psychosis primarily reflect second- and third-person observations rather than first-person experiences. Observed only from the outside, the richness, complexity, and depth of experiences falling under the psychosis umbrella are easily lost. The authors describe a project-Psychosis Outside the Box-to solicit, compile, and amplify direct accounts of experiences that have been neglected in mainstream research and pedagogy, including the subjective experiences of visuals, felt presences, alterations of time and space, and "negative symptoms." abstract_id: PUBMED:36704547 MRSA compendium of epidemiology, transmission, pathophysiology, treatment, and prevention within one health framework. Staphylococcus aureus is recognized as commensal as well as opportunistic pathogen of humans and animals. Methicillin resistant strain of S. aureus (MRSA) has emerged as a major pathogen in hospitals, community and veterinary settings that compromises the public health and livestock production. MRSA basically emerged from MSSA after acquiring SCCmec element through gene transfer containing mecA gene responsible for encoding PBP-2α. This protein renders the MRSA resistant to most of the β-lactam antibiotics. 
The continuously increasing prevalence and transmission of MRSA in hospital, community, and veterinary settings pose a major threat to public health. Furthermore, the high pathogenicity of MRSA, driven by a number of virulence factors produced by S. aureus together with antibiotic resistance, helps it to breach host immunity and is responsible for severe infections in humans and animals. The clinical manifestations of MRSA range from skin and soft tissue infections to bacteremia, septicemia, toxic shock, and scalded skin syndrome. Moreover, given the increasing resistance of MRSA to a number of antibiotics, there is a need to explore alternative approaches to limit economic as well as human losses. This review discusses various aspects of MRSA, from emergence, transmission, epidemiology, and pathophysiology to disease patterns in hosts, novel treatment, and control strategies. abstract_id: PUBMED:30209034 Methicillin-Resistant Staphylococcus aureus: Molecular Characterization, Evolution, and Epidemiology. Staphylococcus aureus, a major human pathogen, has a collection of virulence factors and the ability to acquire resistance to most antibiotics. This ability is further augmented by constant emergence of new clones, making S. aureus a "superbug." Clinical use of methicillin has led to the appearance of methicillin-resistant S. aureus (MRSA). The past few decades have witnessed the existence of new MRSA clones. Unlike traditional MRSA residing in hospitals, the new clones can invade community settings and infect people without predisposing risk factors. This evolution continues with the buildup of the MRSA reservoir in companion and food animals. This review focuses on imparting a better understanding of MRSA evolution and its molecular characterization and epidemiology. We first describe the origin of MRSA, with emphasis on the diverse nature of staphylococcal cassette chromosome mec (SCCmec). mecA and its new homologues (mecB, mecC, and mecD), SCCmec types (13 SCCmec types have been discovered to date), and their classification criteria are discussed. The review then describes various typing methods applied to study the molecular epidemiology and evolutionary nature of MRSA. Starting with the historical methods and continuing to the advanced whole-genome approaches, typing of collections of MRSA has shed light on the origin, spread, and evolutionary pathways of MRSA clones. abstract_id: PUBMED:23050349 MRSA--development, distribution and prevention. Multiresistant Staphylococcus aureus (MRSA) is one of the major problems in hospitals, especially because of the shortage of potent antibiotics. In addition, MRSA is increasingly found outside hospitals. Besides the development, pathogenesis, and epidemiology of MRSA, the article gives advice on how to avoid an MRSA infection and how to deal with patients infected with MRSA. abstract_id: PUBMED:37125211 Molecular Epidemiology and Characterization of Multidrug-Resistant MRSA ST398 and ST239 in Himachal Pradesh, India. Aim: Methicillin-resistant Staphylococcus aureus (MRSA) is a common cause of severe and difficult-to-treat infections in humans and animals. We aimed to identify the predominant lineages of methicillin-resistant S. aureus in Himachal Pradesh, India, to understand the genomic epidemiology along with the genotypic and phenotypic characteristics. Methods: We isolated 250 S. aureus strains from two district hospitals in Himachal Pradesh, India. Methicillin-resistant S.
aureus (MRSA) isolates were subjected to MLST, SCCmec typing, and resistance as well as virulence determinants were determined by PCR and sequencing. Bio-typing was also performed for source tracking. Results: A 17.6% (44/250) of isolates were classified as MRSA by both the MRSA detection kit and disc diffusion methods. Antimicrobial Susceptibility Testing of MRSA isolates (n = 44) showed high resistance to oxacillin (77.27%), erythromycin (77.27%), tetracycline (75%), cefoxitin (65.9%), and gentamicin (61.36%), while low resistance was observed for teicoplanin (36.36%), vancomycin and levofloxacin (31.81%) and fusidic acid (18.18%). All isolates were sensitive to linezolid, quinupristin-dulfopristin, dalbavancin, and cefazoline. The SCCmec-II was observed in 20.45% of isolates, SCCmec-I in 11.36%, SCCmec-III in 9%, SCCmec-IV in 40.9% and SCCmec-V in 18.18%. The mecA gene was present in all isolates (n = 44) and 50% also had the vanA gene. 35% of isolates had the lukS-PV/lukf-PV toxin gene and 11.36% had the co-existence of mecA, vanA, and lukS-PV/lukf-PV. The major strain was ST398 (39%) followed by ST239 (27%), ST217 (16%), ST121 (11%), and ST338 (7%). The MRSA isolates produced staphylokinase and β-hemolysis but were negative for bovine plasma coagulation tests. In Conclusion: The predominant MRSA clones in Himachal Pradesh, India, were hospital-associated multi-drug resistant-MRSA ST239 with PVL and community-associated MRSA ST398. abstract_id: PUBMED:33061472 Methicillin-Resistant Staphylococcus aureus (MRSA): One Health Perspective Approach to the Bacterium Epidemiology, Virulence Factors, Antibiotic-Resistance, and Zoonotic Impact. Methicillin-resistant Staphylococcus aureus (MRSA) is a major human pathogen and a historically emergent zoonotic pathogen with public health and veterinary importance. In humans, MRSA commonly causes severe infectious diseases, including food poisoning, pyogenic endocarditis, suppurative pneumonia, otitis media, osteomyelitis, and pyogenic infections of the skin, soft tissues. In the horse, MRSA could cause a localized purulent infection and botryomycosis; in cattle and ewe, localized pyogenic infection and severe acute mastitis with marked toxemia; in sheep, abscess disease resembles caseous lymphadenitis caused by anaerobic strains; in dogs and cats, pustular dermatitis and food poisoning; in pig, exudative epidermatitis "greasy pig disease; in birds, MRSA causes bumble-foot. The methicillin resistance could be determined by PCR-based detection of the mecA gene as well as resistance to cefoxitin. In Egypt, MRSA is one of the important occasions of subclinical and clinical bovine mastitis, and the prevalence of MRSA varies by geographical region. In this review, we are trying to illustrate variable data about the host susceptibility, diseases, epidemiology, virulence factors, antibiotic resistance, treatment, and control of MRSA infection. abstract_id: PUBMED:30555431 Modification Targeting the "Rana Box" Motif of a Novel Nigrocin Peptide From Hylarana latouchii Enhances and Broadens Its Potency Against Multiple Bacteria. Public health is confronting the threat caused by antibiotic resistance and this means new antibacterial strategies must be developed urgently. Antimicrobial peptides (AMPs) have been considered as promising therapeutic candidates against infection in the post-antibiotic era. 
In this paper, we dismissed the significance of "Rana box" in the natural nigrocin-HL identified from skin secretion of Hylarana latouchii by comparing its activity with nigrocin-HLD without the motif. By substituting the "Rana box" sequence with an amidated phenylalanine residue, the natural peptide was modified into a shorter AMP nigrocin-HLM. Activities and toxicities of these two peptides in vitro and in vivo were compared. As a result, nigrocin-HLM not only displayed significantly increased potency against several representative microbes, but also high activity against the antibiotic-resistant methicillin-resistant S. aureus (MRSA, NCTC 12493 and ATCC43300 and several clinical isolates) as evidenced by markedly reduced minimal inhibitory concentration (MIC), minimal bactericidal concentration (MBC), and minimum biofilm eradication concentration (MBEC). More strikingly, nigrocin-HLM exhibited prominent inhibition against MRSA infection in a pneumonia mice model. In addition, the substitution attenuated the toxicity of nigrocin-HLM as evidenced by precipitously decreased hemolytic and cytotoxic activities in vitro, and acute toxicity to mice in vivo. Taken these results into consideration, nigrocin-HLM should be a promising therapeutic candidate for anti-infection. And in addition to dismiss an indispensable role of "Rana box" in maintaining antimicrobial activity of nigrocin-HL, our data provided an inspired strategy for peptide optimization. abstract_id: PUBMED:20204100 Epidemiology of MRSA and current strategies in Europe and Japan. The prevalence of health-care associated infections caused by multi-drug resistant organisms has significantly increased over the past decade. Among these organisms, Methicillin-resistant Staphylococcus aureus (MRSA) plays a prominent and increasing role. Because of consequences for patients and the economic burden in course of prolonged treatment following MRSA infections and additional indirect costs for e.g. isolation or antiseptic treatment, this trend will further damage European health-care systems.In 2006, a workshop was initiated at the 8(th) International Congress of the German Society of Hospital Hygiene held in Berlin. The aim of this workshop was to give an overview of the current situation of MRSA in selected European countries and to elaborate on potential strategies to prevent MRSA-infections and dissemination. A questionnaire encompassing 20 questions addressed topics such as epidemiology, current measures and future prospects was distributed to representatives from various European countries and Japan. A variety of widely different answers was obtained. It was shown that in all countries prevalence of MRSA is on a rising tide. This trend is observable in all European countries, albeit less strong in The Netherlands, Slovenia, France, Austria and Scandinavian countries. It was conclude that prevention strategies in a united and expanding European Community will become of utmost importance and that rapid screening strategies, e.g. PCR, might be of assistance in such an approach. A potential strategy to improve infection control measures could be the requirement of health-insurance providers to sign contracts only with hospitals able to proof having an infection control management in place. abstract_id: PUBMED:35331908 Genomic characterisation of CC398 MRSA causing severe disease in Australia. 
Clonal complex 398 (CC398) livestock-associated methicillin-resistant Staphylococcus aureus (LA-MRSA) has been reported worldwide in a variety of food-animal species. Although CC398 is synonymous with LA-MRSA, community-associated MRSA (CA-MRSA) variants have emerged, including the Panton-Valentine leukocidin (PVL)-positive ST398-V and ST398 single-locus variant ST1232-V, and the PVL-negative ST398-V clones. Using comparative genomic analysis, we determined whether ten CC398 MRSA bacteraemia episodes recently identified in Australia were due to LA-MRSA or CA-MRSA CC398. Isolates were sourced from the Australian Group on Antimicrobial Resistance S. aureus surveillance programme and episodes occurred across Australia. Whole-genome sequencing (WGS) and phylogenetic comparison of the ten CC398 bacteraemia isolates with previously published CC398 MRSA whole-genome sequences identified that the Australian CC398 isolates were closely related to the human-associated II-GOI clade and the livestock-associated IIa clade. The identified CC398 MRSA clones were: PVL-positive ST1232-V (5C2&5), PVL-negative community-associated ST398-V (5C2&5) and livestock-associated ST398-V (5C2&5). Our findings demonstrate the importance of using WGS and comparing the sequences with international sequences to distinguish between CC398 CA-MRSA and LA-MRSA and to determine the isolates' origin. Furthermore, our findings suggest that CC398 CA-MRSA has become established in the Australian community and that ST398-V (5C2&5) LA-MRSA is now widespread in Australian piggeries. Our study emphasises the need for national One Health antimicrobial resistance surveillance programmes to assist in monitoring the ongoing epidemiology of MRSA and other clinically significant antimicrobial-resistant organisms. Answer: To understand the epidemiology of MRSA in Europe, it may indeed be necessary to think outside the box. The epidemiology of MRSA infections in Europe displays a distinct geographical pattern, with lower prevalence in Northern European countries and higher levels in the Mediterranean region. This variation has been attributed to different levels of implementation of infection control and antibiotic stewardship (ICAS) programs, but the reasons for this variation have not been clearly established (PUBMED:22695171). One approach to thinking outside the box is to consider the impact of national cultural dimensions on the epidemiology of MRSA. A study found significant associations between MRSA proportions and cultural constructs such as uncertainty avoidance, masculinity, and power distance. It was suggested that cultural dimensions are key factors affecting perceptions and values among healthcare workers, which in turn are critical for compliance and uptake of ICAS initiatives. Therefore, customizing ICAS initiatives to reflect the local cultural background may improve their chances of success (PUBMED:22695171). Moreover, the molecular characterization, evolution, and epidemiology of MRSA are complex and constantly changing, with new MRSA clones emerging that can invade community settings and infect people without predisposing risk factors. This evolution continues with the buildup of the MRSA reservoir in companion and food animals (PUBMED:30209034). Understanding these dynamics is crucial for developing effective strategies to combat MRSA.
Additionally, the molecular epidemiology and characterization of MRSA strains, such as ST398 and ST239 in Himachal Pradesh, India, indicate the presence of hospital-associated multi-drug resistant MRSA and community-associated MRSA, highlighting the global nature of MRSA epidemiology and the need for international surveillance and control strategies (PUBMED:37125211). In conclusion, to effectively understand and address the epidemiology of MRSA in Europe, it is necessary to consider a wide range of factors, including cultural influences on healthcare practices, the molecular evolution of MRSA strains, and the global context of MRSA transmission and resistance. This holistic approach, which goes beyond traditional epidemiological methods, may provide the insights needed to develop more effective prevention and control measures.
Instruction: Awareness of the effects of postponing motherhood among hospital gynecologists: is their knowledge sufficient to offer appropriate help to patients? Abstracts: abstract_id: PUBMED:26727932 Awareness of the effects of postponing motherhood among hospital gynecologists: is their knowledge sufficient to offer appropriate help to patients? Purpose: The aim of this study is to ascertain the awareness of hospital gynecologists about the effects of woman's age on spontaneous fecundity and on the efficacy of assisted reproduction techniques (ARTs). Methods: One hundred fifty-six gynecologists working in public or private Italian hospitals, without specific experience in reproductive medicine and ART, were administered a multiple-choice answer questionnaire addressing (a) the effect of age on woman's spontaneous fecundity, (b) the tools to estimate the ovarian follicular reserve, and (c) the outcome of ART in women above 40 years. Results: Approximately half of the interviewed gynecologists indicated the woman's age limit for successful reproduction between 44 and 50 years; fertility lifespan was believed to be prolonged by oral contraception, pro-fertility medical treatments, or ART. The correct meaning of serum FSH measurement was known by approximately one third of the interviewed doctors. The effectiveness of ART for women of advanced age was overestimated by half of the gynecologists, especially in case of patients having regular cycles and/or small follicles at ultrasound. Conclusions: Overall, the survey clearly showed that the knowledge of hospital gynecologists about the effects of age on woman's fertility and ART effectiveness is largely insufficient to offer scientifically correct, helpful information to patients. Properly targeted corrections to academic and periodical educational programs for Ob/Gyn specialists are warranted. abstract_id: PUBMED:34687450 "It's hard to become mothers": The moral economy of postponing motherhood in neoliberal Chile. The delay of childbearing is one of the most prominent transformations of contemporary fertility and reproductive patterns. This article provides a novel approach to understanding why women are postponing motherhood and having children later in life. Drawing on 24 life story interviews with women from Santiago de Chile, I argue that the transition to motherhood is shaped by a moral economy in which women postpone childbearing to enable becoming "good" mothers. In a context in which social fertility is being redefined by neoliberalism, intensive mothering, and lone motherhood, I find that women delay childbearing until after achieving professional and financial milestones that allow them to fulfil the normative conditions for having children. These findings suggest that women postpone the transition to motherhood not because they reject childbearing and traditional gender roles, but rather because they aspire to become "good" mothers in a context characterized by institutional precariousness, relational insecurity, and increasing demands on mothering. Through these findings, I challenge prevalent interpretations of why women are having children later in life, extend accounts of the gendered norms of social fertility, and contest the nature of autonomy driving change in women's lives. abstract_id: PUBMED:28031617 Knowledge, awareness and practice of ethics among doctors in tertiary care hospital. Introduction: With the advancement of healthcare and medical research, doctors need to be aware of the basic ethical principles. 
This cross-sectional study is an attempt to assess the knowledge, awareness, and practice of health-care ethics among health-care professionals. Materials And Methods: After taking written informed consent, a standard questionnaire was administered to 117 doctors. No personal information was recorded on the questionnaire so as to ensure the confidentiality and anonymity of participants. Data analysis was done using SPSS version 21 (IBM Corp., Armonk, NY, USA). Results: A statistically significant difference was observed between the opinions of consultants and senior residents (SRs) on issues such as adherence to confidentiality; paternalistic attitude of doctors (doctors should do their best for the patient irrespective of patient's opinion); doctor's decision should be final in case of disagreement; and interest in learning ethics (P < 0.05). However, no difference was reported among them with respect to patient wishes, informing the patient regarding wrongdoing, informing close relatives, seeking consent for children and patients' consent for procedures. Furthermore, no significant difference was observed between the two with respect to the practice of health-care ethics. Surprisingly, the response of clinical and nonclinical faculty did not differ as far as awareness and practice of ethics were concerned. Conclusion: A significant difference was observed in the knowledge, awareness, and practice of ethics among consultants and SRs. Conferences, symposia, and workshops on health-care ethics may act as a means of sensitizing doctors and thus will help to bridge this gap and protect the well-being and confidentiality of the patients. Such an effort may bring about harmonious change in the doctor-patient relationship. abstract_id: PUBMED:36755286 Women's reflections on timing of motherhood: a meta-synthesis of qualitative evidence. Introduction: Fertility declines with increasing age, especially in women. In recent decades women's age at the birth of their first child has risen markedly in many countries, and an increasing number of women do not establish a family until their late twenties to mid-thirties. Although there can be various reasons that couples experience fertility problems, advanced maternal age is the most frequent cause of difficulties with achieving pregnancy. Objective: In this meta-synthesis, we investigated reflections on timing of motherhood in women who have not yet had children. Methods: A systematic literature search of six electronic databases and manual searches of reference lists identified eight qualitative studies published between 2011 and 2018 that focused on women's reflections on timing of motherhood. The studies were assessed with the Critical Appraisal Skills Programme (CASP) quality appraisal tool. The results were synthesized using Noblit and Hare's meta-ethnographic approach as described by Malterud. Findings: An overall theme of 'Timing of motherhood' and four overlapping subthemes were identified: Making a life-changing decision, The right time, Fear of regret, and Plan B. The dilemmas associated with timing of motherhood leave women of reproductive age balancing their priorities and values against a biological deadline for having children naturally or through assisted reproductive technology. Conclusions: Women of reproductive age are aware that they must make a life-changing decision as to if or when to have children, but they consider having children at 'the right time' to be important.
Simultaneously, while some women are reluctant to have children for various reasons, they express fear that waiting too long could result in their regretting not having children later in life. Although women of reproductive age express concern about their ability to achieve pregnancy, they have limited focus on the medical risks associated with postponing motherhood. There is a need to establish preventive health initiatives to support women of reproductive age in their considerations regarding timing of motherhood. Trial Registration Number: PROSPERO: CRD42020175151. abstract_id: PUBMED:36421854 Action Postponing and Restraint Varies among Sensory Modalities. Proactive inhibition is divided into two components: action postponing (AP), which refers to slowing the onset of response, and action restraint (AR), which refers to preventing the response. To date, several studies have reported alterations in proactive inhibition and its associated neural processing among sensory modalities; however, this remains inconclusive owing to several methodological issues. This study aimed to clarify the differences in AP and AR and their neural processing among visual, auditory, and somatosensory modalities using an appropriate experimental paradigm that can assess AP and AR separately. The postponing time calculated by subtracting simple reaction time from Go signal reaction time was shorter in the visual modality than in the other modalities. This was explained by faster neural processing for conflict monitoring induced by anticipating the presence of the No-go signal, supported by the shorter latency of AP-related N2. Furthermore, the percentage of false alarms, which is the reaction to No-go signals, was lower in the visual modality than in the auditory modality. This was attributed to higher neural resources for conflict monitoring induced by the presence of No-go signals, supported by the larger amplitudes of AR-related N2. Our findings revealed the differences in AP and AR and their neural processing among sensory modalities. abstract_id: PUBMED:25641993 Motherhood after the age of 35 in Poland. Postponing motherhood is a widespread phenomenon across developed countries, however only few studies look into very late motherhood in post-socialist countries using individual level data. In this study, I look at the context of the first childbirth in Poland in the midst of the political transformation of 1989. Employing sequence analysis I reconstructed life trajectories of women who experienced the transition to adulthood during the late 1980's and the early 1990's and have just completed their fertility histories. Individual data from the 2011 GGS-PL and the 2011 FAMWELL Survey were used. Comparing paths of mothers' lives, I searched for differences in educational, professional and conjugal careers between women who gave birth before the age of 30 and after the age of 35. The results show how various life careers crisscross over the life course leading women to late motherhood. abstract_id: PUBMED:36756087 Level of Patient Safety Culture Awareness Among Healthcare Workers. Background: Patient safety is a major concern at all levels of the healthcare system, with the primary objective of reducing patient hazards when providing care. The Kingdom of Saudi Arabia has made great progress in improving the health of its citizens over the last few decades, particularly in terms of patient safety. 
Currently, many health-care institutions throughout the world are interested in implementing a safety culture to lower the risk of harm and the number of accidents associated with routine treatment. Purpose: This study aims to assess the level of awareness concerning patient safety culture among health-care providers in hospitals in Riyadh, Saudi Arabia. Methods: This is a descriptive cross-sectional quantitative study that used the Agency for Healthcare Research and Quality's patient safety culture questionnaire. The survey was conducted online and was completed by 204 health-care employees from three distinct hospital settings in Riyadh, Saudi Arabia: government, quasi-government, and private hospitals. Results: The results showed a low level of patient safety culture awareness among health-care workers in the 10 dimensions of patient safety. The mean score was 2.24 out of 5 and the standard deviation was 0.50. Participants assigned the highest mean scores to Organisational Learning and Continuous Improvement (Mean: 2.48, Standard deviation: 0.66), Communication About Error (Mean: 2.44, Standard deviation: 0.70), and Teamwork (Mean: 2.40, Standard deviation: 0.64). Interestingly, the scores for hospital supervisor and managerial positions are significant compared to other clinical positions (p-value = 0.047). Moreover, the patient safety rating and the number of reported events were significantly related to the overall level of awareness (p-value &lt;0.001), (p-value = 0.042), respectively. Conclusion: In Riyadh hospitals, overall patient safety awareness is low, necessitating adequate attention and effective solutions such as encouraging reporting errors, implementing safety huddles and leadership walkarounds, and establishing a patient safety culture. Health-care executives and managers are encouraged to develop safety cultures that promote trust and justice, allowing employees to be candid about their mistakes and failings without retaliation. Moreover, this safety culture should be defined by the following qualities: a just culture, a learning culture, and a reporting culture. abstract_id: PUBMED:37200677 Awareness of symptoms, anticipated barriers and delays to help-seeking among women at higher risk of breast cancer: A UK multicentre study. Women with a family history of breast cancer have an increased lifetime risk of the disease. Delay in symptom presentation can lead to poorer outcomes. Low awareness of breast cancer symptoms and help-seeking barriers have been associated with delay in presentation in the general population. Symptom awareness and help-seeking barriers among women at increased risk of breast cancer are unknown. We conducted analysis of survey data which included women with moderate and high risk of breast cancer from 20 secondary and tertiary care clinics in England (n = 408). Women completed a validated survey assessing breast cancer symptom awareness, barriers to help-seeking and anticipated delay in help-seeking. Women recognised an average of 9.1/11 breast cancer symptoms (SD = 2.1). Nipple rash was the least recognised symptom (51.0%). Women educated to at least degree level had higher awareness than those with lower education (β = 0.14, 95% CI 0.13, 0.99, p = 0.011). Women at lower socioeconomic status (SES) had lower awareness than those at higher SES (β = -0.13, 95% CI -1.09, -0.07, p = 0.027). Women reported several anticipated help-seeking barriers (mean = 4.0/11, SD = 2.8). 
Waiting to see if a symptom will pass was the most commonly reported barrier to help-seeking (71.5%). Most women (376/408; 92.2%) reported that they would seek medical help within 2 weeks of discovering a breast cancer symptom. Interventions to increase awareness of non-lump breast cancer symptoms and reduce help-seeking barriers are needed, with considerations of appropriate reading levels and modalities for women with lower education and SES. abstract_id: PUBMED:36578857 Patients' Awareness of Their Rights and Responsibilities: A Cross-Sectional Study From Al-Ahsa. Introduction Patient rights are an essential part of healthcare practice. In fact, patients are one of the most vulnerable members of society. As a result, improving the rights of patients is considered a priority in medical services. Aim The purpose of this study is to measure the level of patients' awareness of their rights. Subjects and methods This is a cross-sectional study conducted among patients living in Al-Ahsa, Saudi Arabia. A self-administered questionnaire was distributed among Al-Ahsa patients using an online survey. The questionnaire was composed of socio-demographic variables (i.e., age, gender, education, etc.), sources of patient information regarding patients, means of increasing awareness toward patients' rights, and a 15-item questionnaire to measure the awareness about patient rights and responsibilities. Results Among the 295 patients, 59.7% were males and 39% were aged between 31 and 40 years old. The overall mean awareness score was 54.6 (SD 7.44). 53.2% of the patients were categorized as having moderate awareness levels, 44.1% were good and only 2.7% were categorized as having poor levels of awareness. Factors associated with increased awareness were being older, hospital admission, hospital visitation for the last three months, and healthcare providers as the sources of the patient's right information. Conclusion There was sufficient awareness of patient rights and responsibilities in our region. Increasing age, frequent hospital visitation, and education given by healthcare providers could effectively improve awareness of patient rights and responsibilities. A multicenter study is required to shed more light on the awareness of patients regarding their rights and responsibilities. abstract_id: PUBMED:38164240 Assessment of awareness and knowledge regarding nuclear medicine and appropriate use of Nuclear medicine modalities, among medical students and faculty members in two academic medical institutes in North India: A Cross sectional Study. Objectives: Despite significant progress in the field of nuclear medicine, basic nuclear medicine awareness and understanding among clinicians remains unsatisfactory, leading to under utilization of nuclear medicine modalities. To evaluate the awareness and knowledge regarding nuclear medicine and appropriate use of Nuclear medicine modalities, among medical students and faculty members. Method: In this descriptive cross sectional study, a self timer limited objective questionnaire based on Google forms was distributed to the study population and scores obtained by the participants were analyzed. Results: Percent scores range for intern, residency trainees, and senior resident/faculty groups for general awareness were 16-46%, 37-58% and 62-91% and for knowledge and appropriate use were 7-21%, 28-43%, and 35-85% respectively. Overall, 61% of the participants had poor awareness and knowledge regarding nuclear medicine modalities. 
None of the participants had received nuclear medicine exposure or education during their academics or training. Only 49% of the participants considered utilizing nuclear medicine modalities for their patient management. Conclusion: Undergraduate interns and residency trainees had a poor to fair level of awareness and knowledge regarding nuclear medicine. Hence creating more awareness in early stages of their career by incorporating Nuclear medicine basic education in medical undergraduate curriculum is required. The senior residents/faculty members had a moderate to good level of awareness and knowledge but still improvement in their knowledge would lead to a more appropriate and better utilization of nuclear medicine modalities for optimum patient management in a variety of clinical settings. Answer: The study conducted to ascertain the awareness of hospital gynecologists about the effects of a woman's age on spontaneous fecundity and the efficacy of assisted reproduction techniques (ARTs) revealed that their knowledge is largely insufficient to offer scientifically correct, helpful information to patients (PUBMED:26727932). Approximately half of the interviewed gynecologists indicated the woman's age limit for successful reproduction between 44 and 50 years, and many believed fertility lifespan could be prolonged by oral contraception, pro-fertility medical treatments, or ART. However, the effectiveness of ART for women of advanced age was overestimated by half of the gynecologists. The survey showed that the correct meaning of serum FSH measurement, a tool to estimate ovarian follicular reserve, was known by only about one-third of the interviewed doctors. These findings suggest that there is a significant gap in the knowledge of hospital gynecologists regarding the effects of age on women's fertility and the outcomes of ART, especially for women above 40 years. Therefore, properly targeted corrections to academic and periodical educational programs for Ob/Gyn specialists are warranted to improve their ability to provide accurate and helpful information to patients considering postponing motherhood.
Instruction: Does the timing of surgery for proximal humeral fracture affect inpatient outcomes? Abstracts: abstract_id: PUBMED:24925700 Does the timing of surgery for proximal humeral fracture affect inpatient outcomes? Background: Delayed surgical treatment of hip fractures is associated with adverse medical outcomes, but it is unclear whether the same is true for proximal humeral fractures. The purpose of this study was to evaluate the relationship between surgical delay for proximal humeral fracture and inpatient adverse events, in-hospital death, prolonged postoperative stay, and nonroutine discharge. Methods: Of the more than 70,000 patients with an operatively treated proximal humeral fracture identified in the Nationwide Inpatient Sample between 2008 and 2011, 87% underwent surgery within 2 days of admission and 13% underwent surgery 3 days or more after admission. Multivariable logistic regression analyses were performed to evaluate the effect of surgical delay on inpatient outcomes and to identify risk factors associated with late surgery. Results: Surgery 3 days or more after admission for fracture of the proximal humerus had no influence on in-hospital death but was independently associated with inpatient adverse events (odds ratio [OR], 2.1; 95% confidence interval [CI], 2.0-2.2), prolonged postoperative stay (OR, 1.7; 95% CI, 1.7-1.9), and increased nonroutine discharge (OR, 2.7; 95% CI, 2.6-2.9). Risk factors for surgery 3 days or more after admission included advanced age, male sex, Elixhauser comorbidity score, polytrauma, Hispanic race or black race, no insurance coverage, low household income, and weekend admission. Conclusions: Even when comorbidities and complexity are controlled for, delaying surgery for proximal humeral fracture is likely to increase inpatient morbidity, postoperative length of stay, and nonroutine discharge. It appears that avoiding nonmedical delays is advantageous. abstract_id: PUBMED:34985565 Does the surgical treatment of concomitant upper limb fractures affect the outcomes of hip fractures in the elderly population? Introduction: Elderly patients with concomitant upper limb and hip fractures present a management dilemma because upper limb fractures potentially affect rehabilitation outcomes for the hip fracture. This study aims to evaluate whether the site of upper limb fractures and the decision to surgically treat such fractures affect the functional outcome of surgically treated hip fracture patients. Methodology: We retrospectively reviewed 1828 hip fracture patients treated at a single trauma centre over 3 years, of whom 42 with surgically treated hip fractures had concomitant upper limb fractures. Outcome measures, such as length of hospital stay, complications, mortality and readmission rates, were assessed, whilst the functional outcomes were evaluated using the Modified Barthel Index (MBI) on admission, post-operatively and at 6 and 12 months of follow-up. Results: Amongst the 42 patients with surgically treated hip fractures, 31.0% had proximal humerus fractures, 50.0% had wrist fractures, 16.7% had elbow fractures and 2.4% had forearm fractures. 50.0% of these upper limb fractures were treated surgically. There was no difference in complications, inpatient morbidity, readmission rates or the length of hospital stay for patients whose upper limb fractures were surgically treated as compared to those non-surgically treated. 
There was no difference in absolute MBI scores at 6 and 12 months based on the management of upper limb fractures. However, patients with surgically treated wrist fractures had statistically significant higher MBI scores at 6 months as compared to those treated non-surgically. Conclusion: Surgical treatment of concomitant upper limb fractures does not appear to change the outcomes of the hip fractures. Hip fracture patients with surgically treated wrist fractures had better functional outcomes at 6 months compared to those treated non-surgically; however, there was no difference at 12 months. Hip fracture patients with concomitant wrist fractures had better functional outcomes compared to hip fracture patients with proximal humerus fractures. abstract_id: PUBMED:34140139 Acute versus delayed reverse total shoulder Arthroplasty for the management of Proximal Humerus Fractures. Objectives: Proximal humerus fractures (PHF) are common, yet their optimal management remains debated. Reverse total shoulder arthroplasty (rTSA) is an increasingly popular option, particularly for non-reconstructible or osteoporotic fractures. Despite this trend, current literature provides limited guidance with regards to surgical timing and patient selection for rTSA. A trial of non-operative management might be beneficial for many patients who are not clearly indicated for surgery, provided this does not have a major negative impact on results for those who ultimately require rTSA. The purpose of this study was to investigate whether delayed reverse shoulder arthroplasty for fracture (&gt;28 days from injury) is associated with any difference in complication rates or functional outcomes relative to acute surgery. Design: Retrospective cohort study PATIENTS/PARTICIPANTS: 114 consecutive patients who underwent rTSA as the primary management of a PHF at two Level 1 trauma centers and one academic community hospital between 2004 and 2016. Intervention: rTSA as primary management of proximal humerus fracture MAIN OUTCOME MEASUREMENTS: Complications, range of motion, and patient-reported functional outcomes scores (DASH, PROMIS physical function, and EQ-5D) RESULTS: Eighty-two of 114 patients (72%) underwent early surgery. Complex (4-part, head-split, dislocated) fractures were significantly more common in the acutely treated group. There was no significant difference in complications. Overall complication rate was 11.4%. There was a significant difference in DASH score favoring early surgery, with an average score of 22.4 in acutely treated patients versus 35.1 in delayed patients (p = 0.034). There was a non-statistically significant trend towards better PROMIS physical function scores and ROM in the acutely treated group. Conclusion: Delay in performing primary rTSA for management of PHF does not lead to an increase in complication rates but it may come at the cost of worse functional outcomes in patients who ultimately require rTSA. abstract_id: PUBMED:38312897 Timing of Surgery for Proximal Humeral Fracture Treated with Shoulder Hemiarthroplasty, Best Results with Surgery Within 2 Weeks. Background: Preoperative delay may affect the outcome of proximal humerus fractures treated with shoulder hemiarthroplasty. There is currently no consensus for the recommended preoperative time interval. The aim was to examine how the time to surgery with shoulder hemiarthroplasty after a proximal humerus fracture affected the patient-reported outcome. 
Methods: 380 patients with proximal humerus fractures treated with shoulder hemiarthroplasty recorded in the Swedish Shoulder Arthroplasty Registry were included. Three self-reporting outcome instruments were used at follow-up after 1-5 years: a shoulder-specific score, the Western Ontario Osteoarthritis of the Shoulder index (WOOS), the EuroQol-5 Dimension index (EQ-5D), and subjective patient satisfaction assessment. Results: The preoperative delay had a negative impact on the WOOS, EQ-5D, and patient satisfaction level (p < 0.01). The best result, measured with WOOS at a minimum 1-year follow-up, was found when surgery was performed 6-10 days after the reported date of fracture. WOOS% at 8-14 days was 69.4% (± 24.2). A delay of more than 10 days was shown to be correlated with poorer outcomes. WOOS% at 15-60 days was 55.8% (± 25.0) and continued to decrease. Conclusion: The current recommendation in Sweden to perform shoulder hemiarthroplasty within 2 weeks after sustaining a proximal humerus fracture is considered valid. abstract_id: PUBMED:26542867 Timing of surgery for open reduction and internal fixation of displaced proximal humeral fractures. Background: Open reduction and internal fixation is one established method for treatment of displaced fractures of the proximal humerus. However, the timing of surgery and its effect on complications have not yet been investigated in the literature. Hence, the aim of this study was to analyze the occurrence of complications following locked plating of proximal humeral fractures when surgery was delayed in comparison to early intervention. Methods: Between February 2002 and November 2010, 497 patients with displaced proximal humeral fractures were treated by open reduction and locked plating. 329 patients were available for follow-up with a minimum of 12 months after surgery. Outcome analysis included radiographic evidence of loss of fixation (> 10° of secondary displacement), screw-cutout and avascular head necrosis. Outcomes were analyzed with regard to age, gender and fracture pattern and were compared between the time intervals in which the primary surgery had been conducted: early intervention (< 48 h), timely scheduled for surgery (3-5 days) and delayed intervention (> 5 days). Results: Of 329 patients (68.4% women; median age at time of surgery: 69.9 years, 95% Confidence Interval (CI) 68.2, 71.2), the median time between the fracture incident and surgical intervention was 3.2 days (95% CI: 3.1, 3.3). Surgery was performed in 2-part fractures at a median of 3.3 days (95% CI: 3.2, 3.4) after trauma, in 3-part fractures after 3.3 days (95% CI: 3.1, 3.4), in 4-part fractures after 2.9 days (95% CI: 2.8, 3.0), in head-split type fractures after 2.2 days (95% CI: 2.0, 2.4) and in dislocation type fractures 0.8 days after trauma (95% CI: 0.7, 0.9, p = 0.40). Loss of fixation was observed in 12.8% (n = 42 patients), of which screw cutout was evident in 4.9% (n = 16) and avascular head necrosis was diagnosed in 6.8% of cases (n = 20). Patients in whom a complication was observed were treated at a median of 2.5 days after trauma (95% CI, 1.8, 3.2); in comparison, patients without evidence of complications were treated at a median of 3.2 days (95% CI, 2.8-3.8, p = 0.35). The odds ratio regarding the occurrence of complications for patients treated < 48 hours was 0.924, for patients in whom surgery was performed 3-5 days after the incident the odds ratio was 0.836, and in patients treated > 5 days the odds ratio was 1.637.
Conclusions: Loss of fixation following open reduction and internal fixation of proximal humeral fractures was not more frequently observed when surgery was performed 3-5 days after the incident in comparison to early intervention (< 48 h). However, a delay of intervention > 5 days is related to a significant increase in complications. Thus, if open reduction and internal fixation is indicated, reconstruction of the proximal humerus should be performed within 5 days of the fracture event. In head-split and dislocated fracture types, anatomic reconstruction completed within 48 h of the incident may be beneficial with regard to the risk of avascular necrosis. abstract_id: PUBMED:38449969 Satisfactory Short-Term Outcomes of Reverse Shoulder Arthroplasty for Complex Three- and Four-Part Fractures of the Humeral Head in Octogenarians. Background: Proximal humeral fractures with severe comminution and poor bone quality are among the most common injuries in the elderly population. Reverse shoulder arthroplasty (RSA) has been widely used to manage complex three- and four-part humeral head fractures. The purpose of the present study was to report the results of this technique in the demanding population of octogenarians. Materials And Methods: Twenty-six patients above the age of 80 years were included in the study and followed up for a minimum of one year. To assess the functional outcomes, the postoperative range of motion (ROM), the Constant score, the visual analog scale for pain, and the Disabilities of the Arm, Shoulder and Hand score (DASH) were measured at 6 and 12 months. Radiological assessment and potential complications were also recorded. Results: The mean age of the study population was 81.9 years (81-86) at the time of surgery. There was a statistically significant improvement in all outcomes over the follow-up intervals. Shoulder ROM was 125.7° for flexion, 98.2° for abduction, 42.2° for internal rotation, and 43.2° for external rotation at 12 months. The mean Constant, DASH, and VAS scores at the last follow-up were 61.3, 31.9, and 0.5, respectively. Reported complications included one superficial surgical site infection. Conclusion: RSA is a safe and reliable surgical option with satisfactory outcomes to manage complex three- and four-part fractures of the humeral head, as it can provide prompt pain relief and function in octogenarians. abstract_id: PUBMED:38428478 Reverse Total Shoulder Arthroplasty for Acute Proximal Humeral Fracture Has Comparable 10-Year Outcomes to Elective Indications: Results From the New Zealand Joint Registry. Hypothesis And Background: Recently, the indication of reverse total shoulder arthroplasty (RTSA) has expanded beyond rotator cuff arthropathy to include treatment of complex acute proximal humeral fracture (PHF). Limited previous studies have compared the long-term clinical and functional outcomes of patients undergoing RTSA for PHF versus elective indications for degenerative conditions. The purpose of this study was to compare implant survivorship, reasons for revision and functional outcomes in patients undergoing RTSA for acute PHF with those undergoing elective RTSA in a population-based cohort study. Methods: We analyzed prospectively collected data from the New Zealand Joint Registry from 1999 to 2021 and identified 6,862 patients who underwent RTSA.
Patients were categorized by preoperative indication, including PHF (10.8%), rotator cuff arthropathy (RCA) (44.5%), osteoarthritis (OA) (34.1%), rheumatoid arthritis (RA) (5.5%) and old traumatic sequelae (5.1%). Revision-free implant survival and functional outcomes (Oxford Shoulder Score (OSS) at 6-months, 5- and 10-years follow-up) were adjusted by age, sex, American Society of Anesthesiologists (ASA) class and surgeon experience and compared. Results: Revision-free implant survival at 10 years for RTSA for PHF was 97.3%, compared to 96.1%, 93.7%, 92.8% and 91.3% for OA, RCA, RA and traumatic sequelae, respectively. When compared with RTSA for PHF, the adjusted risk of revision was significantly higher for traumatic sequelae (hazard ratio [HR] = 2.3; p = 0.023) but not for other elective indications. The most common reason for revision in the PHF group was dislocation/instability (42.9%), which was similar to the OA (47.6%) and traumatic sequelae (33.3%) groups. At 6 months post-surgery, OSS were significantly lower for the PHF group compared to RCA, OA and RA groups (31.1 vs. 35.6, 37.7, 36.5, respectively, p &lt; 0.001), and similar to traumatic sequelae (31.7, p = 0.431). At 5 years, OSS were only significantly lower for PHF compared to OA (37.4 vs 41.0, p &lt;0.001) and there was no difference between PHF and other groups. At 10 years, there were no significant differences between groups. Conclusions: RTSA for PHF demonstrated reliable long-term survivorship and functional outcomes compared to elective indications. Despite lower functional outcomes in the early postoperative period for the PHF group, implant survivorship was similar in patients undergoing RTSA for the primary indication of acute PHF compared to RCA, OA and RA; and superior compared to the primary indication of traumatic sequelae. abstract_id: PUBMED:34106339 Outcomes following fracture fixation with the Equinoxe® proximal humerus plate: an improvement over PHILOS®? Introduction: The purpose of this study is to compare patient outcomes between the Equinoxe® (Exactech, Gainesville, Fla) proximal humerus locking plate and the PHILOS® (Synthes, Paoli, PA) proximal humerus locking plate. Methods: Two hundred and seventy-one patients with a displaced proximal humerus fracture presented to our academic medical center between February 2003 and October 2020. Functional outcomes assessed included the Disabilities of the Arm, Shoulder, and Hand questionnaire and shoulder range of motion. Radiographs were utilized to determine fracture healing and development of posttraumatic osteoarthritis or osteonecrosis. Results: Overall, 108 Equinoxe® and 87 PHILOS® patients treated by a single surgeon with complete clinical, functional, and radiographic follow-up were included in the study. Demographics were similar between groups. One hundred and eight Equinoxe® patients healed at a mean time to union by 3.7 ± 2.2 months, and 86 PHILOS® patients healed (p = 0.24) by 4.1 ± 2.3 months after surgery (p = 0.31). Shoulder external rotation was greater in Equinoxe® patients by 7 degrees (p = 0.044), and forward elevation was greater by 16 degrees (p = 0.005) at one-year follow-up. DASH scores were similar between patients at the 3-, 6-, and 12-month follow-up (p = 0.86, p = 0.77, p = 0.64). Fewer Equinoxe® patients experienced complications (p = 0.043). Conclusion: Fixation of a proximal humerus fracture can safely be performed with both Equinoxe® and PHILOS® proximal humerus locking plates. 
Patients fixed with Equinoxe® plates achieved a slightly greater degree of external rotation and forward elevation, and experienced fewer complications. This difference may be due to surgeon experience rather than the implant itself. abstract_id: PUBMED:26665249 Analysis of Predictors and Outcomes of Allogenic Blood Transfusion After Shoulder Arthroplasty. In shoulder arthroplasty, patients often receive postoperative blood transfusions. Studies of predictors of allogeneic blood transfusion (ABT) in these patients have been limited by sample size. We conducted a study to identify predictors of ABT in patients undergoing shoulder arthroplasty and to evaluate the effect of ABT on postoperative outcomes, including inpatient mortality, adverse events, prolonged hospital stay, and nonroutine discharge. Using the Nationwide Inpatient Sample, we stratified an estimated 422,371 patients who presented for shoulder arthroplasty between January 1, 2002, and December 31, 2011, into total shoulder arthroplasty (59.3%) and hemiarthroplasty (40.7%) cohorts, and then subdivided these cohorts into patients who received blood transfusions and those who did not. Patients who received ABTs were older, female, and nonwhite and had Medicare or Medicaid insurance. Many had a primary diagnosis of proximal humerus fracture. Those who received ABT were more likely to experience adverse events or a prolonged hospital stay and were more often discharged to a nursing home or an extended-care facility. The 5 most significant predictors of ABT in a population of 422,371 patients who underwent shoulder arthroplasty were fracture, fracture nonunion, deficiency anemia, coagulopathy, and avascular necrosis. Given these findings, it is important to identify at-risk patients before surgery in order to provide education and minimize risk. abstract_id: PUBMED:35050410 Patient obesity is associated with severity of proximal humerus fractures, not outcomes. Purpose: The purpose of this study is to evaluate the effect of obesity on the outcome of operatively treated proximal humerus fractures. Methods: Between December 2003 and October 2020, 240 patients with proximal humerus fractures requiring surgery were prospectively followed and classified according to the international AO/Orthopedic Trauma Association (AO/OTA) and Neer classifications. Patients' body mass indexes (BMI) were calculated and used to identify two groups, BMI ≥ 30 kg/m² (obese) and < 30 kg/m² (non-obese). Independent t tests were used for statistical analysis of continuous variables and χ² tests for categorical variables. Regression analysis was performed to determine whether BMI was a predictor of fracture pattern severity as determined by the AO/OTA classification. Results: Overall, 223 patients who sustained proximal humerus fractures were analyzed. Patient age at time of injury was 60.5 ± 13.7 years. There were 67 AO/OTA 11A, 79 AO/OTA 11B, and 77 AO/OTA 11C fracture types. Seventy-two patients (32.3%) were obese. No significant differences were seen between groups in regard to demographic variables, Neer classification, or functional and clinical outcomes as determined by DASH scores and shoulder ROM, respectively. Statistical analyses confirmed that obesity is associated with more severe fracture patterns of the proximal humerus as categorized by the AO/OTA classification. An independent t test confirmed that BMI was significantly higher in the complex fracture group based on the AO/OTA classification (p = 0.047).
Regression analysis also demonstrated that age (p = 0.005) and CCI (p = 0.021) were predictors of more severe fractures, while BMI approached significance (p = 0.055) based on the AO/OTA classification. Conclusion: A significantly higher incidence of complex proximal humerus fracture patterns is observed in patients with higher body mass indexes based on the AO/OTA classification. Age and CCI are also associated with more severe fracture patterns of the proximal humerus as determined by the AO/OTA classification. No differences were seen in outcomes or complication rates between obese patients and non-obese patients. Level Of Evidence: Level III. Answer: Yes, the timing of surgery for proximal humeral fracture does affect inpatient outcomes. Surgery performed 3 days or more after admission for a fracture of the proximal humerus is independently associated with inpatient adverse events, prolonged postoperative stay, and increased nonroutine discharge, even when controlling for comorbidities and complexity (PUBMED:24925700). Delaying surgery for proximal humeral fracture is likely to increase inpatient morbidity, postoperative length of stay, and nonroutine discharge, suggesting that avoiding nonmedical delays is advantageous (PUBMED:24925700). Additionally, delayed reverse total shoulder arthroplasty for fracture (more than 28 days from injury) does not lead to an increase in complication rates but may result in worse functional outcomes in patients who ultimately require reverse total shoulder arthroplasty (PUBMED:34140139). Furthermore, the best patient-reported outcomes for proximal humerus fractures treated with shoulder hemiarthroplasty were found when surgery was performed 6-10 days after the reported date of fracture, with outcomes decreasing with delays of more than 10 days (PUBMED:38312897). In the context of open reduction and internal fixation, surgery performed within 5 days of the fracture event is recommended, as a delay of more than 5 days is related to a significant increase in complications (PUBMED:26542867). For complex three- and four-part fractures of the humeral head in octogenarians, reverse shoulder arthroplasty is a safe and reliable surgical option with satisfactory outcomes (PUBMED:38449969). In summary, the timing of surgery for proximal humeral fractures is an important factor that can influence inpatient outcomes, complication rates, and functional recovery. Early surgical intervention within a specific timeframe is generally associated with better outcomes and fewer complications.
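The studies summarized above report their timing effects mainly as odds ratios (for example, OR 2.1 with 95% CI 2.0-2.2 for inpatient adverse events after delayed surgery, and odds ratios of 0.924, 0.836 and 1.637 across the three timing groups in the ORIF series). As a minimal illustration of where such figures come from, the Python sketch below computes an unadjusted odds ratio with a Wald 95% confidence interval from a 2x2 table; the patient counts are invented for illustration, and the adjusted odds ratios quoted in the papers would instead come from exponentiating the coefficients of a multivariable logistic regression model.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and Wald 95% CI from a 2x2 table.

    a: exposed with outcome      b: exposed without outcome
    c: unexposed with outcome    d: unexposed without outcome
    """
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, (lo, hi)

# Hypothetical counts: complications vs. no complications,
# stratified by delayed (>= 3 days) vs. early (< 3 days) surgery.
delayed_with, delayed_without = 120, 280
early_with, early_without = 150, 950

or_, (lo, hi) = odds_ratio_ci(delayed_with, delayed_without,
                              early_with, early_without)
print(f"OR = {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```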
Instruction: Increased lymph node evaluation with colorectal cancer resection: does it improve detection of stage III disease? Abstracts: abstract_id: PUBMED:19620539 Increased lymph node evaluation with colorectal cancer resection: does it improve detection of stage III disease? Hypothesis: Evaluation of 12 or more lymph nodes (LNs) with colorectal cancer (CRC) resection may not improve detection of stage III disease. Design: Retrospective review after intervention. Setting: Community teaching hospital. Patients: We evaluated 701 consecutive operative CRC cases ascertained from our Cancer Registry. Intervention: Patients undergoing resection before (n = 553) a multidisciplinary initiative emphasizing the importance of LN counts were compared with those undergoing operation afterward (n = 148). Main Outcome Measures: Number of LNs evaluated, proportion of patients with stage III disease, and proportion of patients with N1 vs N2 disease. Results: Demographic, tumor, and treatment variables were similar for both groups, except for younger age, fewer white patients, and more laparoscopic resections in the late period. Lymph node counts increased from a mean (SEM [median]) of 12.8 (0.3 [12]) to 17.3 (0.7 [16]) (P &lt; .001), with 53.0% of the early vs 71.6% of the late patients having at least 12 LNs examined. The proportion diagnosed as having stage III CRC was 204 of 553 (36.9%) for the early group vs 48 of 148 (32.4%) for the late group (P = .31). Among patients with positive LNs, the distribution of N1 and N2 disease was unchanged (early, 50.5% N1 and 49.5% N2; late, 54.2% N1 and 45.8% N2; P = .54). Conclusions: Increased LN retrieval does not identify a greater number of patients with stage III CRC nor does it increase the proportion of patients with positive LNs with N2 disease. Our data suggest that harvest of at least 12 LNs as a quality or performance measure appears unfounded. abstract_id: PUBMED:23833661 Evaluation of preoperative C-reactive protein aids in predicting poor survival in patients with curative colorectal cancer with poor lymph node assessment. Lymph node status is the most significant prognostic factor of colorectal cancer. However, there is a risk of disease understaging if the extent of lymph node assessment is sub-optimal. Preoperative C-reactive protein (CRP) is known to be a useful tool in predicting postoperative outcomes in patients with colorectal cancer. We retrospectively evaluated whether CRP adds to prognosis information in stage I-III colorectal cancer patients with poor lymph node assessment. In stages I-III, multivariate analysis revealed that CRP-positive status and advanced T-stage were factors that independently affected survival. In stage III, univariate analysis revealed that lymph node number retrieval and lymph node ratio were factors that affected survival. However, CRP positivity was the only independent factor for survival. CRP positivity did not predict poor prognosis in stage II or III patients with adequate lymph node retrieval. By contrast, the prognosis of CRP-positive patients was poorer than that of CRP-negative patients in stage II and III, with inadequate lymph node retrieval. CRP is an independent prognostic marker in patients with stage I-III, II or III colorectal cancer. The evaluation of CRP may provide useful information on prognosis in curative patients with an inadequate examination of lymph nodes. abstract_id: PUBMED:22313594 Prognostic value of the lymph node ratio in stage III colorectal cancer. 
The nodal stage of colorectal cancer is based on the number of positive nodes. It is inevitably affected by the number of removed lymph nodes, but lymph node ratio can be unaffected. We investigated the value of lymph node ratio in stage III colorectal cancer in this study. The clinicopathologic factors and follow-up data of 145 cases of stage III colorectal cancer between January 1998 and December 2008 were analyzed retrospectively. The Pearson and Spearman correlation analyses were used to determine the correlation coefficient, the Kaplan-Meier method was used to analyze survival, and the Cox proportional hazard regression model was used for multivariate analysis in forward stepwise regression. We found that lymph node ratio was not correlated with the number of removed lymph nodes (r = -0.154, P = 0.065), but it was positively correlated with the number of positive lymph nodes (r = 0.739, P &lt; 0.001) and N stage (r = 0.695, P &lt; 0.001). Kaplan-Meier survival analysis revealed that tumor configuration, intestinal obstruction, serum carcinoembryonic antigen (CEA) concentration, T stage, N stage, and lymph node ratio were associated with disease-free survival of patients with stage III colorectal cancer (P &lt; 0.05). Multivariate analysis showed that serum CEA concentration, T stage, and lymph node ratio were prognostic factors for disease-free survival (P &lt; 0.05), whereas N stage failed to achieve significance (P = 0.664). We confirmed that lymph node ratio was a prognostic factor in stage III colorectal cancer and had a better prognostic value than did N stage. abstract_id: PUBMED:19330780 Nodal stage of stage III colon cancer: the impact of metastatic lymph node ratio. Purpose: We assessed which classification of lymph node metastasis better predicted outcomes in patients with colorectal carcinoma. Methods: We identified 318 patients (176 men) with stage III colon cancer who underwent curative resection. The number of LNs dissected, LNR, and disease-free survival time, were analyzed. Lymph node disease was stratified by the American Joint Committee on Cancer staging and LNR, with the latter categorized into groups with LNR &lt;0.059 (n = 67), 0.059-0.23 (n = 171), and &gt;0.23 (n = 80). Median follow-up time was 37 (range, 1-122) months. Results: LNR significantly increased with the number of metastatic LNs (P &lt; 0.0001). Three-year disease-free survival (DFS) rates differed significantly in the three LNR groups. Within each TNM stage, 3-year DFS rates differed according to LNR, but, within each LNR subgroup, 3-year DFS did not differ according to TNM stage. When both TNM stage and LNR subgroup were considered, 3-year DFS was stratified into four groups, which differed significantly (P &lt; 0.0001). Considering number of retrieved lymph nodes, this stratification was not found when &lt;12 lymph nodes retrieved. Conclusions: Re-stratified lymph node staging, reflecting both LNR and TNM stage, can predict survival in patients with LN-positive colon cancer, especially when more than 12 lymph nodes harvested. abstract_id: PUBMED:28972442 Predictive Value of the Number of Harvested Lymph Nodes and Cut-Off for Lymph Node Ratio in the Prognosis of Stage II and III Colorectal Cancer Patients. Purpose/aim: The appropriate staging of colorectal cancer requires at least 12 lymph nodes to be sampled. We evaluated whether lymph node sampling (LNS) and lymph node ratio (LNR) can predict the prognosis of stage II-III patients. 
Materials And Methods: This is a retrospective study on 432 patients classified in LNS ≥12 and LNS &lt;12. Disease-free survival (DFS) was computed using the Kaplan-Meier method. We stratified stage III patients into 4 quartiles base on LNR values. To determine the optimal LNR cut-off, receiver operating characteristic (ROC) curve analysis was performed. Results: There was a positive association between the number of lymph node sampled and the number of metastatic lymph nodes (p &lt; 0.01). Among stage II patients, the DFS was 81% for LNS ≥ 12 and 72% for LNS &lt; 12 (p = 0.158). Among stage III patients, the DFS was 58% (p &lt; 0.001). We found a significant association between LNR quartiles and relapse in stage III patients but only in the LNS ≥ 12 group. ROC curve analysis indicated an ideal LNR cut-off value at 0.194 (sensitivity 65% and specificity 61%). The DFS of patients with LNR below 0.194 was 71%, and that of patients with LNR above 0.194 was 45% (log-rank test, p &lt; 0.001). In the patients with LNS ≥ 12, the cut-off of 0.257 could predict recurrence (specificity 86%). Conclusions: Stage II patients with LNS &lt; 12 tend to have shorter DFS than stage II patients with LNS ≥ 12. In stage III patients, an appropriate LNR cut-off is a better prognostic predictor than LNR quartile, especially in patients with LNS ≥ 12. abstract_id: PUBMED:27438774 Prognostic impact of lymph node skip metastasis in Stage III colorectal cancer. Aim: The aim of this study was to evaluate the prognostic impact of lymph node skip metastasis (LNSM) in patients with Stage III colorectal cancer. Method: Between April 2003 and December 2014, a total of 41 patients with lymph node skip metastasis (skip+) were compared with 86 patients with pericolic lymph node metastases [lymph node distribution (LND)1] and 57 patients with intermediate and/or main lymph node metastasis (LND2+3). All patients had radical D3 lymphadenectomy, performed either laparoscopically or as open surgery. Results: The frequency of pT1-2 stage cancer was significantly higher in the skip+ group than in the LND1 group (26.8% vs 5.8%, P = 0.001). The number of metastatic lymph nodes in the skip+ group was lower than in the LND2+3 group (1.9 ± 1.5 vs 6.5 ± 6.0, P &lt; 0.001). The 3-year disease-free survival (DFS) of the skip+, LND1 and LND2+3 groups was 64.8%, 69.7% and 40.1%, respectively (P = 0.008). The 3-year systemic recurrence rates of the skip+, LND1 and LND2+3 groups were 30.2%, 20.3% and 48.1%, respectively; (P = 0.002). Cox regression analysis revealed that preoperative carcinoembryonic antigen (CEA) of ≥ 5 ng/ml [hazard ratio (HR) = 2.2, P = 0.029], poor differentiation (HR = 3.8, P = 0.001) and skip+ (HR = 0.2, P = 0.021) were independently prognostic factors for DFS. Conclusion: The prognosis for the LND1-negative lymph node skip metastasis group was better than for the LND2+3 group and was comparable with that of the LND1 group after radical D3 lymphadenectomy. abstract_id: PUBMED:34293461 Combining tumor deposits with the number of lymph node metastases to improve the prognostic accuracy in stage III colon cancer: a post hoc analysis of the CALGB/SWOG 80702 phase III study (Alliance)☆. Background: In colon cancer, tumor deposits (TD) are considered in assigning prognosis and staging only in the absence of lymph node metastasis (i.e. stage III pN1c tumors). We aimed to evaluate the prognostic value of the presence and the number of TD in patients with stage III, node-positive colon cancer. 
Patients And Methods: All participants from the CALGB/SWOG 80702 phase III trial were included in this post hoc analysis. Pathology reports were reviewed for the presence and the number of TD, lymphovascular and perineural invasion. Associations with disease-free survival (DFS) and overall survival (OS) were evaluated by multivariable Cox models adjusting for sex, treatment arm, T-stage, N-stage, lymphovascular invasion, perineural invasion and lymph node ratio. Results: Overall, 2028 patients were included with 524 (26%) TD-positive and 1504 (74%) TD-negative tumors. Of the TD-positive patients, 80 (15.4%) were node negative (i.e. pN1c), 239 (46.1%) were pN1a/b (&lt;4 positive lymph nodes) and 200 (38.5%) were pN2 (≥4 positive lymph nodes). The presence of TD was associated with poorer DFS [adjusted hazard ratio (aHR) = 1.63, 95% CI 1.33-1.98] and OS (aHR = 1.59, 95% CI 1.24-2.04). The negative effect of TD was observed for both pN1a/b and pN2 groups. Among TD-positive patients, the number of TD had a linear negative effect on DFS and OS. Combining TD and the number of lymph node metastases, 104 of 1470 (7.1%) pN1 patients were re-staged as pN2, with worse outcomes than patients confirmed as pN1 (3-year DFS rate: 65.4% versus 80.5%, P = 0.0003; 5-year OS rate: 87.9% versus 69.1%, P = &lt;0.0001). DFS was not different between patients re-staged as pN2 and those initially staged as pN2 (3-year DFS rate: 65.4% versus 62.3%, P = 0.4895). Conclusion: Combining the number of TD and the number of lymph node metastases improved the prognostication accuracy of tumor-node-metastasis (TNM) staging. abstract_id: PUBMED:30940175 Lymph node yield and tumour subsite are associated with survival in stage I-III colon cancer: results from a national cohort study. Background: It has been suggested that apart from tumour and nodal status, a range of patient-related and histopathological factors including lymph node yield and tumour location seems to have prognostic implications in stage I-III colon cancer. We analysed the prognostic implication of lymph node yield and tumour subsite in stage I-III colon cancer. Methods: Data on patients with stage I to III adenocarcinoma of the colon and treated by curative resection in the period from 2003 to 2011 were extracted from the Danish Colorectal Cancer Group database, merged with information from the Danish National Patient Register and analysed. Results: A total of 13,766 patients were included in the analysis. The 5-year overall survival ranged from 59.3% (95% CI 55.7-62.9%) (lymph node yield 0-5) to 74.0% (95% CI 71.8-76.2%) (lymph node yield ≥ 18) for patients with stage I-II disease (p &lt; 0.0001) and from 36.4% (95% CI 29.8-43.0%) (lymph node yield 0-5) to 59.4% (95% CI 56.6-62.2%) (lymph node yield ≥ 18) for patients with stage III disease (p &lt; 0.0001). The 5-year overall survival for tumour side left/right was 59.3% (95% CI 57.9-60.7%)/64.8% (CI 63.4-66.2%) (p &lt; 0.0001). In the seven colonic tumour subsites, the 5-year overall survival ranged from 56.6% (95% CI 51.8-61.4%) at splenic flexure to 65.8% (95% CI 64.5-67.2%) in the sigmoid colon (p &lt; 0.0001). In a cox regression analysis, lymph node yield and tumour side right/left were found to be prognostic factors. Tumours at the hepatic and splenic flexures had an adverse prognostic outcome. Conclusion: For stage I-III colon cancer, a lymph node yield beyond the recommended 12 lymph nodes was associated with improved survival. 
Subsites in both the right colon and the left colon turned out to have an adverse prognostic outcome, questioning a simple classification into right-sided and left-sided colon cancer. abstract_id: PUBMED:23780968 Lymph node evaluation and survival in colorectal cancer: review of population-based, prospective studies. Aim: The total number of lymph nodes retrieved, the number of positive nodes, and/or their ratio are used to evaluate the degree of progression of colorectal cancer. The aim of the present study is to review the relevant literature in order to improve lymph node evaluation and the quality of clinical practice. Materials And Methods: The English language literature on large, population-based, prospective clinical studies of the evaluation of lymph nodes in colorectal cancer was reviewed. This review focuses on the lymph node harvest (LNH) and the lymph node ratio (LNR), and survival was also assessed. Results: The LNH was influenced by patient age, tumor size, Dukes' stage, preoperative radiotherapy, operative urgency, specimen length, pathology template, and academic status of the hospital. Many prospective studies demonstrated a significant correlation between high LNH and increased survival. LNR is an independent prognostic indicator for stage III colorectal cancer. However, there were many different cut-off values allowing for the optimal separation of subgroups according to survival. Conclusion: To improve lymph node evaluation and the quality of clinical practice, daily collaboration between surgeons and pathologists is important. Scientific evidence for reasonable and practical LNH and LNR values should be identified based on large, well-controlled, prospective studies. abstract_id: PUBMED:22102977 Clinical Significance of Lymph Node Ratio in Stage III Colorectal Cancer. Purpose: Recent literature has shown that lymph node ratio is superior to the absolute number of metastatic lymph nodes in predicting the prognosis in several malignancies other than colorectal cancer. The aim of this study was to evaluate the prognostic significance of the lymph node ratio (LNR) in patients with stage III colorectal cancer. Methods: We included 186 stage III colorectal cancer patients who underwent a curative resection over a 10-year period in one hospital. The cutoff point of LNR was chosen as 0.07 because there was a significant survival difference at that LNR. The Kaplan-Meier and the Cox proportional hazard models were used to evaluate the prognostic effect according to LNR. Results: There was statistically significantly longer overall survival in the group with LNR ≤ 0.07 than in the group with LNR > 0.07 (P = 0.008). In particular, there was a survival difference for the N1 patient group (LN < 4) according to LNR (5-year survival of N1 patients was lower in the group with LNR > 0.07, P = 0.025), but there was no survival difference for the N2 group (LN ≥ 4) according to LNR. The multivariate analysis showed that the LNR is an independent prognostic factor. Conclusions: LNR can be considered as a more accurate and potent modality for prognostic stratification in patients with stage III colorectal cancer. Answer: The evaluation of an increased number of lymph nodes (LNs) with colorectal cancer (CRC) resection does not appear to improve the detection of stage III disease. A retrospective review conducted at a community teaching hospital compared patients undergoing resection before and after a multidisciplinary initiative emphasizing the importance of LN counts.
Despite an increase in the mean number of LNs evaluated, there was no significant difference in the proportion of patients diagnosed with stage III CRC between the early and late groups. Additionally, the distribution of N1 and N2 disease among patients with positive LNs remained unchanged. These findings suggest that harvesting at least 12 LNs as a quality or performance measure may not be justified (PUBMED:19620539). Furthermore, other studies have focused on the prognostic value of lymph node ratio (LNR) in stage III colorectal cancer, indicating that LNR is not correlated with the number of removed lymph nodes but is positively correlated with the number of positive lymph nodes and N stage. LNR was found to be a prognostic factor with better prognostic value than N stage alone (PUBMED:22313594). Another study found that combining tumor deposits with the number of lymph node metastases could improve prognostic accuracy in stage III colon cancer (PUBMED:34293461). In summary, while increasing LN retrieval does not seem to identify more patients with stage III CRC, other factors such as LNR and the presence of tumor deposits may provide more accurate prognostic information.
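Several abstracts in this block turn on the lymph node ratio (LNR), the number of metastatic nodes divided by the number of nodes examined, and on picking a cutoff (0.07, 0.194, 0.23 or 0.257 in the studies above) that best separates patients by outcome. The sketch below is a minimal, illustrative version of that workflow on invented data: it computes each patient's LNR and scans candidate cutoffs for the one maximizing Youden's J (sensitivity + specificity - 1), which is one common way an ROC-based "optimal" cutoff such as the 0.194 reported in PUBMED:28972442 can be obtained. All names and numbers in the code are assumptions, not values from the cited cohorts.

```python
def lymph_node_ratio(positive, examined):
    """LNR = metastatic nodes / nodes examined (0 when no nodes examined)."""
    return positive / examined if examined else 0.0

# Toy cohort: (positive nodes, examined nodes, relapse within follow-up 0/1)
patients = [
    (1, 18, 0), (2, 9, 0), (5, 14, 1), (3, 22, 0),
    (7, 15, 1), (1, 6, 1), (4, 30, 0), (6, 11, 1),
]

lnr = [lymph_node_ratio(p, e) for p, e, _ in patients]
relapsed = [r for _, _, r in patients]

def youden_cutoff(scores, outcomes):
    """Scan candidate cutoffs; keep the one maximizing sensitivity + specificity - 1."""
    best_cut, best_j = None, -1.0
    for cut in sorted(set(scores)):
        tp = sum(1 for s, o in zip(scores, outcomes) if s >= cut and o == 1)
        fn = sum(1 for s, o in zip(scores, outcomes) if s < cut and o == 1)
        tn = sum(1 for s, o in zip(scores, outcomes) if s < cut and o == 0)
        fp = sum(1 for s, o in zip(scores, outcomes) if s >= cut and o == 0)
        sens = tp / (tp + fn) if (tp + fn) else 0.0
        spec = tn / (tn + fp) if (tn + fp) else 0.0
        j = sens + spec - 1
        if j > best_j:
            best_cut, best_j = cut, j
    return best_cut, best_j

cut, j = youden_cutoff(lnr, relapsed)
high_risk = [round(x, 3) for x in lnr if x >= cut]
print(f"Chosen LNR cutoff = {cut:.3f} (Youden J = {j:.2f}); "
      f"{len(high_risk)}/{len(lnr)} patients above cutoff")
```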
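Outcome comparisons in these studies (disease-free and overall survival by LNR group, lymph node yield group, or nodal stage) rely on the Kaplan-Meier product-limit estimator, usually followed by a log-rank test or Cox model for the formal comparison. Purely as a sketch of what that estimator does, the code below implements it from scratch and applies it to two invented follow-up series; in practice one would use an established survival-analysis library rather than hand-rolled code, and the grouping and follow-up values here are illustrative assumptions only.

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate.

    times  : follow-up time for each patient
    events : 1 if the event (e.g. relapse/death) occurred, 0 if censored
    Returns a list of (time, survival probability) steps.
    """
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    surv = 1.0
    curve = []
    i = 0
    while i < len(order):
        t = times[order[i]]
        deaths = censored = 0
        # group ties occurring at the same time point
        while i < len(order) and times[order[i]] == t:
            if events[order[i]]:
                deaths += 1
            else:
                censored += 1
            i += 1
        if deaths:
            surv *= 1 - deaths / at_risk
            curve.append((t, surv))
        at_risk -= deaths + censored
    return curve

# Hypothetical follow-up (months) for a high-LNR and a low-LNR group.
high_lnr_t = [6, 11, 14, 20, 27, 36];  high_lnr_e = [1, 1, 1, 0, 1, 0]
low_lnr_t  = [12, 25, 33, 40, 52, 60]; low_lnr_e  = [0, 1, 0, 0, 1, 0]

for label, t, e in [("high LNR", high_lnr_t, high_lnr_e),
                    ("low LNR", low_lnr_t, low_lnr_e)]:
    print(label, [(time, round(s, 2)) for time, s in kaplan_meier(t, e)])
```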
Instruction: Do larger people have more naevi? Abstracts: abstract_id: PUBMED:11101543 Do larger people have more naevi? Naevus frequency versus naevus density. Background: It is unclear which of the number or the density of naevi on the skin is the more appropriate measure of risk of melanoma. Furthermore, the relationship between the number of naevi and their density in an individual has not been explored. Thus, for example, it is unknown if larger people tend to have more naevi by virtue of having a larger skin area, or if the density of naevi is similar in people of different body sizes. In this study, we explored the relationship between the number and the density of naevi in a sample of adolescents. Subjects And Methods: A sample survey of naevi in 472 grade 9 secondary school students (aged 14-15 years) was conducted in Tasmania, Australia during 1992, and a subset of these individuals was followed up in 1997. Counts of naevi of various sizes were taken on the arm, leg, and back. Naevus density was estimated by using an algorithm to estimate body surface area from the height and weight of an individual. More general relationships of the naevus counts to height and weight were also explored. Finally, we considered whether the relationship between naevus density and the anthropometric variables could be confounded by exposure to ultraviolet radiation. Results: The mean number of naevi was very similar in the two samples. Naevus density was slightly lower in the 1997 sample, mainly because of increasing body size in the cohort. The numbers of naevi were only weakly related to height and weight in males, and there was essentially no relationship in females. Regression analysis showed significant relationships of weight to the back naevus counts in males in 1992 and 1997, and to the arm naevus count in males in 1997; otherwise, none of the regression coefficients for height and weight were statistically significant. This picture did not change following adjustment for potentially confounding variables indicating time spent outdoors or in the sun. Furthermore, there was no evidence that time spent in the sun was related to the body mass index. Conclusions: It appears that the number and density of naevi in an individual are unrelated. Accordingly, with the present state of knowledge concerning the risk of melanoma, both the number and density of naevi should be considered as equally valid in future studies as markers of the risk of melanoma, and in studies on the natural history of naevi. If the disease mechanism is systemic, and not related to particular naevi, naevus density might form the better marker of risk. However, if the disease mechanism is related to effects on particular naevi, then the risk would vary in proportion to the number of naevi. abstract_id: PUBMED:16965417 Dermoscopy in black people. Background: Little is known about the use of dermoscopy in nonwhite-skinned populations and whether it can influence diagnostic performance. Objectives: To evaluate for the first time the utility and efficacy of dermoscopy in a black population for the diagnosis of pigmented cutaneous lesions. Methods: In total, 100 consecutive clinically doubtful or equivocal pigmented skin lesions in black patients were submitted to dermoscopic examination. The lesions were observed using dermoscopy by two groups of dermatologists, one in Brazil (in vivo) and the other in Italy (on slide images). Besides diagnosis, each group recorded on the same type of form the dermoscopic features present. 
Results: Of 100 clinically suspicious cases, 79 were Clark naevi, 15 seborrhoeic keratoses, four blue naevi, one dermatofibroma and one melanoma. The two groups of observers succeeded in identifying and classifying all the lesions to such a margin of diagnostic accuracy that only a few cases (three Clark naevi) were subjected to surgical excision to confirm diagnosis. Conclusions: Darker pigmentation of the skin does not impede the identification of single dermoscopic features. As in lighter-skinned populations, dermoscopy in black people can also lead to early and accurate diagnosis of melanoma, thereby significantly reducing the number of unnecessary excisions. abstract_id: PUBMED:1498755 Use of host factors to identify people at high risk for cutaneous malignant melanoma . Objective: To determine which host characteristics are risk factors for cutaneous malignant melanoma in order to aim prevention and early detection programs at people at high risk. Design: Case-control study. Setting: Southern Ontario. Subjects: The 583 case subjects were aged 20 to 69 years and had had malignant melanoma newly diagnosed between Oct. 1, 1984, and Sept. 30, 1986. The 608 control subjects were randomly selected from a list of residents in the study area and were stratum matched for age, sex and municipality. Intervention: Through in-person interviews the interviewer ascertained exposure to putative external risk factors and assessed skin colour and number of nevi on the arm, and the subject reported his or her natural hair colour at age 20 years, eye colour, skin reaction to repeated sun exposure, and freckle and whole-body nevus densities. Results: Although all the host factors mentioned were significantly associated with melanoma risk when considered separately, only hair colour, skin reaction to repeated sun exposure, and self-reported freckle and nevus densities remained significant after backward logistic regression analysis. The odds ratio for melanoma was estimated to be 10.7 in people who had many nevi compared with those who had none (95% confidence interval [CI] 6.6 to 17.4), 4.0 in people who had red hair compared with those who had black hair (95% CI 1.9 to 8.2), 1.9 in people who had many freckles compared with those who had none or few (95% CI 1.3 to 2.8) and 1.4 respectively in people who burned and had a subsequent increase in tan and those who burned and had no increase in tan after repeated sun exposure compared with those who did not burn [corrected]. Conclusions: Four risk factors for malignant melanoma have been identified. Prospective evaluation of their predictive value should be done. In the meantime, however, these factors should be used to identify people apparently at high risk for malignant melanoma, who can then be targeted for early detection and prevention programs. abstract_id: PUBMED:1918489 Melanocytic nevus density in Asian, Indo-Pakistani, and white children: the Vancouver Mole Study. Melanocytic nevus density in 378 Asian and 68 Indo-Pakistani school children 6 to 18 years of age was compared with that in 1146 white children of the same age range. At all ages, the number of melanocytic nevi 2 mm or larger per square meter of body surface area was substantially lower in Asians and Indo-Pakistanis than in white persons. 
Among white persons, characteristics associated with a higher risk of cutaneous melanoma in adults, that is, light skin color, a propensity to burn rather than tan in the sun, and a history of numerous or severe sunburns, are also associated with the highest melanocytic nevus density in children. Examination of these same host pigmentation and sunburn factors among Asian children revealed no association with nevus density. abstract_id: PUBMED:19380663 Risk factors in elderly people for lentigo maligna compared with other melanomas: a double case-control study. Objective: To assess lentigo maligna (LM) as an epidemiological entity separate from other melanomas (OMs) in elderly people. Design: Double age- and sex-matched case-control study to compare the risk factors for LMs and OMs. Setting: General community. Patients: A total of 76 patients with LM were paired by age and sex with 76 patients with OMs and 152 controls. Main Outcome Measures: The association of melanoma risk with the following potential risk factors: sun exposure history by 10-year periods, frequency of sunburns, phenotypic traits, density of freckles and sun sensitivity at age 20 years, counts of nevi larger than 2 mm in diameter on the face and forearm, skin aging features (as assessed using a photographic scale), and history of basal and/or squamous cell carcinomas. Results: Risks of LMs and OMs were similarly associated with history of sunburns, light skin type, and freckling. Cumulative chronic outdoor and occupational sun exposures were not risk factors in either of the 2 groups of melanomas. Lentigo maligna differed from OMs by the absence of a detectable association with the number of nevi and a greater association with nonmelanoma skin cancers. Conclusions: Although chronically sun-exposed skin is a prerequisite for LM, risk of LM does not increase with the cumulative dose of sun exposure, but LM is associated with sunburn history, like all other types of melanomas. The main epidemiological characteristic of LM is the absence of an apparent relation with the genetic propensity to develop nevi. This epidemiological profile is in accordance with recent molecular findings and may also account for the histoclinical and evolutive characteristics of LM. abstract_id: PUBMED:17291298 Clinically distinct form of acquired dermal melanocytosis with review of published work. Dermal melanocytosis is most commonly found in the skin of Asians and other darkly pigmented people. It is histologically characterized by the presence of dermal melanocytes, with or without presence of dermal melanophages. Mongolian spot, nevus of Ito, nevus of Ota, nevus of Hori, and blue nevus are most common and represent distinct types of dermal melanocytosis. Other clinical patterns of acquired dermal melanocytosis have also been described. Herein, we report a unique case of acquired dermal melanocytosis diffusely affecting the entire back of a 50-year-old African-American male and also review and discuss various patterns of unusual acquired dermal melanocytosis. abstract_id: PUBMED:12399772 Health disparities in arthritis and musculoskeletal and skin diseases-the dermatology session: National Institute of Arthritis and Musculoskeletal and Skin Diseases, Bethesda, Maryland, December 15-16, 2000. The National Institute of Arthritis and Musculoskeletal and Skin Diseases hosted a diverse group of physicians and scientists to discuss health disparities in arthritis, musculoskeletal, and skin diseases.
This article discusses the cutaneous disease portion of the conference. Speakers described a history of scarce information on cutaneous diseases in skin of color, problems with the data that do exist, and inappropriate use of dermatologic data. Basic descriptive data on the structure and function of skin in people of color is needed. For specific cutaneous diseases, information must be collected on their epidemiology, clinical presentation, natural history, complications, and therapeutics. Researchers are standardizing methods for studying keloidal scars, and are developing and validating measurement tools for cutaneous diseases in skin of color, such as atypical nevi, psoriasis, and hand dermatitis, but more is needed. abstract_id: PUBMED:12544108 The longer your telomeres, the larger your nevus? N/A abstract_id: PUBMED:686664 Body size and form of children of predominantly black ancestry living in West and Central Africa, North and South America, and the West Indies. Stature, sitting height, hip width, arm and calf circumferences and body weight have been measured in black children of Richland County, South Carolina. Lower limb height and three indices of body shape were obtained from the measurements. Sample size exceeded 200 for each of five age-sex groups representing girls and boys aged 6 years, girls and boys aged 9 years, and boys aged 11 years. Comparisons are made with findings from previous research on children of predominantly black ancestry living in west and central Africa, the West Indies, and North, Central and South America. Black children of Richland County measured during 1974-77 are taller than black children studied since 1960 in Angola, Chad, Ghana, Liberia, Nigeria, Senegal, Uganda, Anguilla, Barbados, Cuba, Guyana, Jamaica, Nevis, St. Kitts, St. Vincent, and Surinam. Children of well-to-do black families in Accra and Ibadan are no taller or heavier than black children of Richland County taken without regard to socio-economic status. In hip width, averages for Richland County black children are larger than those for children of the Hutu and Yoruba tribes; in arm girth they are larger than children of the Hutu and Tutsi tribes. Age changes and group differences are reported for hip width relative to lower limb height, and lower limb height relative to sitting height. During childhood, the hip/lower limb index decreases, and the lower limb/sitting height index increases. Almost identical hip/lower limb indices characterize black populations in Africa, Cuba, and the United States. abstract_id: PUBMED:17372099 Nevus type in dermoscopy is related to skin type in white persons. Background: Dermoscopic classification of acquired melanocytic nevi (AMN) is based on the evaluation of 3 main criteria: global pattern, pigment distribution, and color. Objective: To determine whether these features are different in AMN in white people with different skin types (STs) according to the Fitzpatrick classification. Design: Digital dermoscopic images of AMN were evaluated, and the correlation of the 3 main dermoscopic criteria with patient ST was analyzed. Setting: Consecutive patients were recruited from 7 pigmented lesion clinics between June 1, 2004, and June 30, 2005. Patients: For each patient, the ST (I [always burns, never tans] to IV [rarely burns, tans with ease]) was scored, and 1 representative AMN (defined as the AMN showing a dermoscopic typology that is repeatedly seen in the same patient) was selected and photographed.
Main Outcome Measures: The distribution of the dermoscopic criteria of AMN in patients with different STs was calculated by univariate analysis. Differences in prevalence were tested using the chi-square test. The correlation between dermoscopic criteria and ST, adjusted for age, sex, and enrolling center, was evaluated by calculating odds ratios and 95% confidence intervals by logistic regression analysis. Results: Of 680 included patients, dermoscopic analysis revealed significant differences in the prevalent nevus pattern in the 4 ST groups. Light brown AMN with central hypopigmentation were associated with ST I, and ST IV was associated with the so-called black nevus (P < .001), typified by reticular pattern, central hyperpigmentation, and dark brown coloration. A significant association was also found between multifocal pattern and ST II and ST III. Conclusions: The dermoscopic nevus type varies according to different ST in white people. This knowledge may have an effect on obtaining for biopsy lesions that exhibit unusual dermoscopic patterns when patient ST is considered. Answer: The relationship between body size and the number of naevi (moles) has been explored in various studies, but the findings suggest that there is not a direct correlation between larger body size and an increased number of naevi. In a study conducted on adolescents in Tasmania, Australia, it was found that the number of naevi was only weakly related to height and weight in males, and there was essentially no relationship in females. The study concluded that the number and density of naevi in an individual are unrelated, and both should be considered as markers of the risk of melanoma in future studies (PUBMED:11101543). This suggests that larger people do not necessarily have more naevi simply because they have a larger skin area. The density of naevi, which is the number of naevi per unit of body surface area, was found to be slightly lower in the 1997 sample of the study, mainly due to increasing body size in the cohort. This indicates that as individuals grow larger, their naevi may not increase proportionally to their body size, leading to a lower density of naevi. Therefore, based on the available evidence, it cannot be conclusively stated that larger people have more naevi. Instead, the number of naevi seems to be weakly or not at all related to body size, and both the number and density of naevi should be considered in assessing melanoma risk.
Instruction: Is nasotracheal intubation safe in facial trauma patients? Abstracts: abstract_id: PUBMED:27863722 Is nasotracheal intubation safe in facial trauma patients? Background: In the prehospital setting, oral intubation is preferred in facial trauma patients due to the potential for further injury during nasotracheal intubation. This study compared the complications of nasal vs. oral vs. nasal + oral intubations performed by first responder crews in facial trauma patients. Our objective was to assess patient outcomes and complications to determine the risk of nasal intubation in facial trauma patients in the prehospital setting. Methods: Patients with facial trauma between 2008 and 2013 were abstracted from the Miami Valley Hospital trauma registry: 50 were nasal only (n), 27 nasal + oral (no), and 135 oral only (o) intubation. Analysis of variance with the post-hoc Least Significant Difference test and the chi-square test were used in the analysis. Results: The oral group was older [41.1 ± 17.6 (o) vs. 36.2 ± 14.1 (n) vs. 33.0 ± 15.7 (no), p = 0.032] and had a higher facial abbreviated injury severity (AIS) mean score (1.8 ± 0.6 vs. 1.3 ± 0.5 vs. 1.5 ± 0.5, p < 0.001). The three groups did not differ in mortality (n = 20% vs. o = 18% vs. no = 30%, p = 0.37). The n + o group was intubated longer (p < 0.001) and had longer ICU and hospital lengths of stay (p = 0.015 and p = 0.023). The three groups did not differ on the composite of any pulmonary complication - i.e., any one of sinusitis, pneumonia, atelectasis, and respiratory failure - (44% [no] vs. 24% [o] vs. 30% [n], p = 0.10). However, nasal + oral patients were more likely to have one or more of the eight complications studied [63% (no) vs. 28% (o) vs. 34% (n), p = 0.002], and to have a longer ICU and hospital length of stay (HLOS). Conclusions: Prehospital nasal intubation is a viable short-term alternative to oral intubation in patients with facial trauma and warrants further research. abstract_id: PUBMED:1155760 A technique of nasotracheal intubation in patients with recent facio-maxillary injury. A technique of nasotracheal intubation in patients with recent facial injury is described. Oral intubation is performed initially to ensure the safety of the patient during the procedure of nasal intubation. abstract_id: PUBMED:7396098 Experiences with nasotracheal intubation in the primary management of severe facial burns of face and neck (author's transl) Experiences with nasotracheal intubation in the primary management of 157 severe burns of the face and neck are reported. The indication for intubation is provided by the pathophysiology of these cases. Complications are minimized by a careful technique that makes due allowance for the individual anatomical features. The advantages and disadvantages of the method are discussed. abstract_id: PUBMED:9144052 Blind nasotracheal intubation in the presence of facial trauma. Blind nasotracheal intubation (BNTI) is an effective procedure for the intubation of trauma patients. The presence of major facial trauma has been considered a relative contraindication due to the perceived risk of intracranial placement. The purpose of the present study was to assess the risk of intracranial placement in patients with facial fractures who undergo BNTI. The records of 311 patients with facial fractures were reviewed for methods of intubation and complications. Eighty-two patients underwent BNTI.
There were no cases of intracranial placement, significant epistaxis requiring nasal packing, esophageal intubation, or osteomyelitis. Three patients (4%) developed sinusitis and eight (10%) developed aspiration pneumonia. We conclude that the presence of facial trauma does not appear to be a contraindication to BNTI. abstract_id: PUBMED:3334798 Nasotracheal intubation in traumatic craniofacial dislocation: use of the lighted stylet. The coexistence of facial trauma and suspected cervical spine injury represents a difficult problem in airway management. The successful use of guided nasotracheal intubation using a flexible lighted stylet is described, and its application to the critically injured patient is emphasized. abstract_id: PUBMED:2620523 Airway intubation in injured patients at the scene of an accident. Trauma patients requiring intubation at the scene of the accident were entered into a study from June 1985 to June 1987 to determine: 1) the success rate of intubation by flight crews and 2) factors important in managing the difficult airway at the scene. One hundred thirty-six patients were reviewed. The success rate of trauma patients intubated in the field was 92.6%. The success rate by procedure was 87% for orotracheal and 77.6% for nasotracheal intubation. Six out of the 10 patients unsuccessfully intubated had vomiting or blood in the oropharynx, which was cited as the reason for failure of intubation. All 24 patients requiring medications (paralytics and sedatives) for intubation were successfully intubated. Trismus and combativeness were the indications for medication usage. An aeromedical crew (MD, RN, RT) can successfully intubate trauma patients at the scene of the accident. Severe facial injuries with vomiting and blood in the oropharynx are factors in intubation failure. The use of muscle relaxants and sedatives facilitates difficult intubations. abstract_id: PUBMED:9144060 Intracranial placement of a nasotracheal tube after facial fracture: a rare complication. Extensive facial trauma is often associated with fractures to the skull base, cribriform plate, and sphenoid sinus. Attempted intubation of patients with facial trauma may result in intracranial penetration and placement of nasogastric or nasotracheal tubes into the brain. Such a complication carries significant morbidity and mortality. Intracranial placement of nasogastric tubes has been reported multiple times in the literature. Intracranial placement of a nasotracheal tube, however, has been reported only twice. Such a case is presented, along with a review of the literature and a discussion. abstract_id: PUBMED:25935954 Submental intubation: a new approach in panfacial trauma. The submental route for endotracheal intubation is an alternative to nasal intubation or tracheostomy in the surgical management of patients with complex craniomaxillofacial injuries. The critical indication for submental intubation is the requirement for intra-operative maxillomandibular fixation in the presence of injuries that preclude nasal intubation and in a situation where a tracheostomy is not otherwise required. Maxillomandibular fixation is essential to re-establish dental occlusion for a normal functional result in dentate patients with fractures involving alveolar segments of the jaws. However, maxillomandibular fixation precludes orotracheal intubation. Nasotracheal intubation is often used but is contra-indicated in the presence of skull base fractures and will interfere with the access to certain fracture types.
A tracheostomy has a high potential complication rate and in many patients, an alternative to the oral airway is not required beyond the peri-operative period. Submental intubation is a simple and useful technique with low morbidity in selected cases of craniomaxillofacial trauma. abstract_id: PUBMED:25450733 Patient treated with sub-mental intubation for maxillofacial trauma Maxillo-facial traumas are frequent and most often occur in young patients. Naso-tracheal or orotracheal intubation may be contraindicated in case of combined occlusal fracture and nasal or ethmoido-nasal fracture. This study reports a clinical case of a patient treated at the Lille University Hospital for a maxillofacial trauma associating fracture of nose and maxilla. The purpose was to assess the reliability of submental intubation as an alternative to tracheotomy. Submental intubation is a reliable, simple and safe technique allowing a one-stage surgical treatment in case of complex association of fractures without using tracheotomy. Its use should be implemented on a larger scale. abstract_id: PUBMED:8819724 Management of panfacial fractures - also an intubation problem Conventional oro- or nasotracheal intubation represents an obstacle in cases of panfacial fractures or elective operations of the facial skeleton above the Le Fort I level because of the intermaxillary fixation that often becomes necessary. Until recently, protection of the airway required tracheotomy, with its well-known disadvantages, or the usual intubations, which could compromise surgical strategies. A useful alternative method for protecting the airway involves the submental route for intubation and was first described in 1986 by Altemir. We present our experience with submandibular transmylohyoid intubation in 14 patients. The technique is relatively simple and includes only a small risk potential. The cosmetic result of the extraoral incision required is good to excellent. Answer: Nasotracheal intubation in facial trauma patients is a topic of some debate, with various studies highlighting different aspects of its safety and efficacy. According to the study abstracted from the Miami Valley Hospital trauma registry, prehospital nasal intubation is considered a viable short-term alternative to oral intubation in patients with facial trauma, suggesting that it can be safe when performed in the prehospital setting (PUBMED:27863722). This study did not find significant differences in mortality or pulmonary complications between nasal only, nasal + oral, and oral only intubation groups, although patients who underwent both nasal and oral intubation had a higher rate of complications and longer ICU and hospital stays. Another study emphasizes the importance of a careful technique to minimize complications in nasotracheal intubation for the primary management of severe burns of the face and neck (PUBMED:7396098). Similarly, the use of a lighted stylet for guided nasotracheal intubation is described as successful in the context of facial trauma and suspected cervical spine injury (PUBMED:3334798). However, there are concerns about the potential risks associated with nasotracheal intubation in the presence of facial trauma. One study reports that blind nasotracheal intubation (BNTI) did not result in intracranial placement in patients with facial fractures, suggesting that facial trauma may not be a contraindication to BNTI (PUBMED:9144052).
Yet, there are rare cases reported where nasotracheal intubation resulted in intracranial placement after facial fracture, which carries significant morbidity and mortality (PUBMED:9144060). Alternative intubation techniques, such as submental intubation, have been proposed for cases where nasal intubation is contraindicated due to skull base fractures or when it would interfere with surgical access (PUBMED:25935954, PUBMED:25450733, PUBMED:8819724). Submental intubation is described as a simple, useful technique with low morbidity in selected cases of craniomaxillofacial trauma, and it allows for one-stage surgical treatment without the need for tracheotomy. In conclusion, while nasotracheal intubation can be safe in facial trauma patients when performed with careful technique and appropriate precautions, there are risks associated with its use, particularly in the presence of certain types of fractures. Alternative techniques such as submental intubation may be preferred in some cases to avoid these risks.
Instruction: Is obesity a risk factor for mortality in coronary artery bypass surgery? Abstracts: abstract_id: PUBMED:15967852 Is obesity a risk factor for mortality in coronary artery bypass surgery? Background: The published articles examining obesity and CABG surgery contain conflicting results about the role of body mass index (BMI) as a risk factor for in-hospital mortality. Methods And Results: We studied 16 218 patients who underwent isolated CABG in the Providence Health System Cardiovascular Study Group database from 1997 to 2003. The effect of BMI on in-hospital mortality was assessed by logistic regression, with BMI group (underweight, normal, overweight, and 3 subgroups of obesity) as a categorical variable or transformations, including fractional polynomials, of BMI as a continuous variable. BMI was not a statistically significant risk factor for mortality in any of these assessments. However, using cumulative sum techniques, we found that the lowest risk-adjusted CABG in-hospital mortality was in the high-normal and overweight BMI subgroups and that patients with lower or higher BMI had slightly increased mortality. Conclusions: Body size is not a significant risk factor for CABG mortality, but the lowest mortality is found in the high-normal and overweight subgroups compared with obese and underweight. abstract_id: PUBMED:14976845 Is obesity still a risk factor for patients undergoing coronary surgery? Historically, cardiologists and surgeons have considered obesity as a risk factor for mortality and morbidity during cardiac surgery. This factor has in fact been included in the Parsonnet score, one of the first risk stratification systems. This has led to an under-representation of obese patients with severe ischemic heart disease in the surgical population. A review of the recent literature seems to suggest that obesity is no longer a risk factor for mortality and morbidity following coronary revascularization. Indeed, obesity is not included in the EuroSCORE, a more recent risk stratification system. Further evidence suggests that the use of off-pump coronary surgery in these patients is associated with a reduced in-hospital mortality and morbidity when compared with conventional coronary surgery with cardiopulmonary bypass and cardioplegic arrest. abstract_id: PUBMED:34747461 Sex difference in the risk factor distributions and outcomes after coronary artery bypass graft surgery in the young population. Objectives: Coronary artery disease is becoming a major health concern in the young population. Male and female patients may experience different journeys after coronary artery disease events. We aimed to evaluate risk factors and compare outcomes between young male and female patients undergoing coronary artery bypass graft surgery (CABG). Methods: In this registry-based large sample size study, patients undergoing isolated CABG at a young age (premature isolated CABG) between 2007 and 2016 were included and followed up until 2020. Premature was defined as women and men younger than 55 years old. The main end points of the study were 7-year all-cause mortality and 7-year major adverse cardiovascular and cerebrovascular events (MACCEs). Results: Of a total of 24 428 patients who underwent CABG, 7217 patients (men-to-women ratio ≈4:1) with premature isolated CABG were included. The median follow-up duration was 78.5 months (75.2-81.6 months).
The prevalence rates of diabetes mellitus, hypertension, dyslipidaemia and obesity were significantly higher in women than in men (58.3% vs 28.6%, 64.2% vs 38.5%, 69.7% vs 55.3% and 44.7 vs 23.9, respectively; all Ps < 0.05). The risk factor burden (mean of the risk factor count per year) was also higher among the female population. Diabetes mellitus was the common mortality predictor between men and women. In the subgroup analysis (interaction analysis in the adjusted model), hypertensive females had a higher rate of MACCE and a higher rate of mortality than hypertensive males; however, this difference was not significant in the non-hypertensive population. Opium addiction was a strong predictor of MACCE and all-cause mortality among men. Female patients had a higher rate of 7-year MACCE (hazard ratio, 1.33; 95% confidence interval, 1.16-1.51) and a higher rate of 7-year all-cause mortality (hazard ratio, 1.23; 95% confidence interval, 0.98-1.53). Conclusions: The risk factor profile and predictors of outcomes were different between our female and male patients. Women carried a higher risk of events and mortality after CABG at a young age. abstract_id: PUBMED:28217179 Does Body Mass Index Affect Mortality in Coronary Surgery? Introduction: The Body Mass Index (BMI) quantifies nutritional status and classifies humans as underweight, of normal weight, overweight, mildly obese, moderately obese or morbidly obese. Obesity is the excessive accumulation of fat, defined as BMI higher than 30 kg/m2. Obesity is widely accepted to complicate anaesthesia and surgery, being a risk factor for mediastinitis after coronary artery bypass grafting (CABG). We sought the evidence on operative mortality of CABG between standard BMI groups. Materials And Methodology: A simple literature review of papers presenting the mortality of CABG by BMI group: Underweight (BMI ≤ 18.49 kg/m2), normal weight (BMI 18.5-24.9 kg/m2), overweight (BMI 25.0-29.9 kg/m2), mild obesity (BMI 30.0-34.9 kg/m2), moderate obesity (BMI 35.0-39.9 kg/m2), or morbid obesity (BMI ≥ 40.0 kg/m2). Results: We identified 18 relevant studies with 1,027,711 patients in total. Their variability in size of samples and choice of BMI groups precluded us from attempting inferential statistics. The overall cumulative mortality was 2.7%. Underweight patients had by far the highest mortality (6.6%). Overweight patients had the lowest group mortality (2.1%). The group mortality for morbidly obese patients was 3.44%. Discussion: Patients with extreme BMIs undergoing CABG (underweight ones more than morbidly obese) suffer increased crude mortality. This simple observation indicates that undernutrition and morbid obesity need to be further explored as risk factors for coronary surgery. abstract_id: PUBMED:12414047 Risk of morbidity and in-hospital mortality in obese patients undergoing coronary artery bypass surgery. Objectives: Obesity is often perceived to be a risk factor for adverse outcomes following coronary artery bypass graft (CABG) surgery. Several studies have been unclear about the relationship between obesity and the risk of adverse outcomes. The aim of this study was to examine the relationship between obesity and in-hospital outcomes following CABG, while adjusting for confounding factors. Methods: A total of 4713 consecutive patients undergoing isolated CABG between April 1997 and September 2001 were retrospectively analyzed.
Body mass index (BMI) was used as the measure of obesity and was grouped as non-obese (BMI <30), obese (BMI 30-35), and severely obese (BMI ≥35). Associations between obesity and in-hospital outcomes were assessed by use of logistic regression to adjust for differences in patient characteristics. Results: A total of 3429 patients were defined as non-obese, compared to 1041 obese and 243 severely obese. There was no association between obesity and in-hospital mortality, stroke, myocardial infarction, re-exploration for bleeding and renal failure. Obesity was significantly associated with atrial arrhythmia (adjusted odds ratio (OR) 1.19, P = 0.037 for the obese; adjusted OR 1.52, P = 0.008 for the severely obese) and sternal wound infections (adjusted OR 1.82, P = 0.002 for the obese; adjusted OR 2.10, P = 0.038 for the severely obese). The severely obese patients were 4.17 (P < 0.001) times more likely to develop harvest site infections. Severely obese patients were also more likely to have prolonged mechanical ventilation and post-operative stays, compared to non-obese patients. Conclusions: Obese patients are not associated with an increased risk of in-hospital mortality following coronary artery bypass surgery. In contrast, there is a significant increased risk of morbidities and post-operative length of stay in obese patients compared to non-obese patients. abstract_id: PUBMED:29917004 Is female sex an independent risk factor for early mortality in isolated coronary artery bypass graft? A propensity-matched analysis. Aims: Despite several studies reporting a higher mortality rate for women undergoing isolated coronary artery bypass graft (CABG) surgery, this issue has not been completely clarified. The aim of the current study was to determine if sex differences in CABG surgery affect adjusted operative results and operative mortality. Methods: Between January 2005 and December 2012, 3756 consecutive patients (3009 men and 747 women) underwent isolated CABG at our institution. Women presented with older age, smaller BSA and higher rate of preoperative comorbidities. The propensity score methodology was used to obtain risk-adjusted outcome comparisons between the two groups. Results: The distribution of preoperative variables among matched pairs (553 patients in both groups) was, on average, equal. Propensity-matched women received a similar number of distal anastomoses (P = 0.56), had similar rate of left internal thoracic artery (LITA) graft (P = 0.73) and comparable extracorporeal circulation (P = 0.61) and aortic cross-clamp (P = 0.39) time as men. Postoperative complications were comparable in the two study groups with the exception of postoperative transfusion rate which was significantly higher in women (P = 0.01). However, operative mortality (3.8 vs. 2.7%; P = 0.20), perioperative acute myocardial infarction (P = 0.50) and stroke (P = 0.75) were not significantly different between the two groups. Interestingly, after adjustment for the other baseline characteristics, female patients with diabetes (P = 0.03), preoperative renal dysfunction (P = 0.07) and obesity (P = 0.07) exhibited a greater operative mortality rate compared with men with the same comorbidities. Conclusion: After adjustment for preoperative variables, female sex seems not to be an independent risk factor for early mortality in isolated CABG surgery. However, renal impairment, diabetes and obesity seem to play an important role in the operative risk profile of women undergoing isolated CABG.
abstract_id: PUBMED:9339509 Risk factor analysis among Egyptian patients who underwent coronary artery bypass surgery. We conducted a retrospective review of Egyptian patients who underwent coronary artery bypass graft surgery at our institution between 1980 and 1995. We examined the prevalence of coronary artery disease risk factors and evaluated the early postoperative results. We then compared these results with the corresponding data in a subset of American patients who underwent coronary artery bypass grafting at our institution in 1993. There were 290 Egyptian patients: 275 men and 15 women. The mean age was 54.5 years (range, 30 to 70 years). Angina was present in 258 (89%) of the Egyptian patients; of these, 186 (72.1%) were in Canadian Cardiovascular Society class 3 or 4. Risk factor analysis revealed a high prevalence of hyperlipidemia (69.7%), cigarette-smoking (66.6%), family history of coronary artery disease (53.1%), hypertension (46.9%), obesity (46.2%), and diabetes mellitus (32.4%). Comparisons between the 2 groups showed that the risk factors, except for hypertension, were significantly higher in the Egyptian patients, despite the older age of the Americans (mean, 65.5 years; range, 22 to 88 years). The prevalence of triple-vessel disease was 86.6% in the Egyptian patients and 51.0% in the American patients (p < 0.001). The operative morbidity rates in the Egyptian patients were low: these included arrhythmias (13.8%), bleeding (13.4%), infection (7.6%), low cardiac output (3.4%), myocardial infarction (3.4%), and cerebrovascular accident (1.4%). The hospital mortality rate was 1.4% for the Egyptians and 1.7% for the Americans (NS). These results show that, despite the high prevalence of risk factors among Egyptian patients with coronary artery disease, coronary artery bypass grafting can be performed with low operative morbidity and mortality rates. abstract_id: PUBMED:18361211 Is obesity still a coronary risk factor? Although obesity and, in particular, abdominal obesity is clearly a risk factor for developing coronary artery disease, once coronary artery disease has been established, the correlation of obesity with total mortality, cardiovascular mortality, myocardial infarction, and revascularization is unclear and still remains a matter of debate. The relationship between obesity and mortality in patients with coronary artery disease has so far only been investigated by posthoc analysis of cohort studies, which have produced contradictory results. When a higher percentage body fat has been found to be a strong independent predictor of event-free survival, the phenomenon has been described as an 'obesity paradox' or 'reverse epidemiology'. A recent meta-analysis, appearing in the August 19 issue of Lancet on 250,152 patients with documented coronary artery disease, suggests that after grouping 40 cohort studies with adjusted risks, overweight patients were consistently associated with a better survival and lower cardiovascular events than patients with a low body mass index, whereas obesity was associated with a higher total mortality only in patients with history of coronary artery bypass graft, and severe obesity was associated with a significantly higher cardiovascular mortality but not with an increased risk for total mortality.
Far from proving that obesity is harmless, these findings suggest that alternative methods might be required to better characterize individuals who truly have excess body fat and that additional studies with different methods are needed. Moreover, still unknown is the unique contribution of higher muscle-to-fat ratio, which may be merely a surrogate of increased physical fitness. Future research is needed to assess the link between high muscle mass, high body fat and clinical outcomes. abstract_id: PUBMED:29320517 Explaining the decline in coronary heart disease mortality rates in the Slovak Republic between 1993-2008. Objective: Between the years 1993 and 2008, mortality rates from coronary heart disease (CHD) in the Slovak Republic have decreased by almost one quarter. However, this was a smaller decline than in neighbouring countries. The aim of this modelling study was therefore to quantify the contributions of risk factor changes and the use of evidence-based medical therapies to the CHD mortality decline between 1993 and 2008. Methods: We identified, obtained and scrutinised the data required for the model. These data detailed trends in the major population cardiovascular risk factors (smoking, blood pressure, total cholesterol, diabetes prevalence, body mass index (BMI) and physical activity levels), and also the uptake of all standard CHD treatments. The main data sources were official statistics (National Health Information Centre and Statistical Office of the Slovak Republic) and national representative studies (AUDIT, SLOVAKS, SLOVASeZ, CINDI, EHES, EHIS). The previously validated IMPACT policy model was then used to combine and integrate these data with effect sizes from published meta-analyses quantifying the effectiveness of specific evidence-based treatments, and population-wide changes in cardiovascular risk factors. Results were expressed as deaths prevented or postponed (DPPs) attributable to risk factor changes or treatments. Uncertainties were explored using sensitivity analyses. Results: Between 1993 and 2008 age-adjusted CHD mortality rates in the Slovak Republic (SR) decreased by 23% in men and 26% in women aged 25-74 years. This represented some 1820 fewer CHD deaths in 2008 than expected if mortality rates had not fallen. The IMPACT model explained 91% of this mortality decline. Approximately 50% of the decline was attributable to changes in acute phase and secondary prevention treatments, particularly acute and chronic treatments for heart failure (≈12%), acute coronary syndrome treatments (≈9%) and secondary prevention following AMI and revascularisation (≈8%). Changes in CHD risk factors explained approximately 41% of the total mortality decrease, mainly reflecting reductions in total serum cholesterol. However, other risk factors demonstrated adverse trends and thus generated approximately 740 additional deaths. Conclusion: Our analysis suggests that approximately half the CHD mortality fall recently observed in the SR may be attributable to the increased use of evidence-based treatments. However, the adverse trends observed in all the major cardiovascular risk factors (apart from total cholesterol) are deeply worrying. They highlight the need for more energetic population-wide prevention policies such as tobacco control, reducing salt and industrial trans fats content in processed food, clearer food labelling and regulated marketing of processed foods and sugary drinks. 
abstract_id: PUBMED:17719470 Metabolic syndrome increases operative mortality in patients undergoing coronary artery bypass grafting surgery. Objectives: The aim of this study was to determine the impact of the metabolic syndrome (MS) on operative mortality after a coronary artery bypass grafting surgery (CABG). Background: Diabetes and obesity are highly prevalent among patients undergoing CABG. However, it remains unclear whether these factors have a significant impact on operative mortality after this procedure. We hypothesized that the metabolic abnormalities associated with MS could negatively influence the operative outcome of CABG surgery. Methods: We retrospectively analyzed the data of 5,304 consecutive patients who underwent an isolated CABG procedure between 2000 and 2004. Of these 5,304 patients, 2,411 (46%) patients met the National Cholesterol Education Program-Adult Treatment Panel III criteria for MS. The primary end point was operative mortality. Results: The operative mortality after CABG surgery was 2.4% in patients with MS and 0.9% in patients without MS (p < 0.0001). The MS was a strong independent predictor of operative mortality (relative risk 3.04 [95% confidence interval (CI) 1.73 to 5.32], p = 0.0001). After adjusting for other risk factors, the risk of mortality was increased 2.69-fold (95% CI 1.43 to 5.06; p = 0.002) in patients with MS and diabetes and 2.36-fold (95% CI 1.26 to 4.41; p = 0.007) in patients with MS and no diabetes, whereas it was not significantly increased in the patients with diabetes and no MS. Conclusions: This is the first study to report that MS is a highly prevalent and powerful risk factor for operative mortality in patients undergoing a CABG surgery. Thus, interventions that could contribute to reduce the prevalence of MS in patients with coronary artery disease or that could acutely modify the metabolic perturbations of MS at the time of CABG might substantially improve survival in these patients. Answer: The relationship between obesity and mortality in coronary artery bypass surgery (CABG) is complex and has been the subject of various studies with conflicting results. Some studies suggest that obesity is not a significant risk factor for in-hospital mortality following CABG. For instance, a study of 16,218 patients found that body size was not a significant risk factor for CABG mortality, with the lowest mortality observed in the high-normal and overweight subgroups compared to obese and underweight groups (PUBMED:15967852). Similarly, another study indicated that obesity was not associated with an increased risk of in-hospital mortality, although it was significantly associated with other morbidities such as atrial arrhythmia and sternal wound infections (PUBMED:12414047). Conversely, other research has identified obesity as a risk factor for operative mortality in CABG. A study on the metabolic syndrome, which includes obesity as a component, found that it was a strong independent predictor of operative mortality (PUBMED:17719470). Additionally, a study on sex differences in CABG outcomes reported that female patients with obesity exhibited a greater operative mortality rate compared with men with the same comorbidity (PUBMED:29917004).
The concept of an "obesity paradox" has also been discussed, where overweight patients were associated with better survival and lower cardiovascular events than those with a low body mass index, while severe obesity was associated with a significantly higher cardiovascular mortality but not with an increased risk for total mortality (PUBMED:18361211). In summary, while some studies suggest that obesity is not a significant risk factor for mortality in CABG (PUBMED:15967852; PUBMED:12414047), others indicate that it can be a risk factor, particularly when considering the metabolic syndrome (PUBMED:17719470) or in specific subgroups such as women (PUBMED:29917004). The concept of an "obesity paradox" further complicates the relationship between obesity and CABG outcomes (PUBMED:18361211).
Instruction: Children with multiple erythema migrans: are there any pre-treatment symptoms and/or signs suggestive for central nervous system involvement? Abstracts: abstract_id: PUBMED:12422594 Children with multiple erythema migrans: are there any pre-treatment symptoms and/or signs suggestive for central nervous system involvement? Objective: To establish eventual signs and symptoms suggestive for central nervous system involvement in children with multiple erythema migrans. Methods: Patients younger than 15 years with multiple erythema migrans, diagnosed at our department from 1996 to 2000, were included in this prospective study. Demographic, clinical and laboratory findings were obtained and compared for a group of patients with pleocytosis (interpreted as a sign of central nervous system involvement) and a group of children with normal cerebrospinal fluid findings. Results: Cerebrospinal fluid pleocytosis (cell counts ≥ 5 x 10^6/l) was detected in 55/214 (25.7%) children; it was lymphocytic in 94.5% of patients and ranged from 5 to 320 (median, 10 x 10^6/l). Compared with the group with normal cerebrospinal fluid findings, patients with pleocytosis more often reported having had Lyme borreliosis in the past (8/55 versus 3/159; p = 0.0011), had longer incubation periods (25.5 versus 14 days; p = 0.0269), larger diameter of the largest erythema migrans at the time of first examination (10 versus 5.5 cm; p = 0.0055), higher frequency of associated systemic symptoms (45.5% versus 21.4%; p = 0.0011), positive meningeal signs (10.9% versus 1.9%; p = 0.0100), borrelial IgG antibodies in cerebrospinal fluid (3/49 versus 0/150; p = 0.0142) and B. burgdorferi s.l. isolated from cerebrospinal fluid (7/52 versus 1/147; p = 0.0004), but less often had mild initial disease (67.3% versus 88.7%; p = 0.0006). Conclusions: Cerebrospinal fluid pleocytosis was detected in 25.7% of children with multiple erythema migrans. Although several clinical and laboratory abnormalities were present significantly more often in patients with elevated cell counts than with normal cerebrospinal fluid findings, discriminatory significance for the majority of these abnormalities was low, particularly because of low negative predictive values. In more than 2/3 of patients with pleocytosis the initial disease was mild, fewer than 1/2 reported systemic symptoms, and meningeal signs were expressed in only 11%. abstract_id: PUBMED:34159518 A case of lymphomatoid granulomatosis with central nervous system involvement successfully treated with IFNα. Lymphomatoid granulomatosis (LYG) is a rare Epstein-Barr virus (EBV)-driven B-cell lymphoproliferative disease affecting mainly extranodal sites such as the lung, central nervous system (CNS), skin, kidney, and liver. We report a case of low-grade LYG involving the CNS that was successfully treated with interferon alpha (IFNα). A 69-year-old woman developed necrotic erythema of the skin and was initially diagnosed with pyoderma gangrenosum based on skin biopsy. She showed a limited response to prednisolone. Approximately 6 months after the initial onset, low-grade LYG was diagnosed after detection of CNS lesions on brain biopsy. The whole blood EBV-DNA load determined by real-time polymerase chain reaction was slightly elevated. Two months into IFNα therapy, skin and CNS lesions had responded favorably and the EBV-DNA load decreased. IFNα plays an important role in treatment of LYG through its antiproliferative, immunomodulatory, and anti-EBV effects.
To our knowledge, this is the first case report of successful treatment with IFNα in Japan. Further investigation is necessary to determine optimal use of IFNα for LYG. abstract_id: PUBMED:8070450 Evaluation of central nervous system involvement in Lyme borreliosis patients with a solitary erythema migrans lesion. To determine whether early dissemination of Borrelia burgdorferi to the central nervous system occurs in stage I of Lyme borreliosis, neurological and cerebrospinal fluid examination was performed in 48 consecutive patients in whom the only sign of infection was a solitary erythema migrans lesion. Long-term follow-up after treatment with tetracycline was carried out by telephone interview. At presentation, neurological findings were normal in all 48 patients. Cerebrospinal fluid samples were obtained from 29 (60%) patients. Mild pleocytosis and mild impairment of the blood-brain barrier were present in four and one of these patients, respectively. No significant amount of tumor necrosis factor or interleukin 6 was found in the cerebrospinal fluid samples. Culture results of 13 cerebrospinal fluid samples were negative. Borrelia burgdorferi DNA was only detected by the polymerase chain reaction in one of two aliquots of the cerebrospinal fluid sample of one patient. None of 46 patients who were interviewed 12 to 51 (median 25) months after antibiotic treatment developed manifestations consistent with disseminated or chronic Lyme borreliosis. Thus, no compelling evidence was found for the presence of asymptomatic central nervous system involvement in patients with clinically localized Lyme borreliosis. abstract_id: PUBMED:25810722 A case of sarcoidosis of the central nervous system and orbita. Sarcoidosis is a multisystemic disease characterized by granulomatous inflammation. Lung or lymph node involvement is common. We present a rare case of sarcoidosis that began with orbital involvement, and a month later, due to insufficient treatment, it involved the central nervous system. A 49-year-old female patient began suffering from swelling in her right eye, redness, ptosis, and limited eye movements two months ago. Gadolinium-enhanced orbital magnetic resonance imaging showed thickening of the lacrimal gland and the right medial rectus muscle. After three weeks of local antibiotic and steroid treatments, her symptoms were resolved. One month ago, the patient reported sudden weakness in her right arm and leg. After laboratory tests and imaging studies, the patient was diagnosed with probable neurosarcoidosis using the Zajicek criteria and treated with prednisone (1 mg/kg/day). Although sarcoidosis frequently presents with lung and lymph node involvement, it is rarely accompanied by orbital involvement. Patients with orbital symptoms may receive a late diagnosis and insufficient central nervous system treatment. Involvement of the central nervous system in sarcoidosis leads to high morbidity and mortality rates. Therefore, early diagnosis and treatment are very important. abstract_id: PUBMED:9564777 A recovered case of SLE with central nervous system involvement who relapsed presenting new symptoms associated with development of serum anti-Sm antibody A rare SLE patient with central nervous system involvement (CNS-SLE) who relapsed presenting new symptoms associated with the development of serum anti-Sm antibody and was then successfully treated with cyclophosphamide (CY) pulse therapy is presented here. 
A 47-year-old housewife was admitted to Kushiro City General Hospital because of fever, limb erythema and drowsy consciousness in September 1995. On the basis of convulsion, proteinuria, leukopenia, thrombopenia, serum positive tests for both anti-nuclear antibody and anti-SSA antibody and low complement levels, as well as elevations of IgG index and IL-6 in the cerebrospinal fluid (CSF), she was diagnosed as having CNS-SLE. Serum tests for anti-CL-beta 2 GPI antibody and lupus anticoagulant were negative. Serum test for HBs antigen was positive. She was treated successfully with methylprednisolone (mPSL) pulse therapy and plasma exchange (PE). Prednisolone was gradually tapered to the dosage of 17.5 mg per day and she was discharged in April 1996. She was re-admitted because of fever, an exacerbation of skin eruption and arthralgia in October 1996. Serum anti-Sm antibody was found to be positive. mPSL pulse therapy was not effective. On the basis of hallucination and elevations of IgG index and IL-6 in the CSF, a diagnosis of relapsed CNS-SLE was made. However, the level of IFN-alpha in the CSF was normal. Although PE was not effective, CY pulse therapy was markedly effective. abstract_id: PUBMED:9667533 Children with anogenital symptoms and signs referred for sexual abuse evaluations. Objective: To determine whether children referred to a sexual abuse clinic because of anogenital symptoms or signs have examination findings that are suggestive of or probable or definitive for sexual abuse. Design: Case series of 157 patients. Setting: Child and adolescent ambulatory care sexual abuse clinic. Results: A medical records review of 3660 cases was done; 157 cases were identified for study. Most (75%) referrals were from medical clinics. Of 184 complaints, the most common presenting symptom or sign was anogenital bleeding or bruising (29.3%), followed by irritation or redness (21.7%), abnormal anogenital anatomy (20.7%), vaginal discharge (18.4%), lesions (6.5%), and "other" symptoms or signs (3.3%). We used a standardized classification system and determined that 25 patients (15%) had examination findings in the sexual abuse clinic that were suggestive of or probable or definitive for sexual abuse. Although 85 patients had examination findings that corroborated the presenting symptom(s), 70 had nonspecific examination findings or a diagnosis other than sexual abuse. Seventy-two patients had normal examination findings. Only patients with the presenting symptom of lesions had an increased likelihood of a sexual abuse diagnosis. Common examination findings included anogenital erythema, enhanced vascularity of the hymen or vestibule in prepubertal girls, labial adhesions, and culture-negative vaginitis. Conclusions: Few children are referred for sexual abuse evaluations based on physical signs or symptoms alone. Children with anogenital symptoms but without a disclosure or suspicion of sexual abuse are unlikely to have examination findings suggestive of abuse. The evaluation of children with anogenital symptoms and signs should include a consideration of alternative conditions and causes not directly related to sexual abuse. abstract_id: PUBMED:1740859 Invasion of the central nervous system by Borrelia burgdorferi in acute disseminated infection.
Objective: To determine central nervous system (CNS) involvement in acutely disseminated Borrelia burgdorferi infection by measurement of borrelia-specific DNA using the polymerase chain-reaction (PCR) assay and to compare the results of this with standard serological tests. Design: Prospective study with laboratory investigators blinded to clinical data. Setting: Multicenter office practice with a central reference laboratory. Patients: Cerebrospinal fluid (CSF) was collected from 12 patients with acute disseminated Lyme borreliosis with less than 2 weeks of active disease. The normal control specimens came from 16 patients whose CSF samples had been sent to the clinical laboratory for tests unrelated to the present study. Main Outcome Measures: Clinical evidence of disease and laboratory abnormalities. Results: Eight of the 12 patients (four of six with multiple areas of erythema migrans and four of six with cranial neuritis without erythema migrans) had B burgdorferi-specific DNA in their CSF. Among the 12 patients studied, nine had acute cranial neuritis and six had multiple erythema migrans lesions. Just four of the eight who were found to have spirochetal DNA in their CSF had complaints suggestive of CNS infection. In three of the PCR-positive CSF samples, no other abnormalities were noted. None of 16 samples from controls were positive in the PCR assay. Conclusion: B burgdorferi can invade the CNS early in the course of infection. Careful consideration should be given to choosing antibiotics that achieve adequate CSF levels in patients with disseminated infection. abstract_id: PUBMED:33628010 Multiple Eyelid Signs are Suggestive of Demodex Infestation. Purpose: To determine the signs associated with Demodex infestation in blepharitis patients and the rates of eyelash Demodex colonization among blepharitic patients and healthy individuals in Thailand. Patients And Methods: This was a prospective, case-control clinical study. Eighty-three controls and 43 blepharitic patients were consecutively enrolled from Walailak University Hospital. Each patient answered a questionnaire, underwent a complete eye exam, and had four lashes from each eye epilated; eyelashes were directly examined under a light microscope. Ocular signs, including types of eyelash debris (waxy debris, scaly debris, or cylindrical dandruff (CD)), erythema and telangiectasia of the eyelid, were noted. Results: Demodex infestation was observed more frequently among patients with blepharitis than among controls (67.4% in the blepharitis group, 26.5% in the control group, p-value <0.001). The participants with Demodex infestation were 5.7 times more likely to have blepharitis than the controls (odds ratio 5.74; 95% confidence interval 2.57-12.82, p-value <0.001). Clinical signs that were significantly (p-value <0.05) related to demodicosis were lid erythema, lid telangiectasia, and any type of eyelash debris (scaly, waxy, or CD). Conclusion: In contrast to previous studies that found only CD to be highly associated with eyelash demodicosis, we found multiple suggestive signs: any kind of debris on eyelashes (scaly, waxy, or CD), eyelid erythema, and eyelid telangiectasia. Approximately one-quarter (26.5%) of the studied population had asymptomatic mite colonization. Demodex infestation should therefore always be on the list of possible etiologies when treating patients with blepharitis or other ocular surface-related problems.
abstract_id: PUBMED:12422597 Cerebrospinal fluid findings in patients with symptoms suggesting chronic Lyme borreliosis. Background: Patients with non-specific long-lasting symptoms such as headache, concentration disturbances, and vertigo and who have positive serum borrelial antibody titres are often assumed to have chronic Lyme borreliosis. Because of the possibility that they may have central nervous system involvement they are frequently treated with courses of i.v. ceftriaxone. We assessed central nervous system involvement by examining cerebrospinal fluid samples in a group of such patients. Patients And Methods: Adult patients who qualified for the study had non-specific symptoms suggesting central nervous system involvement for longer than six months (but without overt clinical signs of such involvement) and positive serum borrelial antibody titres and/or erythema migrans prior to the onset of symptoms. Cerebrospinal fluid was examined in all patients. Results: None of the 77 patients included in the study (median duration of symptoms 18 months) had pleocytosis and there was no isolation of Borrelia burgdorferi sensu lato from cerebrospinal fluid. Mildly elevated protein concentration and intrathecal borrelial IgG antibody synthesis were demonstrated in 16 (21%) and 7 (9.1%) patients, respectively. Conclusions: In patients with non-specific long-lasting symptoms attributed to Lyme borreliosis but with the absence of overt clinical signs suggesting central nervous system involvement, the findings of cerebrospinal fluid examination are usually in the normal range. Routine treatment of such patients with i.v. ceftriaxone is not to be encouraged. abstract_id: PUBMED:36040820 Sensitivity, Specificity, and Predictive Values of Laryngopharyngeal Reflux Symptoms and Signs in Clinical Practice. Objective: To investigate the sensitivity (SE), specificity (SP), and positive and negative predictive value (PPV and NPV) of symptoms and signs of laryngopharyngeal reflux (LPR). Study Design: Prospective controlled. Setting: University medical center. Methods: Patients presenting with LPR symptoms and signs were consecutively included after diagnosis confirmation through 24-hour hypopharyngeal-esophageal multichannel intraluminal impedance-pH monitoring. Healthy individuals were recruited to compose a control group. Symptoms and signs were evaluated with the reflux symptom score and reflux sign assessment. The SE, SP, PPV, and NPV of symptoms and signs were assessed. Results: The study included 403 patients with LPR and 144 healthy individuals. Throat clearing, globus sensation, heartburn, and excess throat mucus were symptoms with the highest SE (67.5%-69.7%), SP (12.5%-20.8%), and NPV (48.3%-49.2%). The combination of throat clearing, heartburn, globus sensation, and excess throat mucus led to a high SE (96.0%) and NPV (85.2%). Anterior pillar erythema, tongue tonsil hypertrophy, and posterior commissure hypertrophy resulted in the highest SE (75.5%-83.5%). The highest SP was found for uvula erythema/edema, epiglottis erythema, and interarytenoid granulatory tissue (97.1%-97.2%). The association of nonendoscopic signs (anterior pillar erythema, uvula erythema/edema, and coated tongue) had an SE and SP of 80.1% and 47.2%, respectively. The association of throat clearing, heartburn, globus, anterior pillar erythema, and uvula erythema/edema had the highest SE (98.8%), SP (33.3%), PPV (94.3%), and NPV (70.6%). Conclusion: LPR symptoms and signs reported low SP and NPV. 
The SE, SP, PPV, and NPV may be maximized with the association of throat clearing, heartburn, globus sensation, anterior pillar erythema, and uvula erythema/edema. Answer: In a study of children with multiple erythema migrans, cerebrospinal fluid pleocytosis, which is interpreted as a sign of central nervous system (CNS) involvement, was detected in 25.7% of the cases (55 out of 214 children). The study aimed to establish if there were any pre-treatment symptoms and/or signs suggestive of CNS involvement in these children. The findings indicated that patients with pleocytosis more often reported having had Lyme borreliosis in the past, had longer incubation periods, and exhibited a larger diameter of the largest erythema migrans at the time of the first examination. Additionally, these patients had a higher frequency of associated systemic symptoms, positive meningeal signs, borrelial IgG antibodies in cerebrospinal fluid, and isolation of B. burgdorferi s.l. from cerebrospinal fluid. However, despite these associations, the discriminatory significance for the majority of these abnormalities was low, particularly because of low negative predictive values. In more than two-thirds of patients with pleocytosis, the initial disease was mild, fewer than half reported systemic symptoms, and meningeal signs were expressed in only 11% (PUBMED:12422594). Therefore, while certain clinical and laboratory abnormalities were more common in patients with elevated cell counts in the cerebrospinal fluid, suggesting CNS involvement, these signs and symptoms were not highly predictive. This indicates that CNS involvement in children with multiple erythema migrans may not be easily discernible based on pre-treatment symptoms and signs alone.
Instruction: Do termites avoid carcasses? Abstracts: abstract_id: PUBMED:22558452 Do termites avoid carcasses? Behavioral responses depend on the nature of the carcasses. Background: Undertaking behavior is a significant adaptation to social life in enclosed nests. Workers are known to remove dead colony members from the nest. Such behavior prevents the spread of pathogens that may be detrimental to a colony. To date, little is known about the ethological aspects of how termites deal with carcasses. Methodology And Principal Findings: In this study, we tested the responses to carcasses of four species from different subterranean termite taxa: Coptotermes formosanus Shiraki and Reticulitermes speratus (Kolbe) (lower termites) and Microcerotermes crassus Snyder and Globitermes sulphureus Haviland (higher termites). We also used different types of carcasses (freshly killed, 1-, 3-, and 7-day-old, and oven-killed carcasses) and mutilated nestmates to investigate whether the termites exhibited any behavioral responses that were specific to carcasses in certain conditions. Some behavioral responses were performed specifically on certain types of carcasses or mutilated termites. C. formosanus and R. speratus exhibited the following behaviors: (1) the frequency and time spent in antennating, grooming, and carcass removal of freshly killed, 1-day-old, and oven-killed carcasses were high, but these behaviors decreased as the carcasses aged; (2) the termites repeatedly crawled under the aging carcass piles; and (3) only newly dead termites were consumed as a food source. In contrast, M. crassus and G. sulphureus workers performed relatively few behavioral acts. Our results cast new light on the previous notion that termites are necrophobic in nature. Conclusion: We conclude that the behavioral response towards carcasses depends largely on the nature of the carcasses and the termite species, and the response is more complex than was previously thought. Such behavioral responses are likely associated with the threat posed to the colony by the carcasses and the feeding habits and nesting ecology of a given species. abstract_id: PUBMED:28111901 Cryptic termites avoid predatory ants by eavesdropping on vibrational cues from their footsteps. Eavesdropping has evolved in many predator-prey relationships. Communication signals of social species may be particularly vulnerable to eavesdropping, such as pheromones produced by ants, which are predators of termites. Termites communicate mostly by way of substrate-borne vibrations, which suggests they may be able to eavesdrop, using two possible mechanisms: ant chemicals or ant vibrations. We observed termites foraging within millimetres of ants in the field, suggesting the evolution of specialised detection behaviours. We found the termite Coptotermes acinaciformis detected its major predator, the ant Iridomyrmex purpureus, through thin wood using only vibrational cues from walking, and not chemical signals. Comparison of 16 termite and ant species found that the ants' walking signals were up to 100 times higher than those of the termites. Eavesdropping on passive walking signals explains the predator detection and foraging behaviours in this ancient relationship, which may be applicable to many other predator-prey relationships. abstract_id: PUBMED:30871170 Use of Termites by Farmers as Poultry Feed in Ghana. The aim of the study was to gather information on the use of termites by farmers as feed for indigenous poultry in Ghana and factors affecting its use.
We conducted surveys in four regions in Ghana to collect information, by the administration of questionnaires, on the use of termites as poultry feed, termite species collected, species not used and collection methods. Samples of termite species mentioned were collected and identified to the genus level. Twenty-three percent and 19% of farmers mentioned that termites are always or often used to feed poultry, whereas 11% never use termites. A binomial regression analysis showed that their utilization was affected by region, sex, education, farm size and income. Termites collected belonged to eight genera, the main ones being Macrotermes, Trinervitermes and Odontotermes. Five collection methods are used to obtain termites and involve either breaking mounds or using containers as traps. Collection methods vary with species and region, and the abundance of termite genera varies with season. Farmers identified some species as poisonous to poultry. Termites are important in indigenous poultry production because they are a readily available protein source for local farmers. However, better collection methods need to be developed to aid their optimal use. abstract_id: PUBMED:19710058 Termites eavesdrop to avoid competitors. Competitive exclusion, when a single species dominates resources due to superior competitiveness, is seldom observed in nature. Termites compete for resources with deadly consequences, yet more than one species can be found feeding in the same wooden resource. This is especially surprising when drywood species, with colonies of a few hundred, are found cohabiting with subterranean species, with colonies of millions. Termites communicate vibro-acoustically and, as these signals can travel over long distances, they are vulnerable to eavesdropping. We investigated whether drywood termites could eavesdrop on vibration cues from subterranean species. We show, using choice experiments and recordings, that the drywood termite Cryptotermes secundus can distinguish its own species from the dominant competitor in the environment, the subterranean termite Coptotermes acinaciformis. The drywood termite was attracted to its own vibration cues, but was repelled by those of the subterranean species. This response increased with decreasing wood size, corresponding with both increased risk and strength of the cue. The drywood termites appear to avoid confrontation by eavesdropping on the subterranean termites; these results provide further evidence that vibro-acoustic cues are important for termite sensory perception and communication. abstract_id: PUBMED:31479810 Mercury species in the nests and bodies of soil-feeding termites, Silvestritermes spp. (Termitidae, Syntermitinae), in French Guiana. Mercury pollution is currently a major public health concern, given the adverse effects of mercury on wildlife and humans. Soil plays an essential role in speciation of mercury and its global cycling, while being a habitat for a wide range of terrestrial fauna. Soil fauna, primarily soil-feeding taxa that are in intimate contact with soil pollutants, are key contributors in the cycling of soil mercury and might provide relevant indications about soil pollution. We studied the enrichment of various mercury species in the nests and bodies of soil-feeding termites Silvestritermes spp. in French Guiana. Soil-feeding termites are the only social insects using soil as both shelter and food and are major decomposers of organic matter in neotropical forests. Nests of S.
minutus were depleted in total and mobile mercury compared to nearby soil. In contrast, they were enriched 17 times in methylmercury. The highest concentrations of methylmercury were found in the bodies of both studied termite species, with mean bioconcentration factors of 58 for S. minutus and 179 for S. holmgreni relative to the soil. The assessment of the body distribution of methylmercury in S. minutus showed concentrations of 221 ng g-1 for the guts and even higher for the gut-free carcasses (683 ng g-1), suggesting that methylmercury is not confined to the gut where it was likely produced, but rather stored in various tissues. This enrichment in the most toxic form of Hg in termites may be of concern for termite predators and for higher levels of the food chain, which may be endangered through prey-to-predator transfers and bioaccumulation. Soil-feeding termites appear to be promising candidates as bio-indicators of mercury pollution in soils of neotropical rainforest ecosystems. abstract_id: PUBMED:29157286 Farmers' perception of termites in agriculture production and their indigenous utilization in Northwest Benin. Background: Although termites are considered agricultural pests, they play an important role in maintaining the ecosystem. It is therefore important to investigate farmers' perception of the impacts of termites on agriculture and their indigenous utilization. Methods: A semi-structured questionnaire was used to interview 94 farmers across 10 villages of Atacora department, in the northwestern region of Benin, to obtain information for the development of successful strategies of termite management and conservation. Their perceptions on the importance and management of termites, along with the indigenous nomenclature and utilization of termite mounds, were assessed. Termite species identified by farmers were collected and preserved in 80% alcohol for identification. Results: Eight crops were identified by farmers as susceptible to termites, with maize, sorghum, and yam being the most susceptible. According to farmers, the susceptibility of these crops to termites is due to their high water content and sweet taste. A total of 27 vernacular names of termites were recorded, corresponding to 10 species, with Amitermes evuncifer, Macrotermes subhyalinus, and Trinervitermes oeconomus being the most damaging termite species. All the names given to termite species had a meaning. Drought was identified by farmers as the main factor favouring termite attacks. Demolition of termite mounds in the fields was the most commonly reported control method. Salt and other pesticides were commonly used by farmers to protect stored farm products. The lack of effective control methods is the main constraint for termite management. In northwestern Benin, farmers reported a range of uses for termite mounds and termites. Conclusions: The study has shown that farmers perceive termites as pests of several agricultural crops and apply various indigenous control practices whose efficiency needs to be verified. Utilization of termites and termite mound soil as food and medicinal resources underlines the need for a more focused approach to termite control for the conservation of non-pest termite species. The sensitization of farmers on the importance of termites, as well as the development of an integrated control method to combat termite pests, proved necessary.
Termites evolved eusociality independently from social Hymenoptera. As a common trait, reproductive monopoly is maintained through chemical communication. The queen (and in termites also a king) prevents workers from reproduction by conveying their reproductive status. In termites all soldiers are sterile, but workers' potential to reproduce differs between species. It ranges from totipotency in wood-dwelling lower termites where workers are a transient stage from which all other castes develop, to sterile workers in some higher termites. Intermediate are species in which workers can develop into replacement sexuals within the nest but not into winged sexuals. I summarize the patchy picture about fertility signaling that we currently have for termites, pointing also to potential conflicts over reproduction that differ from those in social Hymenoptera. Recent findings imply that, similar to many social Hymenoptera, wood-dwelling termites that live in confined nests use long-chain cuticular hydrocarbons (CHCs) as fertility signals. Yet other compounds are important as well, comprising proteinaceous secretions and especially volatiles. For a subterranean termite, two volatiles have been identified as primer pheromones that prevent reproductive differentiation of workers. It requires more data to test whether wood-dwelling termites use CHCs, while species with larger colonies and less confined nests use volatiles, or whether all species rely on multicomponent signals. Ultimately, we need more effort to model and test potential conflicts over reproduction between queens, kings and workers. Here results from social Hymenoptera cannot be transferred to termites as the latter are diploid and commonly inbred. This review illustrates promising future research avenues. abstract_id: PUBMED:25788900 A genomic comparison of two termites with different social complexity. The termites evolved eusociality and complex societies before the ants, but have been studied much less. The recent publication of the first two termite genomes provides a unique comparative opportunity, particularly because the sequenced termites represent opposite ends of the social complexity spectrum. Zootermopsis nevadensis has simple colonies with totipotent workers that can develop into all castes (dispersing reproductives, nest-inheriting replacement reproductives, and soldiers). In contrast, the fungus-growing termite Macrotermes natalensis belongs to the higher termites and has very large and complex societies with morphologically distinct castes that are life-time sterile. Here we compare key characteristics of genomic architecture, focusing on genes involved in communication, immune defenses, mating biology and symbiosis that were likely important in termite social evolution. We discuss these in relation to what is known about these genes in the ants and outline hypothesis for further testing. abstract_id: PUBMED:31340595 Diversity of Ants and Termites of the Botanical Garden of the University of Lomé, Togo. Ants and termites are used as bioindicators in many ecosystems. Little knowledge is available about them in Togo, especially ants. This study aimed to find out how ants and termites could be used to assess the restoration of former agricultural land. These insect groups were sampled within six transects of 50 × 2 m2 (using pitfall traps, monoliths, baits for ants and hand sampling for termites) in two consecutive habitats: open area (grassland) and covered area (an artificial forest). 
Seventeen termite species and 43 ant species were collected. Seven ant species were specific to the covered area against four for the open area, while four unshared species of termite were found in the open area against three in the covered area. The presence of unshared species was linked to vegetation, as Trinervitermes (Holmgren, 1912), a grass-feeding termite, was found solely in the open area. Also, for some ant species like Cataulacus traegaordhi (Santschi, 1914), Crematogaster (Lund, 1831) species, Oecophylla longinoda (Latreille, 1802) and Tetraponera mocquerysi (Brown, 1960), all arboreal species, vegetation was a determining factor for their presence. The occurrence of these species, together with Basidentitermes mactus (Sjöstedt, 1911), Strumigenys bernardi (Brown, 1960) and S. sistrura (Bolton, 1983), suggests a more advanced level of restoration of the covered area. abstract_id: PUBMED:23623853 Lignocellulose-degrading enzymes from termites and their symbiotic microbiota. The digestion of lignocellulose (the dry matter of plants, or "plant biomass") is of increasing interest in organismal metabolism research, specifically the conversion of biomass into biofuels. Termites efficiently decompose lignocelluloses, and studies on lignocellulolytic systems may elucidate mechanisms of efficient lignocellulose degradation in termites as well as offer novel enzyme sources, findings which have significant potential industrial applications. Recent progress in metagenomic and metatranscriptomic research has illuminated the diversity of lignocellulolytic enzymes within the termite gut. Here, we review state-of-the-art research on lignocellulose-degrading systems in termites, specifically cellulases, xylanases, and lignin modification enzymes produced by termites and their symbiotic microbiota. We also discuss recent investigations into heterologous overexpression of lignocellulolytic enzymes from termites and their symbionts. Answer: Termites' responses to carcasses vary depending on the nature of the carcasses and the termite species. In a study examining the behavior of four different termite species towards carcasses, it was found that the behavioral response is more complex than previously thought and not all termites are necrophobic (PUBMED:22558452). For instance, Coptotermes formosanus and Reticulitermes speratus (lower termites) showed high frequency and time spent in antennating, grooming, and carcass removal when the carcasses were freshly killed, 1-day-old, or oven-killed, but these behaviors decreased as the carcasses aged. They also consumed newly dead termites as a food source. In contrast, Microcerotermes crassus and Globitermes sulphureus (higher termites) performed relatively few behavioral acts towards carcasses. This suggests that the avoidance of carcasses by termites is not a uniform behavior across species and is influenced by the condition of the carcass and the species' feeding habits and nesting ecology (PUBMED:22558452).
Instruction: Are we eliminating cures with antibiotic abuse? Abstracts: abstract_id: PUBMED:22718162 Are we eliminating cures with antibiotic abuse? A study among dentists. Context: The theme of "World Health Day 2011" is "combat drug resistance - no action today, no cure tomorrow", which is very pertinent. The present study emphatically demonstrates the current issues related to the overwhelming concerns regarding indiscriminate use of antibiotics, leading to a bleak tomorrow where cures may be few. Aim: To determine the prescription pattern of antibiotics for various dental procedures by dental practitioners. Materials And Methods: A pretested questionnaire was used which contained two sections pertaining to prescription of antibiotics for healthy and medically compromised patients during various dental procedures, with therapeutic and prophylactic considerations. Results: A questionnaire response rate of 66.6% was observed. Amoxicillin emerged as the most preferred antibiotic for dental procedures both as a therapeutic and a prophylactic drug. Fifty percent of the endodontists and 40% of the general dentists opted to prescribe antibiotics during root canal therapy where ideally operative intervention would have sufficed. Overuse of antibiotics for routine scaling and extraction was observed. Conclusion: The dental profession as a whole needs to acquire a deeper understanding of the global effects of superfluous antibiotic prescription. Antibiotics, when judiciously used, are precise life-saving drugs. abstract_id: PUBMED:22843835 The Lourdes medical cures revisited. This article examines the cures recorded in Lourdes, France, between 1858, the year of the Visions, and 1976, the date of the last certified cure of the twentieth century. Initially, the records of cures were crude or nonexistent, and allegations of cures were accepted without question. A Medical Bureau was established in 1883 to examine and certify the cures, and the medical methodology improved steadily in the subsequent years. We discuss the clinical criteria of the cures and the reliability of medical records. Some 1,200 cures were said to have been observed between 1858 and 1889, and about one hundred more each year during the "Golden Age" of Lourdes, 1890-1914. We studied 411 patients cured in 1909-14 and thoroughly reviewed the twenty-five cures acknowledged between 1947 and 1976. No cure has been certified from 1976 through 2006. The Lourdes phenomenon, extraordinary in many respects, still awaits scientific explanation. Lourdes concerns science as well as religion. abstract_id: PUBMED:31195950 Antibiotic Abuse Induced Histopathological and Neurobehavioral Disorders in Mice. Introduction: Antibiotic abuse is a common phenomenon in Egypt as medications are prescribed without supervision. It is suggested that the excess use of antibiotics modifies the gut microbiota and plays a role in the development of neurological and psychiatric disorders. Objective: The aim of the present study was to use BALB/c mice as models for curam (amoxicillin/clavulanic acid) abuse, compared with the locally acting neomycin model, and then to restore the probiotic balance and examine the possible effects on the animals' brains. Methods: The results showed early excitable brains, demonstrated by S100b immunohistochemistry in both the cortices and hippocampi of neomycin-treated mice. PAS staining showed no changes suggestive of neurodegeneration.
Treatment with probiotics partially improved the S100b immunohistochemistry profile of the curam group but failed to overcome the neuroinflammatory reaction detected by hematoxylin and eosin staining. Curam was presumed to be responsible for the systemic effects. Results: The neurobehavioral tests showed delayed impairment in the open field test for the curam group and impaired new object recognition for the neomycin group. These tests were assessed by video recording. The neurobehavioral decline developed 14 days after the end of the 3-week antibiotic course. Unfortunately, curam abuse induced animal fatalities. Conclusion: Antibiotic abuse has a neurotoxic effect that works by both local and more prominent systemic mechanisms. It can be said that antibiotic abuse is a cofactor behind the rise of neuropsychiatric diseases in Egypt. abstract_id: PUBMED:31803645 Antibiotic abuse during endodontic treatment: A contributing factor to antibiotic resistance. Antibiotic resistance is one of our most serious global health threats. The adverse effects of overusing and misusing antibiotics are highly publicized in the health professional literature. Antibiotic abuse occurs during routine endodontic treatment, and there are deficiencies in knowledge regarding antibiotic prescribing and appropriate prophylactic antibiotic use. Multidisciplinary coordination and cooperation among dentists, pharmacists, and patients are needed to curb antibiotic abuse. As endodontists, we can become part of the solution to the antibiotic resistance crisis and deal with it conclusively. This review article discusses antibiotic resistance resulting from antibiotic abuse during endodontic treatment, various factors contributing to it, and measures required for stopping antibiotic abuse in endodontic treatment. A web-based search of MEDLINE was performed for review articles published in English-language dental journals over the last 10 years in order to retrieve, extract, and synthesize data. Relevant articles were shortlisted. Important cross-reference articles were also reviewed. abstract_id: PUBMED:33865680 The 21st Century Cures Act and Challenges to Adolescent Confidentiality. Confidentiality is an important part of adolescent health care, providing a safe arena for young people to address sensitive health concerns and develop independent relationships with their providers. State and federal laws support a range of adolescent confidentiality protections. However, the full implementation of the 21st Century Cures Act, with the release of all medical records to patients and caregivers, may endanger this expectation of privacy. This policy brief reviews implications of the open notes requirement of the Cures Act, suggests strategies to improve care for adolescent patients, and recommends advocacy to improve the 2020 Final Rule implementation. abstract_id: PUBMED:38147904 Immediate Access to Radiology Reports: Perspectives on X Before and After the Cures Act Information Blocking Provision. Objective: The 21st Century Cures Act's information blocking provision mandates that patients have immediate access to their electronic health information, including radiology reports. We evaluated public opinions surrounding this policy on X, a microblogging platform with over 400 million users. Methods: We retrieved 27,522 posts related to radiology reports from October 5, 2020, through October 4, 2021. One reviewer performed initial screening for relevant posts.
Two reviewers categorized user type and post theme(s) using a predefined coding system. Posts were grouped as "pre-Cures" (6 months before information blocking) and "post-Cures" (6 months after). Descriptive statistics and χ2 tests were performed. Results: Among 1,155 final posts, 1,028 unique users were identified (64% patients, 11% non-radiologist physicians, 4% radiologists). X activity increased, with 40% (n = 462) pre-Cures and 60% (n = 693) post-Cures. Early result notification before referring providers was the only theme that significantly increased post-Cures (+3%, P = .001). Common negative themes were frustration (33%), anxiety (27%), and delay (20%). Common positive themes were gratitude for radiologists (52%) and autonomy (21%). Of posts expressing opinions on early access, 84% favored and 16% opposed it, with decreased preference between study periods (P = .006). More patients than physicians preferred early access (92% versus 40%, P &lt; .0001). Discussion: X activity increased after the information blocking provision, partly due to conversation about early notification. Despite negative experiences with reports, most users preferred early access. Although the Cures Act is a positive step toward open access, work remains to improve patients' engagement with their radiology results. abstract_id: PUBMED:25482146 Financing cures in the United States. True cures in health care are rare but likely not for long. The high price tag that accompanies a cure along with its rapid uptake create challenges in the financing of cures by public and private payers. In the US, the disaggregated nature of health insurance system adds to this challenge as patients frequently churn across multiple health plans. This creates a 'free-rider' problem, where no one health plan has the incentive to invest in cure since the returns will be scattered over many health plans. Here, a new health currency is proposed as a generalized version of a social impact bond that has the potential to solve this free-rider problem, as it can be traded not only between public and private payers but also within the private sector. An ensuing debate as to whether and how to develop such a currency can serve the US health care system well. abstract_id: PUBMED:38252857 Impact of the 21st Century Cures Act on Patients and the Healthcare Team. The 21st Century Cures Act is designed to make healthcare information more accessible and transparent to patients and the healthcare team. The provisions to address information blocking dictate that health information such as. abstract_id: PUBMED:38099899 Provider Perspectives on Adolescent Confidentiality and the Electronic Health Record Postimplementation of the 21st Century Cures Act Final Rule. Purpose: The 21st Century Cures Act Final Rule mandates increased transparency and accessibility of the Electronic Health Record (EHR). This mandate has exacerbated tensions in adolescent healthcare between transparency and confidentiality. This study evaluates clinicians' perspectives on how well the EHR maintains confidentiality for adolescents in the post-Cures Act era. Methods: A web-based survey was distributed via the national listserv of the Society for Adolescent Health and Medicine in March-April 2022. English-speaking members practicing in the Unites States who used an EHR to document clinical visits with minors were eligible. The survey included questions about EHR training, features, and approach to information sharing with adolescent patients and proxies. 
Results: Participants included 97 respondents from 32 states. Most participants were physicians (98%) and identified themselves as attending (89%). Several EHR vendors were represented, although the majority used Epic (76%). As has been seen in prior studies, there remains significant variability in approaches to patient portal access for adolescent patients and their adult proxies. Respondents report that training around adolescent-specific privacy issues remains infrequent and is perceived as inadequate. Adolescent providers, despite feeling confident in navigating the EHR generally, continue to report low rates of confidence in how well their EHR may protect adolescent privacy. Discussion: Clinicians have persistent concerns about adolescent confidentiality postimplementation of the 21st Century Cures Act. Sharing sufficient information while protecting adolescent confidentiality might require standardization by EHR vendors to improve granularity of proxy information sharing. Healthcare institutions must also commit to training providers on management of adolescent confidentiality. abstract_id: PUBMED:37822951 Prevalence of antibiotic misuse in cases of pneumonia and diarrhea in Saudi Arabia. Background: Antibiotic misuse is a major public health issue with long-term repercussions. Objective: The purpose of this investigation was to evaluate the effects of pneumonia and diarrhea, with an emphasis on antibiotic misuse. Methodology: This study included 410 participants (217 fathers and 193 mothers), of whom 239 purchased antibiotics for their children without a prescription, whereas 171 had a prescription or were unsure if one was required. Results: Antibiotics were used incorrectly by 58.1% of respondents. About 51.2% of participants said they were taking two antibiotics at the same time. Around 30% of people admitted to using antibiotics inefficiently. The most prevalent reason for use was "viral and bacterial," followed by "viral," and then "bacterial," with 35%, 21%, and 20%, respectively. In addition, 22.4% of patients have used antibiotics for an unknown reason. Conclusion: Saudi parents of children with pneumonia and diarrhea abuse antibiotics. Saudi legislation banning medications without a prescription has helped reduce antibiotic abuse, but more community-based education and awareness are needed. Answer: The concern that antibiotic abuse may be eliminating cures is supported by several studies. A study among dentists revealed that there is an overuse of antibiotics for routine dental procedures, such as root canal therapy and extractions, where operative intervention alone might suffice (PUBMED:22718162). This indiscriminate use of antibiotics can contribute to the development of antibiotic resistance, which is a serious global health threat. The study suggests that the dental profession needs to deepen its understanding of the global effects of superfluous antibiotic prescription, as antibiotics are life-saving drugs when used judiciously. Further research has shown that antibiotic abuse can lead to histopathological and neurobehavioral disorders in mice, indicating that antibiotic abuse may have neurotoxic effects that contribute to the rise of neuropsychiatric diseases (PUBMED:31195950). This suggests that the consequences of antibiotic abuse extend beyond the immediate context of bacterial infections and may have broader implications for health. 
Additionally, antibiotic abuse during endodontic treatment has been identified as a contributing factor to antibiotic resistance (PUBMED:31803645). The review article emphasizes the need for multidisciplinary coordination and cooperation among dentists, pharmacists, and patients to curb antibiotic abuse and become part of the solution to the antibiotic resistance crisis. In conclusion, the evidence suggests that antibiotic abuse is a significant problem that can undermine the effectiveness of these drugs, potentially leading to a future where cures for bacterial infections are fewer due to the development of antibiotic-resistant strains. It is imperative to address this issue through better education, prescribing practices, and global awareness to preserve the curative powers of antibiotics.
Instruction: Perinatal management of congenital diaphragmatic hernia: when and how should babies be delivered? Abstracts: abstract_id: PUBMED:37916008 Perinatal Outcome in Congenital Diaphragmatic Hernia (CDH): A Single-Center Experience. Objective: To study the perinatal outcome in fetuses diagnosed with congenital diaphragmatic hernia (CDH). Methods: Thirty-two pregnant women with an antenatal diagnosis of fetal CDH, who delivered between 2018 and 2021, were included in the study. Postnatally, eventration of the diaphragm was diagnosed in 3 neonates, and these were excluded. Results: The median gestational age at diagnosis was 23 weeks (IQR: 21+6 to 26+1 weeks). The mean O/E LHR was 34.88 ± 9.03%, and the O/E LHR was significantly lower in fetuses who did not survive (40.81 ± 4.25 vs 31.26 ± 9.33; p = 0.0037). On ROC analysis, at a cutoff of ≤ 32.93, O/E LHR had a specificity of 100% with a sensitivity of 72.22% in predicting mortality. The frequency of liver herniation did not differ significantly between survivors and non-survivors. The overall survival rate was 37.93%, and the leading cause of death was severe persistent pulmonary hypertension. Conclusion: O/E LHR can predict mortality in neonates with an antenatal diagnosis of CDH. The presence of pulmonary hypertension was the leading cause of death in these neonates. abstract_id: PUBMED:21616231 Impact of prenatal evaluation and protocol-based perinatal management on congenital diaphragmatic hernia outcomes. Background/purpose: Although intuitive, the benefit of prenatal evaluation and multidisciplinary perinatal management for fetuses with congenital diaphragmatic hernia (CDH) is unproven. We compared the outcome of prenatally diagnosed patients with CDH whose perinatal management was by a predefined protocol with those who were diagnosed postnatally and managed by the same team. We hypothesized that patients with CDH undergoing prenatal evaluation with perinatal planning would demonstrate improved outcome. Methods: Retrospective chart review of all patients with Bochdalek-type CDH at a single institution between 2004 and 2009 was performed. Patients were stratified by history of perinatal management, and data were analyzed by Fisher's Exact test and Student's t test. Results: Of 116 patients, 71 fetuses presented in the prenatal period and delivered at our facility (PRE), whereas 45 infants were either outborn or postnatally diagnosed (POST). There were more high-risk patients in the PRE group compared with the POST group as indicated by higher rates of liver herniation (63% vs 36%, P = .03), need for patch repair (57% vs 27%, P = .004), and extracorporeal membrane oxygenation use (35% vs 18%, P = .05). Despite differences in risk, there was no difference in 6-month survival between groups (73% vs 73%). Conclusions: Patients with CDH diagnosed prenatally are a higher risk group. Prenatal evaluation and multidisciplinary perinatal management allow for improved outcome in these patients. abstract_id: PUBMED:9526738 Current status of antenatal diagnosis & perinatal medical network. Antenatal diagnosis has undergone an explosion of growth in the last decade. This has revolutionized the practice of fetal medicine. Ideally, the antenatal diagnosis of congenital anomalies should improve antenatal counseling; it may also affect the timing, site, and method of delivery and potentially allow for intrauterine surgical correction.
Paradoxically, however, worse outcomes have been observed in cases of some anomalies such as diaphragmatic hernia and omphalocele. Most of them combined premature birth and/or low birth weight with associated lethal anomalies: the combination almost inevitably led to mortality. In order to reduce the mortality, new approaches will be required for such anomalies in the future. A perinatal medical network will also be important to promote a closer association between perinatal medicine and perinatal surgery. abstract_id: PUBMED:30730631 Report review. Perinatal mortality and morbidity: A retrospective. The UK stillbirth and neonatal mortality rate has fallen by a fifth in the last decade; however, stillbirth rates continue to be among the highest of high-income countries. The Government wants to halve the rate of stillbirths and neonatal deaths in England by 2030, with a 20 per cent reduction by 2020 (O'Connor 2016). MBRRACE-UK runs the Maternal, Newborn and Infant Clinical Outcome Review Programme (MNI-CORP). Maternity units should notify all maternal deaths, stillbirths, perinatal deaths and infant deaths to MBRRACE-UK, who analyse the data and publish surveillance reports. Here, findings from the surveillance reports about congenital diaphragmatic hernia, perinatal deaths and stillbirths are reviewed, and recommendations made for practice. abstract_id: PUBMED:8092953 Congenital diaphragmatic hernia treated by perinatal stabilization. The delay of surgery in conjunction with high-frequency oscillation ventilation combined with preoperative stabilization is a standard treatment for congenital diaphragmatic hernia neonates in the high-risk group. A new protocol of perinatal stabilization with this preoperative stabilization has been developed at the University of Tokyo Hospital. The most important components of this method of perinatal stabilization are cesarean section and the injection of morphine and pancuronium through the umbilical vein prior to ligation of the umbilical cord. In order to benefit from this treatment, congenital diaphragmatic hernia patients should be diagnosed antenatally by ultrasonography and be delivered by cesarean section. This protocol of perinatal stabilization appears effective in congenital diaphragmatic hernia patients. abstract_id: PUBMED:21129540 Perinatal management of congenital diaphragmatic hernia: when and how should babies be delivered? Results from the Canadian Pediatric Surgery Network. Purpose: A prenatal diagnosis of congenital diaphragmatic hernia (CDH) enables therapeutic decision making during the intrapartum period. This study seeks to identify the gestational age and delivery mode associated with optimal outcomes. Patients And Methods: A national data set was used to study CDH babies born between 2005 and 2009. The primary outcome was survival to discharge. Primary and secondary outcomes were analyzed by categorical gestational age (preterm, <37 weeks; early term, 37-38 weeks; late term, >39 weeks), by intended and actual route of delivery, and by birth plan conformity, regardless of route. Results: Of 214 live-born babies (gestational age, 37.6 ± 4.0 weeks; birth weight, 3064 ± 696 g), 143 (66.8%) had a prenatal diagnosis and 174 (81.3%) survived to discharge. Among 143 prenatally diagnosed pregnancies, 122 (85.3%) underwent vaginal delivery (VD) and 21 (14.6%) underwent cesarean delivery (CS). Conformity between intended and actual delivery occurred in 119 (83.2%). Neither categorical gestational age nor delivery route influenced outcome.
Although babies delivered by planned CS had a lower mortality than those delivered by planned VD (2/21 and 36/122, respectively; P = .04), this difference was not significant by multivariate analysis. Conformity to any birth plan was associated with a trend toward improved survival. Conclusion: Our data do not support advocacy of any specific delivery plan or route, nor an optimal gestational age, for prenatally diagnosed CDH. abstract_id: PUBMED:24749262 Experience of extracorporeal membrane oxygenation in critical care of respiratory failure in newborns with congenital diaphragmatic hernia in a perinatal center. The article describes experience with the use of veno-arterial extracorporeal membrane oxygenation in preterm infants with congenital diaphragmatic hernia during the postoperative period in a perinatal center. abstract_id: PUBMED:33622390 Study protocol: a core outcome set for perinatal interventions for congenital diaphragmatic hernia. Background: Congenital diaphragmatic hernia (CDH) is, depending on its severity, a birth defect associated with significant mortality and morbidity. Prenatal screening by ultrasound may detect this condition, and comprehensive assessment of severity is possible, allowing for in utero referral to an experienced centre for planned delivery. In an effort to improve outcomes, prenatal interventions to stimulate lung development have been proposed. Along the same lines, new postnatal management strategies are being developed. In order to enable proper comparison of novel perinatal interventions as well as outcomes, a set of uniform and relevant outcome measures is required. Core outcome sets (COS) are agreed, clearly defined sets of outcomes to be measured in a standardised manner and reported consistently. Herein we aim to describe the methodology we will use to define a COS for perinatal and neonatal outcomes of foetuses and newborns with congenital diaphragmatic hernia and to draft a dissemination and implementation plan. Methods: We will use the methodology described in the Core Outcome Measures in Effectiveness Trials (COMET) Initiative Handbook. An international steering group will be created to guide the development of the COS. We are systematically reviewing the literature to identify all potentially relevant prenatal and neonatal outcomes previously used in studies on perinatal interventions for CDH. We will build a consensus on these core outcomes in a stakeholder group using the Delphi method. After completion, a stakeholder meeting will decide on a final COS, using a modified Nominal Group Technique. Thereafter, we will review potential definitions and measurements of these outcomes, and again a consensus meeting will be organised to finalise the COS before dissemination. Discussion: We have started a procedure to develop a COS for studies on perinatal interventions for congenital diaphragmatic hernia, with the purpose of improving the quality of research, guiding clinical practice and improving patient care, and for eventual use in future clinical trials, systematic reviews and clinical practice guidelines. Trial Registration: We prospectively registered this study in the International Prospective Register of Systematic Reviews (PROSPERO) (registration number: CRD42019124399) and The Core Outcome Measures in Effectiveness Trials (COMET) Initiative (registration number: 1296).
Objective: The aims of this study were to review fetal and maternal outcomes after management of the compromised perinatal airway via operation on placental support or ex utero intrapartum treatment and to discuss implications for future management of these complex and rare cases. Methods: We have presented a retrospective case series of 12 neonates requiring airway management on placental support at a single tertiary care, academic center. Results: One mother experienced significant blood loss. Operative recovery times were unremarkable. Eight neonates required airway management due to mass obstruction, two for removal of an endotracheal balloon for fetoscopic treatment of congenital diaphragmatic hernia, one for laryngeal atresia, and one for severe retrognathia. One of our series is an unusual case of management on placental support after vaginal delivery. Another child would have ideally been managed on placental support, but an extremely short umbilical cord prevented this. Even though the airway was secured in all 12 cases, five neonates died in the perinatal period. Conclusions: These procedures have a risk for substantial maternal blood loss. Despite excellent rates of success securing the neonatal airway, children who require management on placental support still have high mortality. A formalized multidisciplinary approach at our institution has enhanced preparedness for these cases. abstract_id: PUBMED:37099763 Core outcome set for perinatal interventions for congenital diaphragmatic hernia. Objective: To develop a core set of prenatal and neonatal outcomes for clinical studies evaluating perinatal interventions for congenital diaphragmatic hernia, using a validated consensus-building method. Methods: An international steering group comprising 13 leading maternal-fetal medicine specialists, neonatologists, pediatric surgeons, patient representatives, researchers and methodologists guided the development of this core outcome set. Potential outcomes were collected through a systematic review of the literature and entered into a two-round online Delphi survey. A call was made for stakeholders with experience of congenital diaphragmatic hernia to review the list and score outcomes based on their perceived relevance. Outcomes that fulfilled the consensus criteria defined a priori were discussed subsequently in online breakout meetings. Results were reviewed in a consensus meeting, during which the core outcome set was defined. Finally, the definitions, measurement methods and aspirational outcomes were defined in online and in-person definition meetings by a selection of 45 stakeholders. Results: Overall, 221 stakeholders participated in the Delphi survey and 198 completed both rounds. Fifty outcomes met the consensus criteria and were discussed and rescored by 78 stakeholders in the breakout meetings. During the consensus meeting, 93 stakeholders agreed eventually on eight outcomes, which constituted the core outcome set. Maternal and obstetric outcomes included maternal morbidity related to the intervention and gestational age at delivery. Fetal outcomes included intrauterine demise, interval between intervention and delivery and change in lung size in utero around the time of the intervention. Neonatal outcomes included neonatal mortality, pulmonary hypertension and use of extracorporeal membrane oxygenation. 
Definitions and measurement methods were formulated by 45 stakeholders, who also added three aspirational outcomes: duration of invasive ventilation, duration of oxygen supplementation and use of pulmonary vasodilators at discharge. Conclusions: We developed with relevant stakeholders a core outcome set for studies evaluating perinatal interventions in congenital diaphragmatic hernia. Its implementation should facilitate the comparison and combination of trial results, enabling future research to better guide clinical practice. Answer: Perinatal management of congenital diaphragmatic hernia (CDH) is a complex issue that involves decisions about the timing and mode of delivery to optimize outcomes for the baby. A study by the Canadian Pediatric Surgery Network sought to identify the gestational age and delivery mode associated with optimal outcomes for babies with CDH. The study found that neither categorical gestational age nor delivery route influenced the outcome significantly. Although babies delivered by planned cesarean section (CS) had a lower mortality than those delivered by planned vaginal delivery (VD), this difference was not significant by multivariate analysis. Conformity to any birth plan was associated with a trend toward improved survival, suggesting that adherence to a planned delivery approach may be beneficial (PUBMED:21129540). Another study emphasized the importance of prenatal evaluation and multidisciplinary perinatal management for fetuses with CDH. It compared the outcomes of prenatally diagnosed patients with those diagnosed postnatally and managed by the same team. The study found that patients with CDH diagnosed prenatally are a higher risk group, but prenatal evaluation and multidisciplinary perinatal management allowed for improved outcomes in these patients (PUBMED:21616231). A protocol of perinatal stabilization, which includes cesarean section and the administration of medications through the umbilical vein prior to ligation of the umbilical cord, has been developed and appears effective in CDH patients (PUBMED:8092953). Additionally, the use of veno-arterial extracorporeal membrane oxygenation in preterm infants with CDH during the postoperative period in a perinatal center has been reported (PUBMED:24749262). In summary, while there is no clear consensus on the optimal gestational age or delivery mode for babies with CDH, adherence to a planned delivery approach and multidisciplinary perinatal management seem to be key factors in improving outcomes. Prenatal evaluation and the development of a perinatal management protocol, including the potential use of extracorporeal membrane oxygenation, are important considerations in the care of these patients.
Instruction: Is frenuloplasty worthwhile? Abstracts: abstract_id: PUBMED:26623287 Ankyloglossia and its surgical correction by frenuloplasty in a she-camel calf (Camelus dromedarius). A rare case of ankyloglossia in a one-humped camel and its successful surgical correction by horizontal to vertical frenuloplasty is reported. A seven-month-old she-camel calf with a history of excessive salivation and inability to chew and swallow hay was referred to Al Qattara veterinary hospital. Solid food was regurgitated, whereas milk and water could be swallowed. On examination, the animal could not protrude its tongue, which was attached by a thin tissue band between the ventral surface of the tongue and the floor of the oral cavity. The tissue band was corrected by frenuloplasty and the incisions were sutured. The animal recovered well, as the salivation and regurgitation stopped and the animal became able to chew and swallow solid food, including hay. It was concluded that ankyloglossia can occur in the one-humped camel, in which a horizontal to vertical frenuloplasty may restore full tongue movement. abstract_id: PUBMED:17059723 Is frenuloplasty worthwhile? A 12-year experience. Introduction: Frenuloplasty is commonly performed. Its outcome has never been reported. Patients And Methods: We have performed 213 frenuloplasties since 1992. Patients were sent a questionnaire regarding indication, treatment advised on presentation, anaesthetic, outcome (linear analogue satisfaction score), and further intervention required. Results: Overall, 48 of 209 (23%) patients replied (median age of series, 27 years; range, 16-78 years; and of respondents, 27 years; range, 17-78 years). Indications included tearing/bleeding of the foreskin (n = 32), balanitis (n = 3), pain on intercourse (n = 26), phimosis (n = 4), ejaculatory (n = 1) and hygiene (n = 1) problems. Twenty-five patients received general anaesthesia. Median satisfaction score was 8 (range, 1-10). Thirty-four patients would recommend frenuloplasty to another with similar symptoms. Three of nine patients initially advised to undergo circumcision later underwent circumcision (frenuloplasty satisfaction score, 1-3), 1 awaits circumcision (score, 5), 1 has a tight foreskin (score, 3), and 4 are satisfied (scores 6, 9, 10, 10). In total, 23 of 209 patients (11%; median age, 27 years) underwent circumcision following frenuloplasty. Median time to circumcision was 11 months (range, 5-52 months). Conclusions: This is the only reported series of frenuloplasty. The response rate reflects a young, mobile population. Most men are satisfied with the outcome, including some initially advised to undergo circumcision. The overall rate of circumcision following frenuloplasty is likely to be 15-20%. abstract_id: PUBMED:36412284 Penile skin length can be predicted before frenuloplasty during routine circumcision. Background: One of the restricting factors for surgical treatment of congenital anatomical pathologies of the penis is the skin structure. Thanks to its structure, the penis is a flexible organ, and the treatment can be completed without the need for a free tissue graft. Length changes are obtained on the ventral side of the penis by frenuloplasty. Aims: Our study aims to attain an objective formula that could noninvasively predict these changes on the ventral side before the procedure.
Patients And Methods: In our study, 52 patients who were admitted for routine religious circumcision were included, and penile ventral and dorsal skin and inner mucosa lengths were measured before and after frenuloplasty, which is a part of the normal routine circumcision procedure and performed via the "pull and burn" method. Stitches were placed to prevent scar formation in the frenuloplasty area. The results were used to estimate the length changes to be obtained on the ventral side by performing a regression analysis of the patient's weight, height, ventral and dorsal inner mucosa, and outer skin lengths. Results: Following frenuloplasty, an increase was detected in the ventral surface ranging from 20% to 177%. This increase was found to be predictable with 62.5% precision (P < 0.01) using the formulation established by assessing the ventral inner mucosa length from pre-procedure values. No significant result was determined in the regression analysis of the patients' other measurements (P > 0.05). Conclusion: Penile length changes that would occur following frenuloplasty can be calculated using a noninvasive method. Frenuloplasty, which would be performed without any vascular change, could be helpful in the surgical planning of some penile surgeries. More studies should be conducted on older boys and on anatomic deviations, and, moreover, to establish a reliable formulation. abstract_id: PUBMED:35068848 Lingual Frenuloplasty for Ankyloglossia in Children: A Case Series. The tongue is an important oral structure that affects speech, position of teeth, periodontal tissue, nutrition, swallowing, nursing, and certain social activities. Tongue-tie or ankyloglossia is a developmental anomaly of the tongue, characterized by an abnormally short, thick lingual frenulum resulting in limited tongue movement. Diagnosis must include functional assessment of tongue mobility, in addition to the physical appearance of the frenulum. Tongue mobility and appearance associated with the insertion, as well as the attachment and the shortness of the lingual frenulum, should be evaluated. Ankyloglossia management should be considered at any age, weighing the risks and benefits and bearing in mind the high vascularization and mobility of the tongue. Lingual frenuloplasty for the management of ankyloglossia in children is discussed here in the form of a case series. abstract_id: PUBMED:37351399 Management of ankyloglossia by functional frenuloplasty using diode laser. Ankyloglossia or tongue-tie is a condition present since birth that results in restricted movement of the tongue due to the attachment of the lingual frenulum. The condition affects breastfeeding, speaking, swallowing, occlusion, and proper tongue posture. Tongue ties vary in degree of severity from mild cases of mucous membrane bands to complete tongue ties where the tongue adheres to the floor of the mouth. Treatment options such as speech therapy, frenotomy, and frenectomy have all been suggested in the literature. Surgical correction often causes bleeding, risk of infection, swelling, and relapse. A systemically healthy 23-year-old male patient who had experienced difficulty in speech since childhood was referred for treatment to the dental clinic in September 2020. He was diagnosed with a Kotlow class III tongue-tie and an Angle class 3 malocclusion. Orthodontic correction and functional frenuloplasty using a diode laser were carried out. In conjunction with this, orofacial myofunctional therapy was advised. Follow-up: The surgical procedure was uneventful.
The patient was reviewed post-operatively at regular intervals every 3 months to check for relapse. This paper elaborates on the newer modes of diagnosis, orofacial myofunctional therapy, and lingual frenuloplasty with diode laser. This technique will help to overcome all the challenges of conventional tongue-tie treatments providing long-term excellent results. abstract_id: PUBMED:31637291 Lingual frenuloplasty with myofunctional therapy: Exploring safety and efficacy in 348 cases. Background: Ankyloglossia is a condition of altered tongue mobility due to the presence of restrictive tissue between the undersurface of the tongue and the floor of mouth. Potential implications of restricted tongue mobility (such as mouth breathing, snoring, dental clenching, and myofascial tension) remain underappreciated due to limited peer-reviewed evidence. Here, we explore the safety and efficacy of lingual frenuloplasty and myofunctional therapy for the treatment of these conditions in a large and diverse cohort of patients with restricted tongue mobility. Methods: Four hundred twenty consecutive patients (ages 29 months to 79 years) treated with myofunctional therapy and lingual frenuloplasty for indications of mouth breathing, snoring, dental clenching, and/or myofascial tension were surveyed. All procedures were performed by a single surgeon using a scissors and suture technique. Safety and efficacy was assessed &gt;2 months postoperatively by means of patient-reported outcome measures. Results: In all, 348 surveys (83% response rate) were completed showing 91% satisfaction rate and 87% rate of improvement in quality of life through amelioration of mouth breathing (78.4%), snoring (72.9%), clenching (91.0%), and/or myofascial tension (77.5%). Minor complications occurred in &lt;5% of cases including complaints of prolonged pain or bleeding, temporary numbness of the tongue-tip, salivary gland issues, minor wound infection or inflammation, and need for revision to excise scar tissue. There were no major complications. Conclusion: Lingual frenuloplasty with myofunctional therapy is safe and potentially effective for the treatment of mouth breathing, snoring, clenching, and myofascial tension in appropriately selected patient candidates. Further studies with objective measures are merited. Level Of Evidence: 3. abstract_id: PUBMED:35474165 Is frenuloplasty a risk factor for post-circumcision meatal stenosis? Introduction: Meatal stenosis (MS) is a common complication of circumcision. In this study, we aimed to examine the role of frenuloplasty in the development of MS in school-age boys undergoing circumcision. Materials And Methods: Medical data of 502 patients who were circumcised surgically in the hospital setting aged ≥6 years between July 2019 and July 2020 were retrospectively analyzed. The patients were divided into two groups based on the appearance of the frenular region. The patients whose integrity of the frenular region was protected during circumcision were classified as Group 1 (n = 262), and the patients whose frenulum was absent for some reason were classified as Group 2 (n = 240). Both groups were evaluated based on the development of complications following frenuloplasty after circumcision. Results: The median age was 11 (range: 6 to 17) years. The median circumcision age was seven (range: 6 to 12) years, and the median time since circumcision was four (range: 1 to 11) years. 
The main complaints of the patients were gastrointestinal (n = 254, 51%), respiratory (n = 148, 29%), and urinary system-related pathologies (n = 100, 20%). Meatal stenosis was the most common complication of circumcision, with an overall incidence of 4%, which was significantly higher in Group 2 (7% vs. 2%, respectively; p = 0.006). Conclusion: Our results suggested that MS is a common complication of circumcision. Performing frenuloplasty during circumcision might increase the risk of developing MS in school-age boys. abstract_id: PUBMED:26729995 Modified Tubularized Incised Plate Urethroplasty Repair: Frenuloplasty and Long-Term Results in 155 Patients. To describe a modification of tubularized incised plate (TIP) urethroplasty which we refer to as "frenuloplasty". We retrospectively reviewed 155 children who underwent TIP urethroplasty between June 2008 and August 2011 in our institution. In our technique, the circumcision incision was carried through the mucocutaneous junction obliquely in order to form frenular wings, instead of a linear circumscribing incision joining the urethral plate vertically. The little triangle-shaped skin flaps between the oblique mucocutaneous incisions and the urethral plate incisions were excised. The glans wings and frenular wings were re-approximated without tension after urethroplasty. The mean age of the patients was 4.63 ± 3.82 years. The mean follow-up was 15.94 ± 5.46 months. Location of hypospadias was distal penile in 126 patients (81.3%) and mid-shaft in 29 (18.7%). The following complications occurred in 21 patients (13.5%): urethrocutaneous fistula formation in 7 (4.5%), meatal stenosis in 14 (9%), and no dehiscence. We suppose that there is no incompletely formed prepuce but rather a ventral fusion defect in the midline. Frenuloplasty reduces the necessity of ventral preputial flaps and provides satisfactory cosmetic outcomes with the appearance of a normal circumcised penis. abstract_id: PUBMED:32540779 Comparison of simple frenotomy with 4-flap Z-frenuloplasty in treatment for ankyloglossia with articulation difficulty: A prospective randomized study. Objective: To compare the surgical outcomes of simple frenotomy and the 4-flap Z-frenuloplasty according to the articulation test values and tongue-tie classification in patients with ankyloglossia with articulation difficulty. Study Design: Prospective randomized study. Setting: Tertiary academic center. Subjects And Methods: Children with ankyloglossia with articulation difficulty were randomly divided into 2 groups for surgical treatment. Patients were evaluated for the tongue-tie classification and articulation test before surgery. Three months after the operation, the frenulum classification and articulation test were re-evaluated to compare the differences in surgical outcome between the two surgical methods. Results: Out of 37 patients, 19 underwent the 4-flap Z-frenuloplasty and 18, the simple frenotomy. No differences were observed in the baseline characteristics of the patients assigned to the two groups. Changes in the tongue-tie classification and improvement in the articulation test results were observed with both surgical methods. Both surgical groups had significant improvement in the speech articulation test (consonants), but there was no difference in the speech outcomes between the surgical groups.
Conclusion: Although there was no significant difference in the surgical outcome between the two surgical methods, ankyloglossia patients showed improvement in a Korean speech articulation test 3 months after undergoing surgery to release the lingual frenulum. abstract_id: PUBMED:15900148 Improved ankyloglossia correction with four-flap Z-frenuloplasty. Ankyloglossia, or tongue-tie, is the result of a short, tight, lingual frenulum causing tethering of the tongue tip. Although most cases resolve or are asymptomatic, some patients develop articulation problems and other concerns related to poor tongue-tip mobility. In this study, we evaluated the treatment of patients greater than 3 years of age with persistent articulation problems related to ankyloglossia (n = 16). All patients underwent surgical correction either via a novel application of the 4-flap Z-frenuloplasty (n = 11) or via the more traditional horizontal-to-vertical frenuloplasty (n = 5). Frenulum length, tongue protrusion length, and speech evaluations (4-grade scaling) were documented preoperatively and at follow-up. For the 4-flap Z-frenuloplasty, 91% of patients showed at least 2 orders of improvement in speech, 64% showed complete resolution of articulation errors, and 9% demonstrated no improvement in speech. Frenulum length and tongue protrusion gained 37.5 +/- 13.5 mm (P &lt; 0.001) (315% increase) and 36.2 +/- 7.6 mm (P &lt; 0.001), respectively. In contrast, with the horizontal-to-vertical frenuloplasty, only 40% showed 1 order of improvement in speech, and 60% had no change in articulation. Frenulum length and tongue protrusion gained 11.2 +/- 4.15 mm (98% increase) and 13.2 +/- 2.6 mm (P = 0.0003), respectively. Our data indicated that the 4-flap Z-frenuloplasty was superior to the horizontal to vertical frenuloplasty with respect to tongue lengthening, protrusion, and articulation improvement for patients with symptomatic ankyloglossia. Answer: Based on the provided abstracts, frenuloplasty appears to be a worthwhile procedure for certain conditions. For instance, in cases of ankyloglossia, or tongue-tie, where the tongue's range of motion is restricted due to a short lingual frenulum, frenuloplasty has been shown to be beneficial. A case series reported improved tongue mobility and quality of life after lingual frenuloplasty combined with myofunctional therapy, with a high satisfaction rate and minor complications in less than 5% of cases (PUBMED:31637291). Similarly, a case of ankyloglossia in a one-humped camel calf was successfully treated with frenuloplasty, allowing the animal to chew and swallow solid food (PUBMED:26623287). In the context of penile health, frenuloplasty has been performed for various indications such as tearing or bleeding of the foreskin, pain on intercourse, and phimosis. A 12-year experience study found that most men were satisfied with the outcome of frenuloplasty, and some who were initially advised to undergo circumcision were satisfied with frenuloplasty instead (PUBMED:17059723). However, it is worth noting that a certain percentage of patients (11-20%) eventually underwent circumcision following frenuloplasty. For children with ankyloglossia and articulation difficulty, both simple frenotomy and 4-flap Z-frenuloplasty showed significant improvement in speech articulation tests, with no significant difference in outcomes between the two surgical methods (PUBMED:32540779). 
Another study found that the 4-flap Z-frenuloplasty was superior to the horizontal-to-vertical frenuloplasty in terms of tongue lengthening, protrusion, and articulation improvement (PUBMED:15900148). In summary, frenuloplasty can be a worthwhile procedure for improving function and quality of life in patients with ankyloglossia and for resolving certain penile health issues. However, the decision to undergo frenuloplasty should be based on individual circumstances, and patients should be informed of the potential outcomes and risks, including the possibility of needing additional procedures such as circumcision.
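One of the abstracts above (PUBMED:36412284) predicts the post-frenuloplasty ventral length gain from the pre-procedure ventral inner-mucosa length; that prediction is, in essence, a univariate linear regression. The abstract does not publish the fitted coefficients, so the sketch below uses invented numbers purely to illustrate how such a formula could be derived and then applied before surgery; every value and variable name here is hypothetical.

```python
import numpy as np

# Hypothetical pre-procedure ventral inner-mucosa lengths (mm) and the
# corresponding post-frenuloplasty ventral length gains (%), standing in
# for the kind of measurements the study describes.
pre_mucosa_mm = np.array([18.0, 20.5, 22.0, 24.5, 26.0, 28.5, 30.0, 32.5])
ventral_gain_pct = np.array([35.0, 48.0, 60.0, 72.0, 90.0, 110.0, 128.0, 150.0])

# Ordinary least-squares fit: gain ~ slope * pre_length + intercept.
slope, intercept = np.polyfit(pre_mucosa_mm, ventral_gain_pct, deg=1)

# R^2 shows how much of the variation the single predictor explains
# (the study itself reports roughly 62.5% predictive precision).
predicted = slope * pre_mucosa_mm + intercept
ss_res = np.sum((ventral_gain_pct - predicted) ** 2)
ss_tot = np.sum((ventral_gain_pct - ventral_gain_pct.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

print(f"gain(%) ~ {slope:.2f} * pre_mucosa_mm + {intercept:.2f} (R^2 = {r_squared:.3f})")

# Applying the fitted formula pre-operatively to a new (hypothetical) patient:
new_patient_mm = 25.0
print(f"expected ventral gain: {slope * new_patient_mm + intercept:.1f} %")
```

In practice the study's own coefficients would replace this toy fit; the point is only that a single pre-operative measurement can feed a closed-form prediction used in surgical planning.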
Instruction: Does direct ophthalmoscopy improve retinal screening for diabetic eye disease by retinal photography? Abstracts: abstract_id: PUBMED:12358877 Does direct ophthalmoscopy improve retinal screening for diabetic eye disease by retinal photography? Aims: To identify whether after performing retinal photography, direct ophthalmoscopy can improve the yield of screening for the detection of sight-threatening diabetic eye disease (STDED). Methods: Patients (n = 408) who had previously received both dilated direct ophthalmoscopy by a diabetologist and retinal photography graded by a diabetologist within 3 months of each other were included. The results of the other screening modality were not available to the grader/screener. The first 308 patients were consecutive attendees at the clinic who fulfilled the study criteria and 100 were selected because they were identified as having potential STDED by either one of these modalities. An ophthalmologist using slit lamp biomicroscopy then examined patients identified with potential STDED. Results: In 357 (88%) patients there was agreement between the two modalities about whether referral to an ophthalmologist was required (kappa 0.62). Retinal photography identified 38 patients for referral to ophthalmology which ophthalmoscopy missed. Of these, the ophthalmologist agreed that STDED was present in 32 (84%) and four patients required early laser. Ophthalmoscopy identified 13 patients for referral who were not identified by photography. Of these, the ophthalmologist agreed with the diabetologist that STDED was present in seven (54%) and one patient required early laser. Conclusion: Ophthalmoscopy may identify the occasional patient with diabetes who has STDED which is missed by retinal photography. For a systematic retinal screening programme, adding ophthalmoscopy to retinal photography will increase false-positive referrals and is likely to detect only a few extra patients requiring laser. abstract_id: PUBMED:7580708 Sensitivity and specificity of photography and direct ophthalmoscopy in screening for sight threatening eye disease: the Liverpool Diabetic Eye Study. Objective: To evaluate different methods for community based screening for sight threatening diabetic eye disease. Design: Prospective study. Setting: Mobile screening unit visiting inner city community clinics; hospital assessment clinic (tertiary centre). Subjects: 395 diabetic patients registered with four general practices in an inner city location. Interventions: Community based photography with mydriasis and direct ophthalmoscopy through dilated pupils by an experienced ophthalmologist, both compared with reference standard of slit lamp biomicroscopy by a consultant specialist in medical retinal disease. Main Outcome Measures: Sensitivity and specificity of screening method and prevalence of sight threatening diabetic eye disease (moderate preproliferative retinopathy, circinate maculopathy, exudate within 1 disc diameter of fixation, other diabetes related eye disease). Results: 358 subjects underwent photography, 326 attended hospital clinic for ophthalmoscopy, and six were ungradable on photographs and biomicroscopy, leaving 320 for analysis. Of these 295 (91%) attended clinic within four months of photography. Sensitivity of detection of eye disease by photography was 89% (95% confidence interval 80% to 98%), significantly better than for direct ophthalmoscopy (65% (51% to 79%)). 
Analysis of patients with false negative results indicated possible improvement of photographic sensitivity to 93% by addition of stereoscopic macular pair photographs. Specificity of detection of sight threatening eye disease was 86% (82% to 90%) for photography and 97% (95% to 99%) for direct ophthalmoscopy. Conclusions: Since high sensitivity is essential for an effective screening programme, a photographic method should be considered as preferred option in national, community based screening programmes. Even in the hands of an experienced ophthalmologist, direct ophthalmoscopy is limited by weaknesses inherent to the instrument. abstract_id: PUBMED:8345340 Screening for diabetic retinopathy in a clinical setting: a comparison of direct ophthalmoscopy by primary care physicians with fundus photography. Background: Type II diabetes mellitus is a major health problem among Native Americans, and diabetic retinopathy is a frequent complication of this disease. Screening for retinopathy can identify early disease and prevent major vision loss, but the most cost-effective screening method has not yet been determined. Methods: In a rural clinic that served more than 400 Native Americans with diabetes, we compared the accuracy of referrals made based on two screening methods: ophthalmoscopy by trained primary care physicians and seven-view nonstereoscopic, mydriatic fundal photography read by two general ophthalmologists and a retinal specialist. Patients in whom abnormal findings were detected by either screening method were then referred to a general ophthalmologist for further evaluation. Results: Two hundred forty-three examinations were performed and 83 referrals made. Both screening methods had high sensitivity for referring patients with retinopathy that required treatment or follow-up sooner than 1 year (100% for direct ophthalmoscopy by primary care physicians, 94% for the general ophthalmologist photography readers, and 100% for the retinal specialist reader). The calculated costs of screening by direct ophthalmoscopy and by retinal photography were 64% less and 44% to 35% less, respectively, than the cost of yearly ophthalmological examinations by ophthalmologists. Conclusions: Careful screening for treatable diabetic eye disease by trained primary care physicians proved to be a clinically acceptable, cost-effective strategy. Screening methods for diabetic retinopathy should be evaluated based on the absolute sensitivity, specificity, and predictive values of their ability to correctly refer patients rather than their diagnostic accuracy. abstract_id: PUBMED:20145421 Screening for diabetic retinopathy: a comparative trial of photography and scanning laser ophthalmoscopy. Aims: To evaluate the sensitivity and specificity of wide-field scanning laser ophthalmoscopy (WSLO) in the detection of referable diabetic eye disease, and to compare its performance with digital retinal photography. Methods: Patients enrolled into the study underwent non-mydriatic WSLO imaging, then single- and dual-field mydriatic digital retinal photography, and examination with slit lamp biomicroscopy, the reference standard. Grading of retinopathy was performed in a masked fashion. Results: A total of 380 patients (759 eyes) were recruited to the study. Technical failure rates for dilated single-field retinal photography, dual-field retinal photography and undilated WSLO were 6.3, 5.8 and 10.8%, respectively (0.005 &lt; p &lt; 0.02 for photography vs. WSLO). 
The respective indices for screening sensitivity were 82.9, 82.9 and 83.6% (p &gt; 0.2). Specificity was 92.1, 91.1 and 89.5%, respectively (p &gt; 0.2). Conclusions: Sensitivity and specificity for WSLO were similar to retinal photography. The technical failure rate was greater for the WSLO used in this study. abstract_id: PUBMED:15181522 Comparison of the effectiveness of detecting diabetic eye disease: diabetic retinal photography versus ophthalmic consultation. Introduction: This study determines the accuracy of reading a Polaroid retinal photograph in the diabetic retinal photography programme as compared to a clinical fundal examination in the diagnosis of diabetic retinopathy. Methods: A prospective study recording the additional findings obtained from clinical examination of the eye using indirect ophthalmoscopy and slit lamp biomicroscopy in a restructured tertiary hospital serving an urban community was performed. Results: Seventy-eight eyes in 39 diabetic patients were reviewed by an ophthalmologist for diabetic changes seen on fundus photography. The sensitivity of diabetic retinal photography in diagnosing diabetic eye conditions was 91.6 percent, with a specificity of 99.8 percent and a positive predictive value of 95.6 percent. The degree of agreement kappa was 0.94. Conclusion: The use of diabetic retinal photographs to screen for diabetic eye disease achieved a high sensitivity by capturing diabetic retinal lesions. It was comparable to an examination performed by the ophthalmologist. With appropriate training, the person reading the diabetic retinal photographs can accurately diagnose diabetic eye conditions. abstract_id: PUBMED:28516404 Screening for Diabetic Eye Disease among Samoan Adults: A Pilot Study. Introduction: The purpose of this study was to determine the prevalence of diabetic retinopathy in Samoa by piloting a retinal photography screening programme. Methods: We performed a cross-sectional study of patients with diabetes who presented to the Tupua Tamasese Meaole eye clinic in Apia, Samoa, between May 2011 and September 2011. Study approval was granted by the National Health Service Board of Samoa, the Monash University Human Research Ethics Committee, and the study adhered to the Tenets of the Declaration of Helsinki. Following informed consent, data collection included patient demographics [age, area of residence (rural or urban)], type of diabetes, length of time since diagnosis, most recent random blood sugar and blood pressure levels. The subjects had three 45-degree retinal photographs taken in each eye with the Canon CR6-45NM camera. All gradable photographs were assessed for the presence of diabetic retinopathy or macular oedema using the International Clinical Diabetic Retinopathy Severity Scale and the International Clinical Diabetic Macular Oedema Severity scale, respectively. Results: Two hundred and fourteen eyes from 107 subjects were examined during the study period, all of whom had type 2 diabetes mellitus. Diabetic retinopathy was present in 53.3% (114/214) of eyes, with 14.5% having proliferative retinopathy and 7.5% with severe non-proliferative retinopathy. Also, 25.2% (54/214) had some evidence of macular oedema with 11.7% (25/214) requiring treatment. A statistically significant relationship existed between the length of time since diagnosis and the severity of both retinopathy (p &lt; 0.001) and macular oedema (p = 0.031). 
Blood pressure more than 150/85 mmHg was associated with higher levels of diabetic retinopathy (p = 0.03) and macular oedema (p = 0.02). Conclusion: A comprehensive diabetic retinopathy screening programme is much needed in Samoa given the high prevalence of diabetic eye disease. abstract_id: PUBMED:33541295 A 15 month experience with a primary care-based telemedicine screening program for diabetic retinopathy. Background: Using telemedicine for diabetic retinal screening is becoming popular especially amongst at-risk urban communities with poor access to care. The goal of the diabetic telemedicine project at Temple University Hospital is to improve cost-effective access to appropriate retinal care to those in need of close monitoring and/or treatment. Methods: This will be a retrospective review of 15 months of data from March 2016 to May 2017. We will investigate how many patients were screened, how interpretable the photographs were, how often the photographs generated a diagnosis of diabetic retinopathy (DR) based on the screening photo, and how many patients followed-up for an exam in the office, if indicated. Results: Six-hundred eighty-nine (689) digital retinal screening exams on 1377 eyes of diabetic patients were conducted in Temple's primary care clinic. The majority of the photographs were read to have no retinopathy (755, 54.8%). Among all of the screening exams, 357 (51.8%) triggered a request for a referral to ophthalmology. Four-hundred forty-nine (449, 32.6%) of the photos were felt to be uninterpretable by the clinician. Referrals were meant to be requested for DR found in one or both eyes, inability to assess presence of retinopathy in one or both eyes, or for suspicion of a different ophthalmic diagnosis. Sixty-seven patients (9.7%) were suspected to have another ophthalmic condition based on other findings in the retinal photographs. Among the 34 patients that were successfully completed a referral visit to Temple ophthalmology, there was good concordance between the level of DR detected by their screening fundus photographs and visit diagnosis. Conclusions: Although a little more than half of the patients did not have diabetic eye disease, about half needed a referral to ophthalmology. However, only 9.5% of the referral-warranted exams actually received an eye exam. Mere identification of referral-warranted diabetic retinopathy and other ophthalmic conditions is not enough. A successful telemedicine screening program must close the communication gap between screening and diagnosis by reviewer to provide timely follow-up by eye care specialists. abstract_id: PUBMED:19668442 Hypoxia-ischemia and retinal ganglion cell damage. Retinal hypoxia is the potentially blinding mechanism underlying a number of sight-threatening disorders including central retinal artery occlusion, ischemic central retinal vein thrombosis, complications of diabetic eye disease and some types of glaucoma. Hypoxia is implicated in loss of retinal ganglion cells (RGCs) occurring in such conditions. RGC death occurs by apoptosis or necrosis. Hypoxia-ischemia induces the expression of hypoxia inducible factor-1alpha and its target genes such as vascular endothelial growth factor (VEGF) and nitric oxide synthase (NOS). Increased production of VEGF results in disruption of the blood retinal barrier leading to retinal edema. Enhanced expression of NOS results in increased production of nitric oxide which may be toxic to the cells resulting in their death. 
Excess glutamate release in hypoxic-ischemic conditions causes excitotoxic damage to the RGCs through activation of ionotropic and metabotropic glutamate receptors. Activation of glutamate receptors is thought to initiate damage in the retina by a cascade of biochemical effects such as neuronal NOS activation and increase in intracellular Ca(2+) which has been described as a major contributing factor to RGC loss. Excess production of proinflammatory cytokines also mediates cell damage. Besides the above, free-radicals generated in hypoxic-ischemic conditions result in RGC loss because of an imbalance between antioxidant- and oxidant-generating systems. Although many advances have been made in understanding the mediators and mechanisms of injury, strategies to improve the damage are lacking. Measures to prevent neuronal injury have to be developed. abstract_id: PUBMED:34151725 Telemedical Diabetic Retinopathy Screening in a Primary Care Setting: Quality of Retinal Photographs and Accuracy of Automated Image Analysis. Background: Screening for diabetic eye disease (DED) and general diabetes care is often separate, which leads to delays and low adherence to DED screening recommendations. Thus, we assessed the feasibility, achieved image quality, and possible barriers of telemedical DED screening in a point-of-care general practice setting and the accuracy of an automated algorithm for detection of DED. Methods: Patients with diabetes were recruited at general practices. Retinal images were acquired using a non-mydriatic camera (CenterVue, Italy) by medical assistants. Images were quality assessed and double graded by two graders. All images were also graded automatically using a commercially available artificial intelligence (AI) algorithm (EyeArt version 2.1.0, Eyenuk Inc.). Results: A total of 75 patients (147 eyes; mean age 69 years, 96% type 2 diabetes) were included. Most of the patients (51; 68%) preferred DED screening at the general practice, but only twenty-four (32%) were willing to pay for this service. Images of 63 patients (84%) were determined to be evaluable, and DED was diagnosed in 6 patients (8.0%). The algorithm's positive/negative predictive values (95% confidence interval) were 0.80 (0.28-0.99)/1.00 (0.92-1.00) and 0.75 (0.19-0.99)/0.98 (0.88-1.00) for detection of any DED and referral-warranted DED, respectively.Overall, the number of referrals was 18 (24%) for manual telemedical assessment and 31 (41%) for the artificial intelligence (AI) algorithm, resulting in a relative increase of referrals by 72% when using AI. Conclusions: Our study shows that achieved overall image quality in a telemedical GP-based DED screening was sufficient and that it would be accepted by medical assistants and patients in most cases. However, good image quality and integration into existing workflow remain challenging. Based on these findings, a larger-scale implementation study is warranted. abstract_id: PUBMED:31042793 Outer Retinal Structure and Function Deficits Contribute to Circadian Disruption in Patients With Type 2 Diabetes. Purpose: Light transmitted by retinal photoreceptors provides the input for circadian photoentrainment. In diabetes, there is a high prevalence of circadian and sleep disruption but the underlying causes are not well understood. Patients with diabetes can exhibit dysfunctional photoreceptors but their role in circadian health is not known. 
Here we quantify photoreceptor function and contributions to circadian health and sleep in patients with diabetes without diabetic retinopathy and healthy controls. Methods: Rod, cone, and melanopsin function was derived using chromatic pupillometry in 47 participants including 23 patients with type 2 diabetes and 24 age-matched healthy controls after an ophthalmic examination including retinal thickness assessment using optical coherence tomography. Circadian health was determined using dim light melatonin onset (DLMO) and sleep questionnaires; light exposure was measured using actigraphy. Results: Compared with the control group, the patients with diabetes had a significantly earlier DLMO (1 hour) (P = 0.008), higher subjective sleep scores (P &lt; 0.05), a reduction in pupil constriction amplitude for red stimuli (P = 0.039) and for the early postillumination pupil response (PIPR) for blue (P = 0.024) stimuli. There were no between-group differences in the light exposure pattern, activity levels, and intrinsic melanopsin-mediated PIPR amplitude (P &gt; 0.05). A significant correlation was evident between outer retinal thickness and DLMO (r = -0.65, P = 0.03) and the pupil constriction amplitude (r = 0.63, P = 0.03); patients with thinner retina had earlier DLMO and lower pupil amplitudes. Conclusions: We infer that the observed changes in circadian function in patients with no diabetic retinopathy are due to structural and functional outer retinal rod photoreceptor deficits at early stage of diabetic eye disease. Answer: Direct ophthalmoscopy does not significantly improve retinal screening for diabetic eye disease when compared to retinal photography. In a study that included patients who had undergone both dilated direct ophthalmoscopy by a diabetologist and retinal photography graded by a diabetologist, retinal photography identified 38 patients for referral to ophthalmology which ophthalmoscopy missed, with the ophthalmologist agreeing that sight-threatening diabetic eye disease (STDED) was present in 32 (84%) of these cases. Conversely, ophthalmoscopy identified 13 patients for referral who were not identified by photography, with the ophthalmologist agreeing with the diabetologist that STDED was present in seven (54%) of these cases. The conclusion was that ophthalmoscopy might identify the occasional patient with STDED missed by retinal photography, but for a systematic retinal screening program, adding ophthalmoscopy to retinal photography would increase false-positive referrals and is likely to detect only a few extra patients requiring laser treatment (PUBMED:12358877). Another study, the Liverpool Diabetic Eye Study, found that the sensitivity of detection of eye disease by photography was significantly better than for direct ophthalmoscopy (89% vs. 65%). The specificity of detection of sight-threatening eye disease was higher for direct ophthalmoscopy (97%) compared to photography (86%), but since high sensitivity is essential for an effective screening program, a photographic method should be considered the preferred option in national, community-based screening programs (PUBMED:7580708). In summary, while direct ophthalmoscopy may occasionally identify cases of STDED missed by retinal photography, the overall evidence suggests that retinal photography is more sensitive and should be the preferred method for screening in diabetic retinopathy programs. 
Direct ophthalmoscopy may increase false-positive referrals without significantly improving the detection rate of STDED that requires treatment.
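The screening figures quoted in the abstracts above (sensitivity, specificity, positive and negative predictive values) all derive from the same 2x2 table of screening result versus reference-standard diagnosis. As a generic illustration of how those numbers are computed (the counts below are invented and are not taken from any of the cited studies):

```python
def screening_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard screening-test metrics from a 2x2 confusion table.

    tp: screen-positive, disease present   (true positives)
    fp: screen-positive, disease absent    (false positives)
    fn: screen-negative, disease present   (false negatives)
    tn: screen-negative, disease absent    (true negatives)
    """
    return {
        "sensitivity": tp / (tp + fn),  # proportion of disease cases correctly flagged
        "specificity": tn / (tn + fp),  # proportion of disease-free cases correctly cleared
        "ppv": tp / (tp + fp),          # probability of disease given a positive screen
        "npv": tn / (tn + fn),          # probability of no disease given a negative screen
    }

# Invented example: 320 screened patients, 50 of whom have sight-threatening disease.
print(screening_metrics(tp=45, fp=38, fn=5, tn=232))
```

High sensitivity keeps missed (false-negative) cases rare, which is why the Liverpool study argues for photography as the preferred option for community screening despite its lower specificity.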
Instruction: Are the frail destined to fail? Abstracts: abstract_id: PUBMED:32717844 Fail-Aware LIDAR-Based Odometry for Autonomous Vehicles. Autonomous driving systems are set to become a reality in transport systems and, so, maximum acceptance is being sought among users. Currently, the most advanced architectures require driver intervention when functional system failures or critical sensor operations take place, presenting problems related to driver state, distractions, fatigue, and other factors that prevent safe control. Therefore, this work presents a redundant, accurate, robust, and scalable LiDAR odometry system with fail-aware system features that can allow other systems to perform a safe stop manoeuvre without driver mediation. All odometry systems have drift error, making it difficult to use them for localisation tasks over extended periods. For this reason, the paper presents an accurate LiDAR odometry system with a fail-aware indicator. This indicator estimates a time window in which the system manages the localisation tasks appropriately. The odometry error is minimised by applying a dynamic 6-DoF model and fusing measures based on the Iterative Closest Points (ICP), environment feature extraction, and Singular Value Decomposition (SVD) methods. The obtained results are promising for two reasons: First, in the KITTI odometry data set, the ranking achieved by the proposed method is twelfth, considering only LiDAR-based methods, where its translation and rotation errors are 1.00 % and 0.0041 deg/m, respectively. Second, the encouraging results of the fail-aware indicator demonstrate the safety of the proposed LiDAR odometry system. The results depict that, in order to achieve an accurate odometry system, complex models and measurement fusion techniques must be used to improve its behaviour. Furthermore, if an odometry system is to be used for redundant localisation features, it must integrate a fail-aware indicator for use in a safe manner. abstract_id: PUBMED:30520463 The hybrid strategy for complete myocardial revascularization in the frail elderly patient The hybrid strategy allows for a complete myocardial revascularization in patients with multivessel coronary artery disease and a high frailty index. These patients, due to their old age and multi-comorbidities, are evaluated with inadequate tools for their clinical complexity and destined to an incomplete revascularization for increased surgical or procedural risk. Hybrid revascularization enables to use the best techniques resulting from the surgical and percutaneous approach defining a tailored strategy for the patient. In the frail patient, this strategy is associated with favorable outcomes and a lower exposure to periprocedural complications. abstract_id: PUBMED:33518547 The Dignity of the Frail: On Compassion, Terror, and Social Death. This enquiry considers how the dignity of the frail elderly is objectively grounded, socially constructed, and subjectively experienced. The lives of the frail trouble public consciousness. A terror of old age, felt by young or old, is liable to form a toxic affective culture of social death. Against such threats, the dignity of the frail requires defense. However, empathy- and capacities-based approaches to dignity fail to give a compelling account of humanity's membership in shared community. 
By contrast, the poetry of the Psalms and New Testament puts terror to flight by articulating how dignity is found within God's steadfast, worth-bestowing love which tenderly accompanies humanity in its shared dustiness from the womb to old age and beyond. The blessed dignity these sources describe is found to be more conceptually robust and affectively compelling than an individualistic eudaimonism. Cultivating an ecology of dignity in practice is finally shown to depend on a compassion which grows from the same fertile, imaginative ground. abstract_id: PUBMED:37696264 Geometric versus Hemodynamic Indexes for Rupture-Destined Aneurysms: A Retrospective Cohort and a Repeated-Measures Study. Introduction: A proper stratification of intracranial aneurysms is critical in identifying rupture-destined aneurysms and unruptured intracranial aneurysms. We aimed to determine the utility of geometric and hemodynamic indexes in differentiating two types of aneurysms and to examine the characteristics of natural evolutionary changes of unruptured aneurysms. Methods: Rupture-destined aneurysm refers to an aneurysm that undergoes subsequent aneurysmal subarachnoid hemorrhage (SAH). On the other hand, an unruptured intracranial aneurysm is characterized by an aneurysm that does not experience rupture during serial time-of-flight magnetic resonance angiography (TOF-MRA). In addition to geometric indexes, signal intensity gradient (SIG), an in vivo approximated wall shear stress from TOF-MRA, was measured in aneurysms. The difference between the maximum and minimum values of SIG in an aneurysm compared to parent arterial values was designated as the delta-SIG ratio. Results: This study analyzed 20 rupture-destined aneurysms in 20 patients and 45 unruptured intracranial aneurysms in 41 patients with follow-up TOF-MRA. While geometric indexes did not show differences between the two groups, the delta-SIG ratio was higher in the rupture-destined aneurysms (1.5 ± 0.6 vs. 1.1 ± 0.3, p = 0.032). The delta-SIG ratio showed a higher area under the receiver operating characteristic curve for SAH than the size ratio (0.72 [95% CI, 0.58-0.87] vs. 0.56 [95% CI, 0.41-0.72], p = 0.033). The longitudinal re-examination of TOF-MRA in the unruptured intracranial aneurysms revealed evidence of aneurysmal growth, while concurrently exhibiting hemodynamic stability. Conclusion: The delta-SIG ratio showed higher discriminatory results between the two groups compared to geometric indexes. Aneurysmal rupture risk should be assessed by considering both geometric and hemodynamic information. This study was registered on &lt;ext-link ext-link-type="uri" xlink:href="http://ClinicalTrials.gov" xmlns:xlink="http://www.w3.org/1999/xlink"&gt;ClinicalTrials.gov&lt;/ext-link&gt; (NCT05450939). abstract_id: PUBMED:30905473 Frailty in patients over 65 years of age admitted to Intensive Care Units (FRAIL-ICU). Objective: To estimate the prevalence of frailty in patients admitted to the Intensive Care Unit (ICU) and its impact upon ICU mortality at 1 and 6 months. Design: A prospective observational cohort study was carried out. Setting: Spanish ICU. Intervention: None. Patients And Methods: Patients≥65 years of age admitted to the ICU for&gt;24hours. Variables were registered upon admission, and functional status was assessed by telephone calls 1 and 6 months after discharge from the ICU. 
Main Study Variables: Age, gender, frailty (FRAIL scale), functional status (Barthel, Lawton, Clinical Dementia Rating and NUTRIC score), days of mechanical ventilation (MV), functional score (APACHE II and SOFA), ICU mortality, and mortality 1 and 6 months after ICU discharge. Results: A total of 132 patients were evaluated, of which 46 were frail (34.9%). Age of the frail versus non-frail patients: 78.8±7.2 and 78.6±6.4 years, respectively (P=.43); male gender: 43.8% versus 56.3% (P=.10); SOFA score: 4.7±2.9 versus 4.6±2.9 (P=.75); MV: 33.3% versus 66.7% (P=.75); days of MV: 5.6±15 versus 4.3±8.1 (P=.57); ICU mortality 13% versus 6% (P = .14), mortality at 1 month 24% versus 8% (P = .01), mortality 6 months 32% versus 15% (P = .03). Frailty is associated with mortality at one month (OR = 3.5, P &lt;.05, 95% CI (1.22-10.03) and at 6 months after discharge from the ICU (OR = 2.62, P &lt;.05, 95% CI (1.04-6.56). Conclusions: Frailty was present in 35% of the patients admitted to the ICU, and was associated with mortality. abstract_id: PUBMED:31367420 Failing to Fail nursing students among mentors: A confirmatory factor analysis of the Failing to Fail scale. Aim: The aim was to explore the psychometric properties with respect to the internal consistency reliability of the subject-specific questionnaire "Failing to Fail." Design: Cross-sectional study. Methods: Exploratory factor analysis with varimax rotation. A confirmatory factor analysis was used to examine the factor structure of the "Failing to Fail" scale. The sample included 336 Norwegian nurse mentors. Results: The confirmatory factor analysis confirmed a five-factor structure of the "Failing to Fail" scale with adequate model fit. The factors were named as: (a) Insufficient mentoring competence; (b) Insufficient support in the working environment; (c) Emotional process dominates the assessment; (d) Insufficient support from the university; and (e) Decision-making detached from learning outcomes. The scale proved to be feasible to test whether mentors are Failing to Fail nursing students. The confirmatory factor analysis model supported the predictive validity of the "Failing to Fail" scale. abstract_id: PUBMED:31377914 Influence of fall environment and fall direction on risk of injury among pre-frail and frail adults. In this prospective study, half of all falls resulted in injury. Pre-frail adults sustained more injuries, while more frail adults had injuries requiring hospitalization or fractures. Pre-frail adults fell more often when in movement compared with frail adults who fell more often when standing and in indoor public spaces. Purpose: To assess prospectively how fall environment and direction are related to injury among pre-frail and frail adults. Methods: We included 200 community-dwelling adults with a prior fall (pre-frail, mean age 77 years) and 173 adults with acute hip fracture (frail, mean age 84 years; 77% community-dwelling). Falls were prospectively recorded using standardized protocols in monthly intervals, including date, time, fall direction and environment, and injury. We used logistic regression to assess the odds of injury adjusting for age, body mass index (BMI), and gender. Results: We recorded 513 falls and 331 fall-related injuries (64.5%) among the 373 participants. While the fall rate was similar between groups, pre-frail adults had more injuries (71% among pre-frail vs. 56% among frail, p = 0.0004) but a lower incidence of major injuries (9% among pre-frail vs. 27% among frail, p = 0.003). 
Pre-frail adults fell more often while in movement (84% among pre-frail vs. 55% among frail, p < 0.0001), and frail adults fell more often while standing (26% vs. 15%, respectively; p = 0.01). The odds of injury among frail adults were increased 3.3-fold when falling sideways (OR = 3.29, 95% CI = 1.68-6.45) and 2.4-fold when falling in an indoor public space (OR = 2.35, 95% CI = 1.00-5.53), and were reduced when falling at home (OR = 0.55, 95% CI = 0.31-0.98). The odds of injury among pre-frail adults were not influenced by environment and were 53% lower when falling backwards (OR = 0.47, 95% CI = 0.26-0.82). Conclusion: While pre-frail adults sustain more fall-related injuries, frail adults were more likely to sustain major injuries, especially when falling sideways or outside their home. abstract_id: PUBMED:37176544 Mortality Risks after Two Years in Frail and Pre-Frail Older Adults Admitted to Hospital. Background: Frailty is characterized by a progressive decline in the physiological functions of multiple body systems that leads to a more vulnerable condition, which is prone to the development of various adverse events, such as falls, hospitalization, and mortality. This study aims to determine whether frailty increases mortality compared to pre-frailty and to identify variables associated with a higher risk of mortality. Materials: Two cohorts, frail and pre-frail subjects, were evaluated according to the Fried phenotype. A complete examination of frailty, cognitive status, comorbidities and pharmacology was carried out at hospital admission and was extracted from the electronic health record (EHR). Mortality was evaluated from the EHR. Methods: Kaplan-Meier estimates of survival probability functions were calculated at a two-year censoring time for the frail and pre-frail cohorts. The log-rank test assessed significant differences between survival probability functions. Significant variables for frailty (p < 0.05) were extracted by independent-sample t-test. Further selection was based on variable significance found in multivariate logistic regression discriminating between frail and pre-frail subjects. Cox regression over the univariate t-test-selected variables was calculated to identify variables associated with higher proportional hazard risks (HR) at two years. Results: Frailty is associated with greater mortality at a two-year censoring time than pre-frailty (log-rank test, p < 0.0001). Variables with a significant (p < 0.05) association with mortality identified in both cohorts (HR, 95% CI in the frail cohort) are male sex (0.44 (0.29-0.66)), age (1.05 (1.01-1.09)), weight (0.98 (0.96-1.00)), and use of proton-pump inhibitors (PPIs) (0.60 (0.41-0.87)). Specific high-risk factors in the frail cohort are readmission at 30 days (0.50 (0.33-0.74)), SPPB sit and stand (0.62 (0.45-0.85)), heart failure (0.67 (0.46-0.98)), use of antiplatelets (1.80 (1.19-2.71)), and quetiapine (0.31 (0.12-0.81)).
Specific high-risk factors in the pre-frail cohort are Barthel's score (120 (7.7-1700)), Pfeiffer test (8.4; (2.3-31)), Mini Nutritional Assessment (MNA) (1200 (18-88,000)), constipation (0.025 (0.0027-0.24)), falls (18,000 (150-2,200,000)), deep venous thrombosis (8400 (19-3,700,000)), cerebrovascular disease (0.01 (0.00064-0.16)), diabetes (360 (3.4-39,000)), thyroid disease (0.00099 (0.000012-0.085)), and the use of PPIs (0.062 (0.0072-0.54)), Zolpidem (0.000014 (0.0000000021-0.092)), antidiabetics (0.00015 (0.00000042-0.051)), diuretics (0.0003 (0.000004-0.022)), and opiates (0.000069 (0.00000035-0.013)). Conclusions: Frailty is associated with higher mortality at two years than pre-frailty. Frailty is recognized as a systemic syndrome with many links to older-age comorbidities, which are also found in our study. Polypharmacy is strongly associated with frailty, and several commonly prescribed drugs are strongly associated with increased mortality. It must be considered that frail patients need coordinated attention where the diverse specialist taking care of them jointly examines the interactions between the diversity of treatments prescribed. abstract_id: PUBMED:25673231 Cardiac autonomic modulation in non-frail, pre-frail and frail elderly women: a pilot study. Frailty has been defined as a geriatric syndrome that results in high vulnerability to health adverse outcomes. This increased vulnerability state results from dysregulation of multiple physiological systems and its complex interactions. Thus, assessment of physiological systems integrity and of its dynamic interactions seems to be useful in the context of frailty management. Heart rate variability (HRV) analysis provides information about autonomic nervous system (ANS) function, which is responsible to control several physiologic functions. This study investigated the cardiac autonomic modulation by HRV analysis in community-dwelling elderly women classified as non-frail, pre-frail and frail. Twenty-three elderly women were assigned to the following groups: non-frail (n = 8), pre-frail (n = 8) and frail (n = 7). HRV assessment was performed through linear and non-linear analysis of cardiac interval variability. It was observed a higher sympathetic and lower parasympathetic modulation in frail when compared with non-frail and pre-frail groups (p &lt; 0.05) as indicated by frequency domain indices. Additionally, frail group had a decreased 2LV % pattern (that reflects parasympathetic modulation) in the symbolic analysis in comparison with non-frail group. These findings suggest that frail elderly women present an autonomic imbalance characterized by a shift towards sympathetic predominance. Thus, monitoring ANS function in the context of frailty management may be an important strategy to prevention, diagnosis and treatment of this syndrome and its consequences. abstract_id: PUBMED:19224004 Is the pass/fail system applicable to a medical school in Korea? To determine whether a pass/fail system is more appropriate for medical education instead of a grade-based system, a survey of medical students and faculty members of Hallym University, Korea, was taken. A questionnaire was delivered to 54 junior students and 36 faculty members from a medical school in Korea and analyzed. Of these participants, 37.7% of students and 36.1% of faculty agreed to the pass/fail system, while 28.3% of students and 52.8% of faculty objected to it. The most frequent reason for objection was the potential decrease in learning achievement. 
A pass/fail system should be considered after persuasion of the students and faculty to think positively of this system. Answer: The concept of frailty in the context of the provided abstracts refers to a state of increased vulnerability due to a decline in physiological functions across multiple body systems, which can lead to adverse health outcomes such as falls, hospitalization, and mortality (PUBMED:37176544; PUBMED:25673231). However, the question "Are the frail destined to fail?" is not directly addressed by any of the abstracts, as the term "fail" can have different meanings depending on the context. In the context of health and aging, the abstracts suggest that frail individuals are at a higher risk for negative health outcomes compared to non-frail or pre-frail individuals. For instance, frailty is associated with greater mortality at two years compared to pre-frailty (PUBMED:37176544), and frail elderly women present an autonomic imbalance characterized by a shift towards sympathetic predominance, which may have implications for the management of frailty (PUBMED:25673231). Additionally, frail adults are more likely to sustain major injuries, especially when falling sideways or outside their home (PUBMED:31377914), and frailty is associated with increased mortality after ICU discharge (PUBMED:30905473). In the context of medical education, the term "fail" is used in a different sense, referring to the grading system and the potential impact of a pass/fail system on learning achievement (PUBMED:19224004). This is unrelated to the concept of frailty in health. Therefore, while frail individuals are at a higher risk for certain negative health outcomes, the abstracts do not support the notion that they are "destined to fail" in an absolute sense. Instead, they highlight the importance of recognizing and managing frailty to potentially mitigate its associated risks.
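The LiDAR odometry abstract retrieved for this question (PUBMED:32717844) describes fusing ICP-based matching with SVD pose estimation and a dynamic 6-DoF model. The paper's actual pipeline is not reproduced in the abstract; the snippet below is only a minimal, generic sketch of the SVD (Kabsch-style) rigid-alignment step used in most ICP variants, not the authors' implementation, and it assumes the point correspondences have already been established.

```python
import numpy as np

def rigid_align(source: np.ndarray, target: np.ndarray):
    """Least-squares rotation R and translation t such that
    R @ source[i] + t ~ target[i], via SVD of the cross-covariance.
    source, target: (N, 3) arrays of corresponding points."""
    src_centroid = source.mean(axis=0)
    tgt_centroid = target.mean(axis=0)
    H = (source - src_centroid).T @ (target - tgt_centroid)  # cross-covariance matrix

    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) being returned instead of a rotation.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = tgt_centroid - R @ src_centroid
    return R, t

# Inside an ICP loop this alignment is re-estimated after each
# nearest-neighbour matching pass until the residual stops improving;
# a fail-aware indicator could then monitor the accumulated drift.
```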
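The aneurysm study above (PUBMED:37696264) defines its delta-SIG ratio only verbally. The abstract gives no explicit formula, but one straightforward reading of that definition (an assumed interpretation, not a quotation from the paper) is:

```latex
\Delta\mathrm{SIG\ ratio} = \frac{\mathrm{SIG}_{\max}^{\,\mathrm{aneurysm}} - \mathrm{SIG}_{\min}^{\,\mathrm{aneurysm}}}{\mathrm{SIG}^{\,\mathrm{parent\ artery}}}
```

i.e. the within-aneurysm spread of the approximated wall shear stress, normalised against the parent arterial value, so that a larger ratio reflects a more heterogeneous hemodynamic load on the sac.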
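The two-year mortality analysis above (PUBMED:37176544) follows a standard survival-analysis recipe: Kaplan-Meier curves per cohort, a log-rank comparison, and a Cox proportional-hazards model whose exponentiated coefficients are the reported hazard ratios. A minimal sketch of that workflow using the open-source lifelines package is shown below; the DataFrame, column names and values are hypothetical stand-ins, not the study's data.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical, toy-sized cohort: follow-up time (months, censored at 24),
# death indicator, frailty status and one candidate covariate.
df = pd.DataFrame({
    "months": [24, 18, 24, 6, 24, 12, 24, 3, 20, 24],
    "died":   [0, 1, 0, 1, 0, 1, 0, 1, 1, 0],
    "frail":  [0, 1, 0, 1, 0, 1, 1, 1, 0, 0],
    "age":    [74, 83, 70, 88, 69, 81, 85, 90, 77, 72],
})

frail, prefrail = df[df.frail == 1], df[df.frail == 0]

# Kaplan-Meier survival for one cohort and a log-rank comparison of the two.
km = KaplanMeierFitter().fit(frail["months"], frail["died"], label="frail")
print(km.survival_function_)
print(logrank_test(frail["months"], prefrail["months"],
                   frail["died"], prefrail["died"]).p_value)

# Cox proportional-hazards model: the exp(coef) column gives the hazard ratios.
cph = CoxPHFitter().fit(df, duration_col="months", event_col="died")
cph.print_summary()
```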
Instruction: Is active surveillance a safe alternative in the management of localized prostate cancer? Abstracts: abstract_id: PUBMED:28847461 Active surveillance of prostate cancer Several prospective studies have demonstrated the safety of active surveillance as a first treatment of prostate cancer. It spares many patients of a useless treatment, with its potential sequelae. Patients with a low-risk cancer are all candidates for this approach, as recommended by the American Society of Clinical Oncology (ASCO). Some patients with an intermediate risk could be also concerned by active surveillance, but this is still being discussed. Currently, the presence of grade 4 lesions on biopsy is a contra-indication. Modalities included a repeated prostate specific antigen test and systematic rebiopsy during the first year after diagnosis. MRI is now proposed to better select patients at inclusion and also during surveillance. No life style changes or drugs are significantly associated with a longer duration of surveillance. abstract_id: PUBMED:28220449 Current Management Strategy for Active Surveillance in Prostate Cancer. Purpose Of Review: Active surveillance has been increasingly utilized as a strategy for the management of favorable-risk, localized prostate cancer. In this review, we describe contemporary management strategies of active surveillance, with a focus on traditional stratification schemes, new prognostic tools, and patient outcomes. Recent Findings: Patient selection, follow-up strategy, and indication for delayed intervention for active surveillance remain centered around PSA, digital rectal exam, and biopsy findings. Novel tools which include imaging, biomarkers, and genetic assays have been investigated as potential prognostic adjuncts; however, their role in active surveillance remains institutionally dependent. Although 30-50% of patients on active surveillance ultimately undergo delayed treatment, the vast majority will remain free of metastasis with a low risk of dying from prostate cancer. The optimal method for patient selection into active surveillance is unknown; however, cancer-specific mortality rates remain excellent. New prognostication tools are promising, and long-term prospective, randomized data regarding their use in active surveillance will be beneficial. abstract_id: PUBMED:34025851 Active surveillance in prostate cancer management: where do we stand now? Prostate cancer (PCa) is the most common cancer in men, with a steadily rising incidence, affecting on average one in six men during their lifetime. The increase in morbidity is related to the increasing overall life expectancy, prostate-specific antigen testing, implementation of new molecular markers for cancer detection and the more frequent application of multiparametric magnetic resonance imaging. There is growing evidence demonstrating that active surveillance (AS) is an alternative to immediate intervention in patients with very low- and low-risk prostate cancer. Ongoing reports from multiple studies have consistently demonstrated a very low rate of metastases and prostate cancer specific mortality in selected cohorts of patients. As a matter of fact, AS has been adopted by many institutions as a safe and effective management strategy. The aim of our review is to summarize the contemporary data on AS in patients affected with PCa with the intention to present the most clinically useful and pertinent AS protocols. 
abstract_id: PUBMED:27986459 Oncological results of active surveillance in prostate cancer: A retrospective multicentric cohort Purpose: To report oncological outcomes of patients with prostate cancer undergoing active surveillance according to SURACAP criteria. Methods: This multicentric study included patients who were initially treated with active surveillance for localized prostate cancer according to the SURACAP criteria. The duration of active surveillance as well as the causes of discontinuing the protocol and the definitive pathological results of patients who further underwent radical prostatectomy were retrospectively evaluated. The predictors of discontinuing active surveillance were assessed using a univariable Cox model. In addition, the predictive value of initial MRI was assessed for patients who underwent such imaging. Results: Between 2007 and 2013, 80 patients were included, with a median age of 64 years [47-74]. Median follow-up was 52.9 months [24-108]. At 5 years of follow-up, 43.4% of patients were still under surveillance. Among patients who underwent surgery, 17.8% had an extra-capsular extension. The risk of discontinuing was not significantly greater for patients with tumor size of 2 or 3 mm versus 1 mm (HR=0.9 [0.46-1.75], P=0.763), 2 positive cores versus 1 (HR=0.98 [0.48-2.02], P=0.967), T2a vs. T1c stage (HR=2.18 [0.77-6.18], P=0.133), increased PSA level (HR=1 [0.96-1.15], P=0.975) or the patient's age (HR=1 [0.93-1.16], P=0.966). Among the 50 patients who underwent initial MRI, the imaging results were not significantly associated with the risk of discontinuing active surveillance (HR=1.49 [0.63-3.52], P=0.36). Conclusion: Although this study reveals a high rate of discontinuation of active surveillance at 5 years, the rate of extra-capsular tumors reported in the group of patients who underwent surgery is among the lowest in the literature. Level Of Evidence: 4. abstract_id: PUBMED:21693354 Active surveillance for localized prostate cancer Active surveillance as an alternative approach to immediate curative treatment has been demonstrated for an increasing number of patients with low risk prostate cancer. Optimization of the selection and surveillance criteria to guarantee a low-risk outcome for patients is discussed in this review. These considerations lead to regarding active surveillance as an option rather than a standard of care, ideally for patients included in clinical research protocols. abstract_id: PUBMED:26816834 Active surveillance and focal therapy for low-intermediate risk prostate cancer. Low risk and many cases of low-intermediate risk prostate cancer are indolent, have little or no metastatic potential, and are not life threatening. Major advances have been made in understanding who these patients are, and in encouraging the use of conservative management in such individuals. Conservative management incorporates the early identification of those 'low risk' patients who harbor higher risk disease, and benefit from definitive therapy. Based on the current algorithm of PSA followed by systematic biopsy, this represents about 30% of newly diagnosed low risk patients. A further small proportion of patients with low risk disease demonstrate biological progression to higher grade disease. Men with lower risk disease can defer treatment, usually for life. Men with higher risk disease that can be localized to a relatively small volume of the prostate may be candidates for focal, prostate sparing therapy.
The results of active surveillance, embodying conservative management with selective delayed intervention for the subset who are re-classified as higher risk over time based on repeat biopsy, imaging, or biomarker results, have shown that this approach is safe in the intermediate to long term, with a 1-5% cancer specific mortality at 15 years. Further refinement of the surveillance approach is ongoing, incorporating MRI, targeted biopsies, and molecular biomarkers. abstract_id: PUBMED:31061800 Contemporary approach to active surveillance for favorable risk prostate cancer. The approach to favorable risk prostate cancer known as "active surveillance" was first described explicitly in 2002. This was a report of 250 patients managed with a strategy of expectant management, with serial prostate-specific antigen and periodic biopsy, and radical intervention advised for patients who were re-classified as higher risk. This was initiated as a prospective clinical trial, complete with informed consent, beginning in 2007. Thus, there are now 20 years of experience with this approach, which has become widely adopted around the world. In this chapter, we will summarize the biological basis for active surveillance, review the experience to date of the Toronto and Hopkins groups which have reported 15-year outcomes, describe the current approach to active surveillance in patients with Gleason score 3 + 3 or selected patients with Gleason score 3 + 4 with a low percentage of Gleason pattern 4 who may also be candidates, enhanced by the use of magnetic resonance imaging, and forecast future directions. abstract_id: PUBMED:25374902 Active surveillance in men with low-risk prostate cancer: current and future challenges. Introduction: The implementation of prostate-specific antigen (PSA) screening has coincided with a decrease in mortality rate from prostate cancer at the cost of overtreatment. Active surveillance has thus emerged to address the concern for over-treatment in men with low-risk prostate cancer. Methods: A contemporary review of literature with respect to low-risk prostate cancer and active surveillance was conducted. The premise of active surveillance, ideal candidates, follow-up practices, treatment triggers, and the observed outcomes of delayed interventions are reviewed. Various institutional protocols are compared and contrasted. Results: Eligibility criteria from various institutions share similar principles. Candidates are followed with PSA kinetics and/or repeat biopsies to identify those who require intervention. Various triggers for intervention have been recognized achieving overall and cancer-specific survival rates > 90% in most protocols. New biomarkers, imaging modalities and genetic tests are also currently being investigated to enhance the efficacy of active surveillance programs. Conclusion: Active surveillance has been shown to be safe and effective in managing men with low-risk prostate cancer. Although as high as 30% of men on surveillance will eventually need intervention, survival rates with delayed intervention remain reassuring. Long-term studies are needed for further validation of current active surveillance protocols. abstract_id: PUBMED:22232867 Active surveillance in prostate cancer The spread of PSA in the screening of prostate cancer has almost doubled the incidence of this disease in the last twenty years.
An improved understanding of the natural history of this cancer allows for risk stratification of the disease and better prediction of insignificant prostate cancer. Active surveillance has recently been proposed as a new option to delay or avoid a radical treatment for patients with low-risk disease. The principle, results and future perspectives of this treatment modality are discussed in this review. abstract_id: PUBMED:28959144 Active surveillance of prostate cancer: Current state of practice and utility of multiparametric magnetic resonance imaging. Active surveillance (AS) is an alternative to immediate intervention in patients with very low- and low-risk prostate cancer. Ongoing reports from multiple AS programs have consistently demonstrated a very low rate of metastasis and prostate cancer-specific mortality in appropriately selected patients. Accordingly, AS has been adopted by major clinical organizations as a safe and effective management strategy. Clinical focus has now shifted to identifying the optimal approach to selecting and monitoring patients. Multiparametric magnetic resonance imaging (mpMRI) has emerged as one potentially helpful tool in the AS setting. The utility of mpMRI has been well demonstrated in the setting of screening and diagnosis, but its role in AS remains unclear. We summarize the published experience with AS and review pertinent, contemporary data on the use of mpMRI in the setting of AS. Answer: Yes, active surveillance is considered a safe alternative in the management of localized prostate cancer. Several prospective studies have demonstrated the safety of active surveillance as a first treatment option for prostate cancer, particularly for patients with low-risk cancer, as recommended by the American Society of Clinical Oncology (ASCO) (PUBMED:28847461). The approach involves careful monitoring of the disease with the intention to intervene with curative treatment if signs of progression are observed. This strategy helps to spare many patients from unnecessary treatment and its potential side effects. Active surveillance remains centered around prostate-specific antigen (PSA) testing, digital rectal exams, and biopsy findings. Although 30-50% of patients on active surveillance may eventually undergo delayed treatment, the majority remain free of metastasis with a low risk of dying from prostate cancer (PUBMED:28220449). Reports from multiple studies have consistently demonstrated very low rates of metastases and prostate cancer-specific mortality in selected cohorts of patients, leading to the adoption of active surveillance by many institutions as a safe and effective management strategy (PUBMED:34025851). Furthermore, long-term outcomes from active surveillance protocols have shown that this approach is safe in the intermediate to long term, with cancer-specific mortality rates as low as 1-5% at 15 years (PUBMED:31061800). The use of multiparametric magnetic resonance imaging (mpMRI) and other novel tools such as biomarkers and genetic assays are being investigated to enhance patient selection and monitoring during active surveillance, although their roles are still being clarified (PUBMED:28959144). In summary, active surveillance is a well-established and safe management option for patients with localized prostate cancer, particularly those with low-risk disease, allowing for the deferral of treatment while maintaining excellent cancer-specific survival rates.
Instruction: Describing maltreatment: do child protective service reports and research definitions agree? Abstracts: abstract_id: PUBMED:15970320 Describing maltreatment: do child protective service reports and research definitions agree? Objective: The National Research Council identified inadequate research definitions for abuse and neglect as barriers to research in child maltreatment. We examine the concordance between child protective services (CPS) classifications of maltreatment type with the determinations of type from two research coding systems. We contrast the two coding systems and the CPS classification, in their ability to predict subsequent difficulties in the psychological functioning of maltreated children at age 8. Method: The sample included 545 children who were enrolled in Longitudinal Studies of Child Abuse and Neglect (LONGSCAN) with a report of child maltreatment, had data collected at approximately 4 and 8 years of age, and had a lifetime review of CPS records to age 8. CPS Maltreatment reports were coded using LONGSCAN's modification of the Maltreatment Classification System (MMCS) and the Second National Incidence Study maltreatment coding system (NIS-2). The first analyses used reports as the unit of analysis to examine agreement between CPS and research determinations of allegation type. Validation analyses examined outcomes for each type of maltreatment experienced after age 4 under each coding system using the Child Behavior Checklist (CBCL), Trauma Symptom Checklist-Alternative form, and the Vineland Screener as the measures of outcome. Control variables were the CBCL and Battelle Developmental Screener, measured at age 4. Results: There were a total of 1980 reports of maltreatment for 545 study children although only 1593 CPS reports specified at least one type of maltreatment. There were differences between the type of maltreatment recorded in child protective service records and the conclusions reached by either research classification system. CPS classifications were most discordant with the research systems for emotional abuse and neglect. Nearly 10% of physical and sexual abuse reports, as determined by the MMCS, were classified as neglect by the child protective service agencies. The NIS-2 system and the MMCS had very high Kappa statistics for agreement for physical and sexual abuse. The validity of the research definitions for physical and sexual abuse was demonstrated in models predicting children's functioning at age 8. Prediction of child functioning was significantly but modestly improved in several domains compared to the CPS classifications. Conclusion: Both research classification systems moderately improved on the prediction of the adverse effects of maltreatment compared to the characterization of a maltreatment exposure as recorded by CPS. abstract_id: PUBMED:25275621 Research misconduct definitions adopted by U.S. research institutions. In 2000, the U.S. federal government adopted a uniform definition of research misconduct as fabrication, falsification, or plagiarism (FFP), which became effective in 2001. Institutions must apply this definition of misconduct to federally-funded research to receive funding. While institutions are free to adopt definitions of misconduct that go beyond the federal standard, it is not known how many do. We analyzed misconduct policies from 183 U.S. research institutions and coded them according to thirteen different types of behavior mentioned in the misconduct definition. 
We also obtained data on the institution's total research funding and public vs. private status, and the year it adopted the definition. We found that more than half (59%) of the institutions in our sample had misconduct policies that went beyond the federal standard. Other than FFP, the most common behaviors included in definitions were "other serious deviations" (45.4%), "significant or material violations of regulations" (23.0%), "misuse of confidential information" (15.8%), "misconduct related to misconduct" (14.8%), "unethical authorship other than plagiarism" (14.2%), "other deception involving data manipulation" (13.1%), and "misappropriation of property/theft" (10.4%). Significantly more definitions adopted in 2001 or later went beyond the federal standard than those adopted before 2001 (73.2% vs. 26.8%), and significantly more definitions adopted by institutions in the lower quartile of total research funding went beyond the federal standard than those adopted by institutions in the upper quartiles. Public vs. private status was not significantly associated with going beyond the federal standard. abstract_id: PUBMED:35469748 Religious and secular spirituality: Methodological implications of definitions for health research. The importance of spirituality as a research topic has been increasingly recognized, which has led to several studies on the topic. Areas including psychology, medicine, and nursing have produced studies on spirituality under a plurality of definitions and methods, which reveals the complexity of the theme. However, this has resulted in a range of potential problems, including: (1) the use of overlapping and contradictory terms between studies, or even within the same study, (2) research methodologies that do not fit the definitions (sometimes unreflectively) assumed by the authors, (3) difficulties, or even the impossibility, of comparing the results of studies, (4) controversies in respect of the inclusion/exclusion of secular groups in research on spirituality, and (5) ambiguous measurements, often being exclusively dependent on each participant's individual interpretation of what spirituality means. This article discusses these problems, recommends theoretical and methodological alternatives and presents taxonomy of definitions of spirituality. abstract_id: PUBMED:32355672 Appraisal of the current guidelines for management of cholangiocarcinoma-using the Appraisal of Guidelines Research and Evaluation II (AGREE II) Instrument. Cholangiocarcinoma (CC) is the second most common primary liver tumour. High-quality guidelines are essential for effective patient stratification and individualised treatment. This study aimed to appraise the methodological quality of existing guidelines for the resection of CC using the Appraisal of Guidelines for Research &amp; Evaluation (AGREE II) instrument. A systematic search of the literature in Cochrane, PubMed, Google Scholar, and Embase was performed. Assessment of the clinical practice guidelines (CPGs) and consensuses was performed using the AGREE II instrument by four clinicians experienced in surgical practice and the AGREE II appraisal method. Literature searches identified 13 guidelines of highly variable quality according to the AGREE II criteria. The guidelines scored well in certain domains such as scope &amp; purpose (median score across all guidelines; 65%), clarity of presentation (76%), and editorial independence (56%). 
However, they scored poorly for applicability (13%), rigour of development (30%), and stakeholder involvement (39%). None of the 13 guidelines was recommended universally for use without modification. Overall, the methodological quality of guidelines on the surgical management of CC is poor. Future updates should address and modify shortcomings detected by the AGREE II instrument, thereby facilitating better patient stratification and individualised treatment strategies. abstract_id: PUBMED:29216903 Evaluation of clinical practice guidelines using the AGREE instrument: comparison between data obtained from AGREE I and AGREE II. Objective: The Appraisal of Guidelines for Research and Evaluation (AGREE) is a representative, quantitative evaluation tool for evidence-based clinical practice guidelines (CPGs). Recently, AGREE was revised (AGREE II). The continuity of evaluation data obtained from the original version (AGREE I) has not yet been demonstrated. The present study investigated the relationship between data obtained from AGREE I and AGREE II to evaluate the continuity between the two measurement tools. Results: An evaluation team consisting of three trained librarians evaluated 68 CPGs issued in 2011-2012 in Japan using AGREE I and AGREE II. The correlation coefficients for the six domains were: (1) scope and purpose 0.758; (2) stakeholder involvement 0.708; (3) rigor of development 0.982; (4) clarity of presentation 0.702; (5) applicability 0.919; and (6) editorial independence 0.971. The item "Overall Guideline Assessment" was newly introduced in AGREE II. This global item had a correlation coefficient of 0.628 using the six AGREE I domains, and 0.685 using the 23 items. Our results suggest that data obtained from AGREE I can be transferred to AGREE II, and the "Overall Guideline Assessment" data can be determined with high reliability using a standardized score of the 23 items. abstract_id: PUBMED:34996621 One year into COVID-19: What have we learned about child maltreatment reports and child protective service responses? Background: A year has passed since COVID-19 began disrupting systems. Although children are not considered a risk population for the virus, there is accumulating knowledge regarding children's escalating risk for maltreatment during the pandemic. Objective: The current study is part of a larger initiative using an international platform to examine child maltreatment (CM) reports and child protective service (CPS) responses in various countries. The first data collection, which included a comparison between eight countries after the pandemic's first wave (March-June 2020), illustrated a worrisome picture regarding children's wellbeing. The current study presents the second wave of data across 12 regions via population data (Australia [New South Wales], Brazil, United States [California, Pennsylvania], Colombia, England, Germany, Israel, Japan, Canada [Ontario, Quebec], South Africa). Method: Regional information was gathered, including demographics, economic situation, and CPS responses to COVID-19. A descriptive analysis was conducted to provide an overview of the phenomenon. Results: Across all of the countries, COVID-19 had a substantial negative impact on the operation of CPSs and the children and families they serve by disrupting in-person services. 
One year into the COVID-19 pandemic, new reports of CM varied across the regions.1 In some, the impact of COVID-19 on CPS was low to moderate, while in others, more significant changes created multiple challenges for CPS services. Conclusions: COVID-19 created a barrier for CPS to access and protect children. The dramatic variance between the regions demonstrated how social, economic and structural contexts impact both CM reports and CPS responses. abstract_id: PUBMED:35705753 AGREE-S: AGREE II extension for surgical interventions: appraisal instrument. Background: The Appraisal of Guidelines Research and Evaluation (AGREE) II instrument was developed to evaluate the quality of clinical practice guidelines. Evidence suggests that development, reporting, and appraisal of guidelines on surgical interventions may be better informed by modification of the instrument. Objective: We aimed to develop an AGREE II extension specifically designed for appraisal of guidelines of surgical interventions. Methods: In a three-part project funded by the United European Gastroenterology and the European Association for Endoscopic Surgery, (i) we identified factors that were associated with higher quality of surgical guidelines, (ii) we statistically calibrated the AGREE II instrument in the context of surgical guidelines using correlation, reliability, and factor analysis, and (iii) we undertook a Delphi consensus process of stakeholders to inform the development of an AGREE II extension instrument for surgical interventions. Results: Several features were prioritized by stakeholders as of particular importance for guidelines of surgical interventions, including development of a guideline protocol, consideration of practice variability and surgical expertise in different settings, and specification of infrastructures required to implement the recommendations. The AGREE-S-AGREE II extension instrument for surgical interventions has 25 items, compared to the 23 items of the original AGREE II instrument, organized into the following 6 domains: Scope and purpose, Stakeholders, Evidence synthesis, Development of recommendations, Editorial independence, and Implementation and update. As the original instrument, it concludes with an overall appraisal of the quality of the guideline and a judgement on whether the guideline is recommended for use. Several items were amended and rearranged among domains, and an item was deleted. The Rigor of Development domain of the original AGREE II was divided into Evidence Synthesis and Development of Recommendations. Items of the AGREE II domain Clarity of Presentation were incorporated in the new domain Development of Recommendations. Three new items were introduced, addressing the development of a guideline protocol, support by a guideline methodologist, and consideration of surgical experience/expertise. Conclusion: The AGREE-S appraisal instrument has been developed to be used for assessment of the methodological and reporting quality of guidelines on surgical interventions. abstract_id: PUBMED:29793350 Abuse Characteristics and the Concordance of Child Protective Service Determinations and Adolescent Self-Reports of Abuse. This study examines the concordance of abuse self-reported by adolescents at 18 years and child protective service (CPS) determinations and how abuse characteristics predict concordance. It includes 819 youths participating in 18-year interviews of the Longitudinal Study of Child Abuse and Neglect (LONGSCAN). 
Cross-tabulations revealed low correspondence between self-reports and CPS determinations of physical, sexual, and emotional abuse. Logistic regression identified that among youths with CPS physical abuse determinations, White race, chronicity, and co-occurring neglect were positively associated with corresponding self-reports. Co-occurring CPS-determined emotional abuse was more concordant with self-reports. More frequent self-reported physical abuse was associated with corresponding CPS determinations. Self-reports of childhood emotional abuse and perpetration by nonparental family/other household members were positively associated with corresponding CPS determinations. CPS determination concordance also varied significantly by LONGSCAN site. Results demonstrate differences in abuse characteristics captured by CPS data and youth self-report, which may impact research findings on abuse correlates. abstract_id: PUBMED:36291256 Trends in Dyslexia Research during the Period 1950 to 2020-Theories, Definitions, and Publications. Introduction: The focus of the present paper is on (1) how dyslexia research and hence definitions have developed during the period 1950-2020 and includes (2) a database search of scientific publications on dyslexia during the same period. The focus is on the definitions of dyslexia and the organization of the network search based on the causal four-level model by Morton and Frith. Method: (1) The definitions are presented in accordance with a historic review of dyslexia research from 1950 to 2020 and based on (2) Google Scholar counts of publications on dyslexia, on defining dyslexia, on dyslexia at the four levels (symptomatic, cognitive, biological, environmental), and by areas (sensorimotor, comorbidity). Finally, a percentage calculation shows the relative development within each level and area by decennium (1950-1960, 1960-1970, 1970-1980, 1990-2000, 2002-2010, 2010-2020). Results: (1) Of the seven definitions presented, only the definition by the BDA 2007 included the four levels of the causal model. (2) The number of publications increased substantially over the period. However, relatively few publications have defined dyslexia. An increase in publications from 1950 to 2020 was seen across the four levels and two areas-however, with an alteration in the thematic focus over this time span. Summary: Defining dyslexia has still not reached a consensus. This uncertainty may explain why only one of the seven definitions proved satisfactory according to the four-level model. Along with the general increase in research, publications on dyslexia have increased accordingly during the period 1950 to 2020. Although the symptomatic level has played a dominant role over the whole period, thematic shifts have been seen over these 70 years. In particular, a substantial thematic shift was seen by the turn of the millennium. There has been a relative increase in the focus on literacy at the symptomatic level, on phonological awareness at the cognitive level, in gender at the biological level, and second language learning as comorbidities. However, increases in counts are not alone a valid indication of scientific progress. In particular, the lack of definitional criteria as a basis for participant and method selection should attract much more focus in future studies. The present study underlines the multifactorial nature of dyslexia, as evidenced by a substantial increase in the number of publications on the subject. 
It is a challenge for future research to continuously use and possibly redefine dyslexia definitions in line with such standards. abstract_id: PUBMED:33888385 Valve Academic Research Consortium 3: Updated Endpoint Definitions for Aortic Valve Clinical Research. Aims: The Valve Academic Research Consortium (VARC), founded in 2010, was intended to (i) identify appropriate clinical endpoints and (ii) standardize definitions of these endpoints for transcatheter and surgical aortic valve clinical trials. Rapid evolution of the field, including the emergence of new complications, expanding clinical indications, and novel therapy strategies have mandated further refinement and expansion of these definitions to ensure clinical relevance. This document provides an update of the most appropriate clinical endpoint definitions to be used in the conduct of transcatheter and surgical aortic valve clinical research. Methods And Results: Several years after the publication of the VARC-2 manuscript, an in-person meeting was held involving over 50 independent clinical experts representing several professional societies, academic research organizations, the US Food and Drug Administration (FDA), and industry representatives to (i) evaluate utilization of VARC endpoint definitions in clinical research, (ii) discuss the scope of this focused update, and (iii) review and revise specific clinical endpoint definitions. A writing committee of independent experts was convened and subsequently met to further address outstanding issues. There were ongoing discussions with FDA and many experts to develop a new classification schema for bioprosthetic valve dysfunction and failure. Overall, this multi-disciplinary process has resulted in important recommendations for data reporting, clinical research methods, and updated endpoint definitions. New definitions or modifications of existing definitions are being proposed for repeat hospitalizations, access site-related complications, bleeding events, conduction disturbances, cardiac structural complications, and bioprosthetic valve dysfunction and failure (including valve leaflet thickening and thrombosis). A more granular 5-class grading scheme for paravalvular regurgitation (PVR) is being proposed to help refine the assessment of PVR. Finally, more specific recommendations on quality-of-life assessments have been included, which have been targeted to specific clinical study designs. Conclusions: Acknowledging the dynamic and evolving nature of less-invasive aortic valve therapies, further refinements of clinical research processes are required. The adoption of these updated and newly proposed VARC-3 endpoints and definitions will ensure homogenous event reporting, accurate adjudication, and appropriate comparisons of clinical research studies involving devices and new therapeutic strategies. Answer: The concordance between child protective services (CPS) classifications of maltreatment type and the determinations of type from two research coding systems was examined in a study involving 545 children enrolled in the Longitudinal Studies of Child Abuse and Neglect (LONGSCAN). The study found differences between the type of maltreatment recorded in CPS records and the conclusions reached by either research classification system, particularly for emotional abuse and neglect. Nearly 10% of physical and sexual abuse reports, as determined by the Maltreatment Classification System (MMCS), were classified as neglect by CPS agencies. 
The MMCS and the Second National Incidence Study maltreatment coding system (NIS-2) showed very high agreement for physical and sexual abuse. The research definitions for physical and sexual abuse demonstrated validity in models predicting children's functioning at age 8, with a modest improvement in prediction of child functioning in several domains compared to CPS classifications. Overall, both research classification systems moderately improved the prediction of the adverse effects of maltreatment compared to the characterization of maltreatment exposure as recorded by CPS (PUBMED:15970320). This suggests that while there is some level of agreement between CPS reports and research definitions, particularly for physical and sexual abuse, there are notable discrepancies, especially for emotional abuse and neglect. The research definitions appear to provide a slightly better prediction of subsequent difficulties in the psychological functioning of maltreated children.
Instruction: Can the Blaylock Risk Assessment Screening Score (BRASS) predict length of hospital stay and need for comprehensive discharge planning for patients following hip and knee replacement surgery? Abstracts: abstract_id: PUBMED:25421081 Can the Blaylock Risk Assessment Screening Score (BRASS) predict length of hospital stay and need for comprehensive discharge planning for patients following hip and knee replacement surgery? Predicting arthroplasty planning and stay using the BRASS. Background: Knee and hip arthroplasty constitutes a large percentage of hospital elective surgical procedures. The Blaylock Risk Assessment Screening Score (BRASS) was designed to identify patients in need of discharge planning. The purpose of this study was to evaluate whether the BRASS was associated with length of stay (LOS) in hospital following elective arthroplasty. Methods: We retrospectively reviewed the charts of individuals undergoing primary elective arthroplasty for knee or hip osteoarthritis who had a documented BRASS score. Results: In our study cohort of 241, both BRASS (p &lt; 0.001) and replacement type (hip v. knee; p = 0.048) were predictive of LOS. Higher BRASS was associated with older patients (p &lt; 0.001), higher American Society of Anesthesiologists score (p &lt; 0.001) and longer LOS (p &lt; 0.001). We found a specificity of 83% for a BRASS greater than 8 and a hospital stay longer than 5 days and a specificity of 92% for a BRASS greater than 10. Conclusion: The BRASS represents a novel and significant predictor of LOS following elective arthroplasty. Patients with higher BRASS are more likely to stay in hospital 5 days or more and should receive pre-emptive social work consultations to facilitate timely discharge planning and hospital resources. abstract_id: PUBMED:28775218 Predictive factors for length of hospital stay following primary total knee replacement in a total joint replacement centre in Hong Kong. Introduction: The demand for total knee replacement in Hong Kong places tremendous economic burden on our health care system. Shortening hospital stay reduces the associated cost. The aim of this study was to identify perioperative predictors of length of hospital stay following primary total knee replacement performed at a high-volume centre in Hong Kong. Methods: We retrospectively reviewed all primary total knee replacements performed at Yan Chai Hospital Total Joint Replacement Centre from October 2011 to October 2015. Perioperative factors that might influence length of stay were recorded. Results: A total of 1622 patients were identified. The mean length of hospital stay was 6.8 days. Predictors of prolonged hospital stay following primary total knee replacement were advanced age; American Society of Anesthesiologists physical status class 3; bilateral total knee replacement; in-patient complications; and the need for blood transfusion, postoperative intensive care unit admission, and urinary catheterisation. Conclusions: Evaluating factors that can predict length of hospital stay is the starting point to improve our current practice in joint replacement surgery. Prediction of high-risk patients who will require a longer hospitalisation enables proactive discharge planning. abstract_id: PUBMED:10351588 Preadmission screening of patients scheduled for hip and knee replacement: impact on length of stay. 
Objective: An evaluation of the impact of a social work preadmission program on length of stay (LOS) of orthopedic patients undergoing elective total hip or total knee replacement surgeries (under diagnosis-related groups [DRGs]) at the Hadassah Ein-Kerem Hospital in Jerusalem, Israel. Intervention: The social work interventions included preadmission psychosocial evaluation and preliminary discharge planning, coordination of nursing and physiotherapy evaluations, ensuring completion of all medical tests prior to admission, and additional psychosocial follow-up during hospitalization to carry out the original discharge plan or prepare alternatives. Patients: The intervention patients were divided into two groups in order to see changes over time: May through December 1994 (n = 48), and January through December 1995 (n = 81). The comparison groups included patients operated on at the same hospital during 1993 (n = 51) and during January through April 1994 (n = 21) and at the Hadassah Mount Scopus Hospital during the same time periods. Patients in the comparison groups received usual social work intervention, as necessary, only after hospitalization. Results: Mean LOS was reduced significantly in the intervention patient groups, as compared to the preintervention patient groups in the same hospital, from 14.2 days (standard deviation [SD], 4.7) in 1993 and 14.7 (SD, 5.1) in January through April 1994 to 10.9 (SD, 3.0) in May through December 1994 and to 9.1 (SD, 2.8) in 1995 (P < .01). Length of stay also was reduced in the comparison hospital, but by 1995 was longer than in the intervention patients. No differences in LOS by gender, age, or marital status were found. Length of stay was significantly longer for those undergoing total hip replacement as compared to those undergoing total knee replacement in all the groups. Conclusions: Preadmission screening and case management by a social worker can contribute to the efforts to decrease LOS of orthopedic patients by early multidisciplinary evaluations, discharge planning, and coordination of services. abstract_id: PUBMED:31512732 A preoperative education class reduces length of stay for total knee replacement patients identified at risk of an extended length of stay. Objective: To establish whether attendance at an education class prior to total hip or knee replacement surgery as part of an enhanced recovery after surgery pathway could decrease length of hospital stay. Methods: A single-site, retrospective cohort study comparing length of stay in hospital for patients who attended and did not attend an education class prior to hip or knee replacement surgery. Patients were stratified into 3 groups according to the predicted likelihood of an extended inpatient hospital stay using the Risk Assessment Predictor Tool. Results: Mean length of stay reduced by 0.37 days for patients who received hip replacement (n = 590) (95% confidence interval (95% CI) -0.74, -0.01, p = 0.05) and by 0.77 days for patients who underwent knee replacement (n = 643) (95% CI -1.23, -0.31, p = 0.001) following attendance at a preoperative education class. Patients undergoing knee replacement who were considered at high risk of an extended hospital stay stayed, on average, 2.59 days less in hospital after attending the class (mean length of stay: 4.52 (standard deviation (SD) 1.26) vs 7.11 (SD 4.18) days; 95% CI -4.62, -0.54, p < 0.02).
Conclusion: This study supports the inclusion of a preoperative education session in this context for both hip and knee replacement procedures, and indicates that this may be most beneficial for patients undergoing knee replacement who are at risk of an extended length of stay. abstract_id: PUBMED:37182588 The Risk Assessment and Prediction Tool (RAPT) Score Predicts Discharge Destination, Length of Stay, and Postoperative Mobility After Total Joint Arthroplasty. Background: Predicting an arthroplasty patient's discharge disposition, length of stay (LOS), and physical function is helpful because it allows for preoperative patient optimization, expectation management, and discharge planning. The goal of this study was to evaluate the ability of the Risk Assessment and Prediction Tool (RAPT) score to predict discharge destination, LOS, and postoperative mobility in patients undergoing primary total knee arthroplasty (TKA) and total hip arthroplasty (THA). Methods: Primary unilateral TKAs (n = 9,064) and THAs (n = 8,649) performed for primary osteoarthritis at our institution from 2018 to 2021 (excluding March to June 2020) were identified using a prospectively maintained institutional registry. We evaluated the associations between preoperative RAPT score and (1) discharge destination, (2) LOS, and postoperative mobility as measured by (3) successful ambulation on the day of surgery and (4) Activity Measure for Post-Acute Care (AM-PAC) "6-Clicks" score. Results: On multivariable analyses adjusting for multiple covariates, every one-point increase in RAPT score among TKA patients was associated with a 1.82-fold increased odds of home discharge (P &lt; .001), 0.22 days shorter LOS (P &lt; .001), 1.13-fold increased odds of ambulating on postoperative day 0 (P &lt; .001), and 0.25-point higher Activity Measure for Post-Acute Care score (P &lt; .001). Similar findings were seen among THAs. A RAPT score of 8 or higher was the most sensitive and specific cutoff to predict home discharge. Conclusion: Among nearly 18,000 TKA and THA patients, RAPT score was predictive of discharge disposition, LOS, and postoperative mobility. A RAPT score of 8 or higher was the most sensitive and specific cutoff to predict discharge to home. In contrast to prior studies of the RAPT score which have grouped TKAs and THAs together, this study ran separate analyses for TKAs and THAs and found that THA patients seemed to perform better than TKA patients with equal RAPT scores, suggesting that RAPT may behave differently between TKAs and THAs, particularly in the intermediate risk RAPT range. abstract_id: PUBMED:31795331 Length of Hospital Stay for Osteoarthritic Primary Hip and Knee Replacement Surgeries in New Zealand. This study aims to explore the length of stay (LOS) of publicly funded osteoarthritic primary hip and knee replacement surgeries in New Zealand. Patients with osteoarthritis who underwent publicly funded primary hip and knee replacement surgery in 2005-2017 were included. We have identified 53,439 osteoarthritic primary hip replacements and 50,072 osteoarthritic primary knee replacements. LOS has been reduced by almost 40% over the last 13 years. Logistic regression showed that women, Māori, Pacific and Asian patients, older patients, people with more comorbidities and those having opiates on discharge and patients in earlier years were more likely to have extended LOS following hip replacements and knee replacements. 
Regional differences in LOS were noted between the Waitemata District Health Board (DHB) and Tairāwhiti DHB, where patients were the most likely to have a LOS of more than 5 days after hip and knee replacements. LOS after hip and knee replacements has been reduced dramatically. Women, Māori, Pacific and Asian patients, older patients and people with more comorbidities are more likely to have extended LOS. Patients dispensed opiates on discharge had a longer LOS. There are great geographical variations in LOS for primary hip and knee surgeries in New Zealand. abstract_id: PUBMED:35386750 The Predictive Value of the "Identification of Seniors at Risk" Score on Mortality, Length of Stay, Mobility and the Destination of Discharge of Geriatric Hip Fracture Patients. Purpose: The German Society for Geriatrics recommends the "ISAR" questionnaire for geriatric screening of patients ≥70 in emergency rooms. Although the ISAR-score is collected routinely in the "AltersTraumaRegister DGU®" (ATR-DGU), to date little is known about the predictive value of the "ISAR"-score in geriatric trauma patients. Patients And Methods: Currently, 84 clinics participate in the ATR-DGU. This evaluation is limited to the subgroup of proximal femur fractures from 2016-2018. Patients ≥70 years who underwent surgery for a hip fracture are included in the ATR-DGU. In this evaluation, the influence of the "ISAR"-score on mortality, length of stay, mobility and the destination of discharge was examined. Results: Overall, 10,098 patients were included in the present study. The median age was 85 years (interquartile range (IQ) 80-89 years). According to the ISAR-score, 80.6% (n=8142) of the patients were classified as geriatric patients (cut off "ISAR"-score ≥2 points). This group of patients had a length of stay of 16 days (IQ 10.1-22.1) compared to the non-geriatric patient cohort, which had a length of stay of 15 days (IQ 10.1-20.1). Patients showing an ISAR-score ≥2 had an increased risk of being discharged to a nursing home (OR 8.25), not being able to walk (OR 12.52) and a higher risk of mortality (OR 3.45). Conclusion: The "ISAR"-score shows predictive power for length of stay, mobility, hospital mortality and discharge destination in the cohort of geriatric trauma patients. It therefore seems suitable as a screening tool for geriatric trauma patients in the emergency department and should be considered in this context. abstract_id: PUBMED:25452634 Early mobilization of patients who have had a hip or knee joint replacement reduces length of stay in hospital: a systematic review. Objective: To systematically review the effect of early mobilization after hip or knee joint replacement surgery on length of stay in an acute hospital. Methods: Randomized controlled trials were selected from electronic databases based on an inclusion criterion requiring an experimental group mobilizing (sitting out of bed/walking) earlier than a comparison group post joint replacement surgery of the hip or knee in an acute hospital. Clinically homogeneous data were analyzed with meta-analysis. Results: Five randomized controlled trials (totaling 622 participants) were included for review. A meta-analysis of 5 trials found a reduced length of stay of 1.8 days (95% confidence interval 1.1 to 2.6) in favor of the experimental group. In 4 of the 5 trials the experimental group first sat out of bed within 24 hours post operatively. In 4 of the 5 trials the experimental group first walked within 48 hours post operatively.
Individual trials reported benefits in range of motion, muscle strength and health-related quality of life in favor of the experimental group. There were no differences in discharge destinations, incidence of negative outcomes or adverse events attributable to early mobilization when compared to the comparison groups. Conclusion: Early mobilization post hip or knee joint replacement surgery can result in a reduced length of stay of about 1.8 days. Trials that reported these positive results showed that early mobilization can be achieved within 24 hours of operation. This positive gain was achieved without an increase in negative outcomes. abstract_id: PUBMED:36526323 Identification of risk factors associated with prolonged hospital stay following primary knee replacement surgery: a retrospective, longitudinal observational study. Objectives: To identify risk factors associated with prolonged length of hospital stay and staying in hospital longer than medically necessary following primary knee replacement surgery. Design: Retrospective, longitudinal observational study. Setting: Elective knee replacement surgeries between 2016 and 2019 were identified using routinely collected data from an NHS Trust in England. Participants: There were 2295 knee replacement patients with complete data included in analysis. The mean age was 68 (SD 11) and 60% were female. Outcome Measures: We assessed a binary length of stay outcome (>7 days), a continuous length of stay outcome (≤30 days) and a binary measure of whether patients remained in hospital when they were medically fit for discharge. Results: The mean length of stay was 5.0 days (SD 3.9), 15.4% of patients were in hospital for >7 days and 7.1% remained in hospital when they were medically fit for discharge. Longer length of stay was associated with older age (b=0.08, 95% CI 0.07 to 0.09), female sex (b=0.36, 95% CI 0.06 to 0.67), high deprivation (b=0.98, 95% CI 0.47 to 1.48) and more comorbidities (b=2.48, 95% CI 0.15 to 4.81). Remaining in hospital beyond being medically fit for discharge was associated with older age (OR=1.07, 95% CI 1.05 to 1.09), female sex (OR=1.71, 95% CI 1.19 to 2.47) and high deprivation (OR=2.27, 95% CI 1.27 to 4.06). Conclusions: The regression models could be used to identify which patients are likely to occupy hospital beds for longer. This could be helpful in scheduling operations to aid hospital efficiency by planning these patients' operations for when the hospital is less busy. abstract_id: PUBMED:35165072 Enhanced recovery program - Impact of ASA grade on length of hospital stay in patients undergoing hip and knee arthroplasties. Introduction: Total hip and knee arthroplasties are two of the most quality-of-life-enhancing orthopedic procedures performed. Enhanced recovery (ER) programs have been implemented in orthopedic surgery and have helped optimize preoperative, perioperative, and postoperative factors to reduce the physiological and psychological stress of surgery. The American Society of Anesthesiologists [ASA] classification system is now the most widely collected system for measuring physical health status by hip and knee arthroplasty registries worldwide. The aim of the study is to determine whether the ASA score is predictive of length of hospital stay in patients undergoing hip and knee surgeries in the elective setting.
Methods: Retrospective data were collected from a consecutive series of 441 charts and the Irish National Orthopedics Register (INOR) for patients who underwent elective primary hip and knee replacements from January 1, 2018 to December 31, 2018 at Our Lady's Hospital, Navan. All these patients were assigned either ASA Class 2 or 3 in preoperative assessment. Results: A total of 441 patients were included. For patients with ASA 2 (319 of the total for the same period), the average length of hospital stay was 4.8 ± 1.2 days, whereas for patients with ASA 3 (122 of the total), the length of hospital stay was 6.5 ± 2.3 days, a mean difference between the two groups of 1.7 days (95% confidence interval of this difference). Conclusion: Patients with ASA 2 had a shorter hospital stay than patients with ASA 3. Therefore, we recommend pre-operative patient optimization to downgrade a patient from ASA 3 to ASA 2, but prospective analysis would be beneficial to examine the resource implications of such an initiative as well as patient outcomes with longer-term follow-up. Answer: Yes, the Blaylock Risk Assessment Screening Score (BRASS) can predict the length of hospital stay and the need for comprehensive discharge planning for patients following hip and knee replacement surgery. A study found that both BRASS and the type of replacement (hip vs. knee) were predictive of the length of stay (LOS) in hospital following elective arthroplasty. Higher BRASS scores were associated with older patients, higher American Society of Anesthesiologists scores, and longer LOS. Specifically, the study found a specificity of 83% for a BRASS greater than 8 and a hospital stay longer than 5 days, and a specificity of 92% for a BRASS greater than 10. This indicates that patients with higher BRASS scores are more likely to stay in the hospital for 5 days or more and should receive preemptive social work consultations to facilitate timely discharge planning and hospital resources (PUBMED:25421081).
Instruction: Is minimally invasive surgery appropriate for small differentiated thyroid carcinomas? Abstracts: abstract_id: PUBMED:20425543 Is minimally invasive surgery appropriate for small differentiated thyroid carcinomas? Purpose: To retrospectively evaluate the surgical completeness of minimally invasive total thyroidectomy for small (<20 mm) differentiated thyroid carcinoma (DTC). Methods: The subjects of this study were 30 patients who underwent minimally invasive total thyroidectomy as a single procedure. We registered the following postoperative measurements in the LT4 withdrawal period: serum thyroglobulin level, 6-h radioiodine uptake diagnostic test results, and neck ultrasound (US) findings. Results: The mean serum thyroglobulin level was 4.99 +/- 4.67 ng/ml, the mean radioiodine uptake diagnostic test after 6 h was 3.11% +/- 2.90%, and US showed no thyroid remnant. Conclusions: The short-term outcome measures showed adequate resection of thyroid tissue, comparable with that reported after conventional surgery. Our findings suggest that small nodules with suspicious or malignant cytology are one of the best indications for minimally invasive surgery. abstract_id: PUBMED:19040979 Minimally invasive reoperative thyroid surgery. Conventional thyroid surgical techniques, introduced 100 years ago, continue to be applied in modern endocrine surgery practices. Minimally invasive approaches, however, have been proposed by Miccoli and others, and have been increasingly embraced. Reoperative thyroidectomy poses special challenges for even the expert surgeon, and these challenges are no less important when undertaking minimally invasive surgery. Nevertheless, the authors' experience with reoperative minimally invasive thyroid surgery would suggest that this approach is not only safe and effective, but confers many of the same advantages as primary minimally invasive surgery (particularly superior cosmetic results and rapid wound healing leading to early discharge). When applied judiciously, reoperative minimally invasive thyroid surgery offers advantages over conventional surgery and is an appropriate component of a high-volume head and neck endocrine practice. abstract_id: PUBMED:22696880 Minimally invasive thyroidectomy and the differentiated lesions: the way to follow. Background And Objectives: Nowadays, conventional thyroidectomy may appear an overly aggressive treatment for patients undergoing intervention for small suspicious lesions harbored in low-volume glands. In these cases a minimally invasive approach may be a safe and appropriate option. This work aims to evaluate the effectiveness of minimally invasive thyroidectomy in patients indicated for surgery for small lesions with preoperative suspicion of malignancy. Patients And Methods: 71 patients, undergoing minimally invasive total thyroidectomy as a single procedure between May 2005 and April 2009, were enrolled in this study. They were indicated for surgery for small suspicious or malignant lesions (up to 20 mm length by US; cT1 according to UICC 2002) and satisfied the inclusion criteria of minimally invasive thyroidectomy, with gland volume up to 25 ml, no evidence of locally advanced disease and no previous neck surgery. The outcomes were considered in terms of complication rate, postoperative pain, hospital stay, cosmetic results and completeness of surgical resection in malignancies. Results: A low complication rate was registered.
The surgical completeness, with mean serum thyroglobulin of 4.41 +/- 4.03 ng/ml and radioiodine uptake of 2.91 +/- 2.46%, was considered acceptable compared with other experiences reported in the literature. Excellent results with respect to patient comfort, postoperative pain and cosmetic outcome were obtained. Conclusion: This study confirms, where a correct selection of patients is made, the safety and effectiveness of the minimally invasive approach in the treatment of small suspicious and malignant nodules, which seem to represent its best indication. abstract_id: PUBMED:17619345 Indications and techniques of minimally invasive surgery applied to the thyroid Minimally invasive surgery of the thyroid gland is not associated with any increase in postoperative morbidity (recurrent laryngeal nerve paralysis, hypoparathyroidism). While it produces a better cosmetic result, it is a considerably more technically demanding procedure than conventional thyroidectomy. Safety and feasibility are highly dependent on strict patient selection. Since only a limited number of patients meet the selection criteria for minimally invasive surgery, the technique is likely to remain an option for a small number of patients only. abstract_id: PUBMED:18493913 Minimally invasive surgery for thyroid diseases and thyroid cancer. This article will review the emergence of minimally invasive thyroidectomy techniques and their applications to thyroid diseases. Minimally invasive surgical techniques for head and neck surgery have recently attracted interest. The techniques are divided into mini-incision open, video-assisted, and completely endoscopic approaches. Factors that determine the best operative approach include tumor size, histology, the presence of enlarged lymph nodes and evidence of local-regional invasion. Indications and contraindications for minimally invasive techniques will be discussed. abstract_id: PUBMED:16357563 Minimally invasive thyroidectomy. Purpose Of Review: Based on recent advances in ultrasonography, color-Doppler sonography and ultrasound-guided fine needle aspiration biopsy, minimally invasive thyroidectomy has been widely employed for treating thyroid tumors. Minimally invasive procedures have recently been adopted for the surgical treatment of thyroid disease as a means of preventing prominent scars, hypesthesia and paresthesia. Recent Findings: We have developed axillary and anterior chest approaches for endoscopic thyroidectomy. The mean operating time for the endoscopic procedure was significantly longer than that for conventional open surgery. All patients who were treated using the axillary approach were satisfied with the cosmetic results. The incidence of complaints after endoscopic surgery is significantly lower than that after open surgery. The minimally invasive mini-incision approach has been performed under direct and endoscopic vision to minimize surgical invasiveness and improve cosmetic results. Our procedure differs from conventional thyroidectomy in requiring a 3-cm skin incision, in not raising a skin flap and in not dissecting the sternohyoid muscle. Summary: Endoscopic thyroidectomy is the procedure of choice in well selected patients with thyroid disease. The present indications for this procedure include benign follicular adenomas less than 4 cm in diameter, low-risk papillary carcinomas less than 10 mm, oxyphilic cell tumors less than 4 cm and Graves' disease as imaged on preoperative ultrasonography.
The minimally invasive mini-incision approach is indicated for patients with low-risk, well differentiated carcinomas. Total thyroidectomy and bilateral modified neck dissection are possible if the tumor is not large. abstract_id: PUBMED:20212416 Minimally invasive thyroid cancer surgery. The aim of this paper was to explore the appropriateness and outcomes of minimally invasive thyroid surgery for the management of well-differentiated thyroid cancer. The study is a planned analysis of a prospectively maintained patient database representing a consecutive, single-surgeon experience. A systematic review was undertaken of a series of patients undergoing minimal access surgery for well-differentiated thyroid cancer. Comprehensive demographic data were considered, including age, gender, pathologic findings, complications, and oncologic outcomes. Ninety-two patients with thyroid cancer (mean age = 45.6 years) underwent minimally invasive or endoscopic thyroidectomy over a five-year period. Surgical pathology revealed papillary cancer in 76 patients, follicular cancer in 10 patients, Hurthle cell cancer in 3 patients and medullary cancer in 3 patients. There have been no recurrences in any of these patients thus far (with a short median follow-up of 31 months). Excellent cosmetic results have been observed with this minimal access approach. Minimally invasive and endoscopic thyroidectomy can be safely and effectively performed in many patients with low- or intermediate-risk thyroid cancer. In addition to improved cosmesis, many patients experience decreased pain and faster recovery, and are at no increased risk for complications in the hands of high-volume thyroid surgeons. abstract_id: PUBMED:27294036 Minimally invasive thyroidectomy: a ten-year experience. Background: The conventional thyroidectomy is the most frequent surgical procedure for thyroidal surgical disease. Over the past several years, minimally invasive approaches to thyroid surgery have been introduced. These new procedures have reduced postoperative pain and morbidity and improved cosmetic results and patients' quality of life. Minimally invasive video-assisted thyroidectomy (MIVAT) is a minimally invasive procedure that uses a minicervicotomy to treat thyroidal diseases. Methods: We present our experience with the MIVAT technique in 497 consecutively treated patients. We analyzed the mean age, sex, mean operative time, rate of bleeding, hypocalcemia, transitory and definitive nerve palsy (6 months after the procedure), postoperative pain scale from 0 to 10 at 1 hour and 24 hours after surgery, and mean hospital stay. Results: The indications to treat were related to preoperative diagnosis: 182 THYR 6, 184 THYR 3-4, 27 Plummer, 24 Basedow, 28 toxic goiter, 52 goiter. In 497 cases we have reported 1 case of bleeding (0.2%), 12 (2.4%) cases of transitory nerve palsy and 4 (0.8%) cases of definitive nerve palsy. The rate of serologic hypocalcemia was 24.9% (124 cases) and that of clinical hypocalcemia 7.2% (36 cases); there was 1 case of hypoparathyroidism (0.2%). Conclusions: MIVAT is a safe approach to surgical thyroid disease; its costs and adverse events are similar to those of conventional thyroidectomy (CT). The minicervicotomy is really a minimally invasive tissue dissection. abstract_id: PUBMED:33814255 Modified version of minimally invasive open thyroidectomy using a unilateral incision. Background/objective: Open thyroidectomy has been the standard approach for patients undergoing thyroidectomy. However, this approach leads to prominent scars, hypesthesia, paresthesia, and uncomfortable sensations.
We aimed to describe our modified technique of minimally invasive open thyroidectomy (MIT) and to compare the results with those of conventional thyroidectomy. Methods: This study included 880 patients who underwent surgery between January 2016 and December 2016. Modified MIT was performed in 249 patients (28.3%), and conventional thyroidectomy was performed in the remaining 631 patients. Results: Lobectomy was performed in the majority of cases (MIT 204 [81.9%] vs. conventional 429 [67.9%]). There were no significant differences in complications between the two approaches (6 [2.4%] vs. 8 [1.3%]). Patients who underwent surgery using the minimally invasive approach had a shorter operative time (77.99 ± 34.5 vs. 91.23 ± 36.58 min) and were discharged earlier (2.4 ± 0.8 vs. 3.2 ± 0.8) than those who underwent conventional thyroidectomy. Conclusion: Modified MIT is a safe alternative to standard open thyroidectomy and allows the performance of bilateral total thyroidectomy with proper central compartment neck dissection. Level Of Evidence: 2b. abstract_id: PUBMED:17947942 The minimally invasive surgical approach in thyroid diseases. Aim: The targets of minimally invasive thyroidectomy could be summarised by: achievement of the same results as those obtained with traditional surgery, better postoperative course and improved cosmetic Results: In minimally invasive surgical approach the skin incision should not exceed 30 mm in length. In our experience this limit may be extended of 5 mm for thyroid between 25 and 50 mL in volume. This way allows more patients, excluded before, to take the advantages of minimally invasive approach. The aim of this work has been to demonstrate that the central neck minimally invasive approach is safe, less painful, better for cosmetic results and easily reproducible in surgical practice. Methods: From January 2003 to June 2007, 75 patients have been selected for minimally invasive thyroidectomy. The procedure was carried out through a central skin incision performed ''high'' between the cricoid and jugular notch. Our ''modified Miccoli-procedure'' consists in five-easily repeatable steps. In the postoperative stay, all patients were asked to evaluate the pain that feel and the cosmetic result by means of a numeric scale. Results: The skin incision performed was from 25 to 30 mm (mean 27.39 +/- 2.6 mm). We obtained in all cases excellent results about patients cure rate and comfort, few postoperative pain and attractive cosmetic Results: Conclusion: In this study we demonstrate that the central neck minimally invasive approach is safe, less painful, better for cosmetic results, with less paresthetic consequences and easily reproducible in surgical practice. In our opinion a longer incision (up to 35 mm), does not affect negatively the advantages of minimally invasive procedure. This way allows more patients to take the advantages of minimally invasive approach. Answer: Yes, minimally invasive surgery is considered appropriate for small differentiated thyroid carcinomas (DTC). Studies have shown that minimally invasive total thyroidectomy can be safely and effectively performed in patients with small DTCs, typically less than 20 mm in size. 
The surgical completeness of minimally invasive total thyroidectomy for small DTCs has been retrospectively evaluated, with outcomes such as serum thyroglobulin levels, radioiodine uptake diagnostic test results, and neck ultrasound findings indicating adequate resection of thyroid tissue comparable to that achieved with conventional surgery (PUBMED:20425543). Additionally, minimally invasive approaches have been associated with a low complication rate, acceptable surgical completeness, and excellent results in terms of patient comfort, postoperative pain, and cosmetic outcomes (PUBMED:22696880). Furthermore, minimally invasive thyroidectomy has been applied not only to primary surgery but also to reoperative thyroid surgery, with evidence suggesting that it is safe, effective, and confers advantages such as superior cosmetic results and rapid wound healing (PUBMED:19040979). The technique has been described as technically demanding and requires strict patient selection, but it has been shown to be feasible and safe without increasing postoperative morbidity (PUBMED:17619345). The minimally invasive approach has been recommended for well-selected patients with thyroid disease, including benign follicular adenomas, low-risk papillary carcinomas, oxyphilic cell tumors, and Graves' disease, based on preoperative ultrasonography (PUBMED:16357563). Additionally, minimally invasive and endoscopic thyroidectomy have been successfully performed in patients with low- or intermediate-risk thyroid cancer, with no increased risk for complications and excellent cosmetic results (PUBMED:20212416). Overall, the literature supports the appropriateness of minimally invasive surgery for small differentiated thyroid carcinomas, provided that patients are carefully selected and the procedure is performed by experienced surgeons (PUBMED:18493913, PUBMED:27294036, PUBMED:33814255, PUBMED:17947942).
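To make the patient-selection logic in these series concrete, the sketch below (Python, illustrative only) encodes the inclusion criteria reported for minimally invasive thyroidectomy in the cited cohort (PUBMED:22696880): a nodule up to 20 mm on ultrasound, gland volume up to 25 ml, no locally advanced disease, and no previous neck surgery. The function name and structure are assumptions for illustration, not a clinical decision tool.

def eligible_for_minimally_invasive_thyroidectomy(nodule_mm, gland_volume_ml, locally_advanced, previous_neck_surgery):
    # Inclusion criteria as reported in the cited series (PUBMED:22696880):
    # nodule <= 20 mm by ultrasound, gland volume <= 25 ml,
    # no locally advanced disease, no previous neck surgery.
    return (nodule_mm <= 20
            and gland_volume_ml <= 25
            and not locally_advanced
            and not previous_neck_surgery)

# Example: a 15 mm suspicious nodule in a 20 ml gland with no adverse features
print(eligible_for_minimally_invasive_thyroidectomy(15, 20, False, False))  # True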
Instruction: Percutaneous endoscopic gastrostomy: the technique of choice? Abstracts: abstract_id: PUBMED:34499544 Endoscopic Guided Percutaneous Nephrolithotomy. Percutaneous nephrolithotomy (PCNL) is the treatment of choice for kidney stones >2 cm in size. Obtaining a nephrostomy tract to access the kidney is a key step in this procedure that ultimately determines the patient's outcome. We describe each step of the endoscopic guided approach to PCNL as performed at the University of California, Irvine. Each step of the endoscopic guided PCNL is described and depicted in the video. Surgical steps depicted in the video are from various recorded PCNL procedures from our institution. This guide outlines the indications, preoperative preparation, patient positioning, surgical steps, and postoperative care for patients undergoing PCNL using the endoscopic guided approach. Direct observation of nephrostomy puncture, tract dilation using the balloon, and passing of the Amplatz sheath allow for proper access to the kidney and lower risk of injury. By following this systematic module of endoscopic guided PCNL, a nephrostomy tract can be created and accessed under direct observation, facilitating the routine steps of PCNL. abstract_id: PUBMED:26519811 Percutaneous endoscopic sigmoidopexy for sigmoid volvulus: A case report. Introduction: Sigmoid volvulus often recurs and it is controversial whether preventive surgery should be performed in recurrent cases, especially in elderly and high-risk cases. Herein, we report a case of successful endoscopic sigmoidopexy using fixation to the abdominal wall. Case Presentation: The patient was an 86-year-old woman with multiple system atrophy, cerebral infarction, and disuse syndrome. She was admitted to our hospital with a recurrent sigmoid volvulus. Since surgery was considered high-risk, percutaneous endoscopic sigmoidopexy with fixation to the abdominal wall was indicated. Discussion: Percutaneous endoscopic sigmoidopexy was performed for this high-risk case with recurrent sigmoid volvulus. This procedure is advantageous in that suture removal is not necessary because the fixation sutures are buried subcutaneously. Reviewing the relevant literature, we believe that this is the first case of percutaneous endoscopic sigmoidopexy using abdominal wall fixation with buried sutures. Conclusion: Although further experience is necessary, percutaneous endoscopic sigmoidopexy may be an acceptable treatment for recurrent sigmoid volvulus in high-risk patients. abstract_id: PUBMED:30002773 Endoscopic drainage combined with percutaneous drainage in treatment of walled-off pancreatic necrosis - a single-center experience. Introduction: In the last three decades we have been observing the development of minimally invasive walled-off pancreatic necrosis (WOPN) treatment techniques. The choice of access to the necrosis and technique of treatment depends not only on the position and spread of necrosis, but in the first place on the experience of the medical center. Aim: To assess the effectiveness and safety of combined endoscopic and percutaneous drainage of WOPN. Material And Methods: We performed a retrospective analysis of 64 consecutive patients with symptomatic WOPN, who underwent endoscopic treatment in our department between 2011 and 2013. Results: Additional percutaneous drainage was executed during endoscopic treatment in 20/64 (31.25%) patients. Complications of treatment occurred in 4/20 (20%) patients.
All these complications were related to endoscopic treatment. No complications related to percutaneous drainage were noted. There were no deaths. Therapeutic success was achieved in all 20 patients. No patients required surgery. The average time of endoscopic drainage was 41.4 (11-173) days. The mean number of endoscopic procedures was 4.2 (2-12). The average time of percutaneous drainage was 11.3 (5-20) days. The median time of follow-up was 54 (48-64) months. During the observation the recurrence of WOPN was noted in 2/20 (10%) patients. Long-term success of combined drainage was achieved in 18/20 (90%) patients. Conclusions: In selected patients with symptomatic WOPN, combined endoscopic and percutaneous drainage enables a high success rate with a low procedure-related complication rate. abstract_id: PUBMED:35719902 Percutaneous endoscopic gastrostomy and jejunostomy: Indications and techniques. Nutritional support is essential in patients who have a limited capability to maintain their body weight. Therefore, oral feeding is the main approach for such patients. When physiological nutrition is not possible, positioning of a nasogastric, nasojejunal tube, or other percutaneous devices may be feasible alternatives. Creating a percutaneous endoscopic gastrostomy (PEG) is a suitable option to be evaluated for patients that need nutritional support for more than 4 wk. Many diseases require nutritional support by PEG, with neurological, oncological, and catabolic diseases being the most common. PEG can be performed endoscopically by various techniques, radiologically or surgically, with different outcomes and related adverse events (AEs). Moreover, some patients that need a PEG placement are fragile and are unable to express their will or sign a written informed consent. These conditions highlight many ethical problems that become difficult to manage as treatment progresses. The aim of this manuscript is to review all current endoscopic techniques for percutaneous access, their indications, postprocedural follow-up, and AEs. abstract_id: PUBMED:26884690 Percutaneous endoscopic gastrostomy in children: a single center experience. Aim: The aim of this study was to evaluate the demographic data and complication rates in children who had undergone percutaneous endoscopic gastrostomy in a three-year period in our Division of Pediatric Gastroenterology and to interrogate parental satisfaction. Material And Methods: The demographic data, complications and follow-up findings of the patients who had undergone percutaneous endoscopic gastrostomy between March 2011 and March 2014 were examined retrospectively using medical files. Results: Forty-seven percutaneous endoscopic gastrostomy and percutaneous endoscopic gastrostomy-related procedures were performed in 34 children during a three-year period. The median age of the patients was 2.25 years (3 months-16 years, first and third quartiles=1.0-6.0) and the mean body weight was 13.07±8.6 kg (3 kg-47 kg). Before the percutaneous endoscopic gastrostomy procedure, the mean weight z score was -2.26±1.2 (-5-0) and the mean height z score was -2.25±0.96 (-3.85-0.98). The follow-up mean height and weight Z scores at the 12th month after the percutaneous endoscopic gastrostomy procedure could be reached in 24 patients. A significant increase in the mean weight Z score from -2.41 to -1.07 (p=0.000) and in the mean height Z score from -2.29 to -1.99 (p=0.000) was found one year after the percutaneous endoscopic gastrostomy catheter was placed in these 24 patients.
Patients with neurological and metabolic diseases constituted the majority (64.7% and 26.5% respectively). Peritoneal leakage of food was detected in one patient and local stoma infections were detected in three patients after the procedure. During the follow up period, "Buried bumper syndrome" was observed in one patient. Following percutaneous endoscopic gastrostomy, the number of patients using anti-reflux medication increased from 16 (47.1%) to 18 (52.9%) (p=0.62). One patient with cerebral palsy who had aspiration pneumonia after percutaneous endoscopic gastrostomy insertion had undergone Nissen fundoplication. Percutaneous endoscopic gastrostomy tube was removed in a patient. The parents had positive views related with percutaneous endoscopic gastrostomy after the procedure. Conclusions: Percutaneous endoscopic gastrostomy is a substantially successful and reliable method in infants as well as in children and adolescents. The parents had positive views related with percutaneous endoscopic gastrostomy after the procedure. abstract_id: PUBMED:27890759 Transforaminal Percutaneous Endoscopic Lumbar Diskectomy with Percutaneous Epidural Neuroplasty in Lumbar Disk Herniation: Technical Note. Objective: Percutaneous endoscopic lumbar diskectomy and percutaneous epidural neuroplasty is a commonly used minimally invasive spinal surgery in lumbar disk herniation. But to our knowledge, there is no report about the results of percutaneous endoscopic lumbar diskectomy with percutaneous epidural neuroplasty. We did this study to evaluate the effect of percutaneous endoscopic lumbar diskectomy with percutaneous epidural neuroplasty in lumbar disk herniation. Methods: We report a series of cases about techniques and results of percutaneous endoscopic lumbar diskectomy with percutaneous epidural neuroplasty in lumbar disk herniation, as well as introduce technical procedures and advantages. Results: Seven patients improved axial back pain with percutaneous endoscopic lumbar diskectomy with percutaneous epidural neuroplasty and had no complications. The percutaneous endoscopic lumbar diskectomy with percutaneous epidural neuroplasty has many advantages with removing lumbar disk herniation in procedures. Conclusions: Percutaneous endoscopic lumbar diskectomy with percutaneous epidural neuroplasty is a safe and effective surgical method in lumbar disk herniation. abstract_id: PUBMED:38344502 Retrograde Migration of a Percutaneous Endoscopic Gastro-Jejunal Tube Into the Esophagus. Percutaneous endoscopic gastrostomy (PEG) and percutaneous endoscopic gastro-jejunal (PEG-J) tube placement are both common procedures regularly performed on patients requiring nutritional support. These procedures may be complicated by infection, hemorrhage, fistulization, or tube migration. We present an extremely rare case of a patient with a PEG-J tube that migrated into the esophagus. abstract_id: PUBMED:27785174 Percutaneous Endoscopic Gastrostomy: Mortality and Risk Factors for Survival. Background: The present study evaluated long-term risk factors for survival in patients who have undergone Percutaneous endoscopic Gastrostomy, as well as morbidity and mortality rates. Methods: The retrospective study included 44 patients who underwent placement of a percutaneous endoscopic gastrostomy tube at various departments at Dicle University Medical Faculty between April 2008-September 2010. Results: The study evaluated 23 women (52.3%) and 21 men (47.7%), with a median age of 50 ± 20 (17 - 87) years. 
Median time for percutaneous endoscopic gastrostomy placement was 23 ± 8.3 (5 - 45) minutes per patient. Total morbidity was 15.9%, including wound infection (4), tube occlusion (1), peristomal leakage (1), and abdominal wall bleeding (1). Short-term complications were not associated with albumin level (P = 0.312). The median hospital stay was 49.34 ± 60.99 (1 - 314) days. The mean follow-up period was 13.07 ± 13.12 (1 - 41) months. An above-normal albumin level was associated with improved survival (P = 0.024). Mortality occurred in 18 (40.9%) patients during the follow-up. Conclusions: Percutaneous endoscopic gastrostomy is both safe and effective in that it does not require a surgical operation and it can be performed under surface anesthesia. The serum albumin level in patients who have undergone percutaneous endoscopic gastrostomy is an effective factor for survival. abstract_id: PUBMED:36248364 Percutaneous endoscopic drainage for acute long segment epidural abscess following endoscopic lumbar discectomy: A case report. Introduction: Acute epidural abscess after percutaneous endoscopic lumbar discectomy is a rare but grievous complication. When faced with a long-segment epidural abscess, open surgery has traditionally been performed, which can lead to huge surgical trauma and unpredictable complications. For this reason, surgeons around the world are constantly looking for more minimally invasive and effective surgical methods. Patient Concerns: Our patient was a 32-year-old woman who had undergone percutaneous endoscopic interlaminar discectomy for L5/S1 lumbar disc herniation one week earlier. She returned to our institution with a fever and lower back pain. Diagnoses: Magnetic resonance imaging revealed a long segment epidural abscess accompanied by a paravertebral abscess, and Staphylococcus aureus was detected in a bacterial culture of pyogenic fluids extracted from the paravertebral abscess. Treatments: We performed percutaneous endoscopic drainage (PED) for the epidural abscess, followed by long-term sensitive antibiotic treatment after surgery. Outcomes: Immediate pain relief was achieved and the inflammatory reaction subsided after 4 weeks of antibiotic therapy. Re-examination of the lumbar spine MRI after 1 month showed that the epidural abscess had disappeared completely. Conclusion: Percutaneous endoscopy allowed us to approach the epidural abscess directly, enabling the immediate drainage of the abscess with minimal trauma to the patient. The good results obtained show that percutaneous endoscopic drainage is a reliable way to treat a long-segment epidural abscess. abstract_id: PUBMED:36157371 Percutaneous direct endoscopic pancreatic necrosectomy. Approximately 10%-20% of the cases of acute pancreatitis have acute necrotizing pancreatitis. The infection of pancreatic necrosis is typically associated with a prolonged course and poor prognosis. The multidisciplinary, minimally invasive "step-up" approach is the cornerstone of the management of infected pancreatic necrosis (IPN). Endosonography-guided transmural drainage and debridement is the preferred and minimally invasive technique for those with IPN. However, it is technically not feasible in patients with early pancreatic/peripancreatic fluid collections (PFC) (< 2-4 wk) where the wall has not formed; in PFC in paracolic gutters/pelvis; or in walled-off pancreatic necrosis (WOPN) distant from the stomach/duodenum. Percutaneous drainage of these infected PFC or WOPN provides rapid infection control and patient stabilization.
In a subset of patients where sepsis persists and necrosectomy is needed, the sinus drain tract between WOPN and skin-established after percutaneous drainage or surgical necrosectomy drain, can be used for percutaneous direct endoscopic necrosectomy (PDEN). There have been technical advances in PDEN over the last two decades. An esophageal fully covered self-expandable metal stent, like the lumen-apposing metal stent used in transmural direct endoscopic necrosectomy, keeps the drainage tract patent and allows easy and multiple passes of the flexible endoscope while performing PDEN. There are several advantages to the PDEN procedure. In expert hands, PDEN appears to be an effective, safe, and minimally invasive adjunct to the management of IPN and may particularly be considered when a conventional drain is in situ by virtue of previous percutaneous or surgical intervention. In this current review, we summarize the indications, techniques, advantages, and disadvantages of PDEN. In addition, we describe two cases of PDEN in distinct clinical situations, followed by a review of the most recent literature. Answer: Percutaneous endoscopic gastrostomy (PEG) is a technique widely used for providing nutritional support to patients who are unable to maintain adequate nutrition through oral intake. The procedure involves the placement of a tube through the abdominal wall into the stomach under endoscopic guidance, allowing for direct feeding into the gastrointestinal tract. PEG is considered a suitable option for patients requiring nutritional support for more than 4 weeks and is commonly indicated for individuals with neurological, oncological, and catabolic diseases (PUBMED:35719902). PEG can be performed using various endoscopic, radiological, or surgical techniques, each with different outcomes and associated adverse events (AEs). The procedure is generally safe and effective, and it can be performed under surface anesthesia, which is advantageous as it does not require a surgical operation (PUBMED:27785174). In children, PEG has been shown to be a successful and reliable method, with significant improvements in weight and height Z scores observed one year after the procedure. Parents also reported positive views regarding PEG after the procedure (PUBMED:26884690). However, PEG is not without risks. Complications can include infection, hemorrhage, fistulization, or tube migration, as seen in a rare case where a PEG-J tube migrated into the esophagus (PUBMED:38344502). The serum albumin level has been identified as an effective factor for survival in patients who have undergone PEG, with higher levels associated with better outcomes (PUBMED:27785174). In conclusion, PEG is a widely accepted technique for providing long-term enteral nutrition to patients who cannot feed orally. It is considered the technique of choice for many clinicians due to its minimally invasive nature, safety profile, and effectiveness in improving nutritional status. However, patient selection, careful procedural execution, and monitoring for potential complications are essential to ensure the best outcomes.
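The pediatric outcomes cited above are reported as weight and height Z scores. For orientation only, a minimal sketch of that calculation follows; the reference mean and standard deviation used here are hypothetical placeholders, since real growth Z scores are derived from age- and sex-specific WHO or CDC reference data (typically via the LMS method).

def z_score(observed, reference_mean, reference_sd):
    # Anthropometric Z score: how many reference SDs the observed value
    # lies above (+) or below (-) the reference mean for age and sex.
    return (observed - reference_mean) / reference_sd

# Example with invented reference values: 10.2 kg against 12.5 +/- 1.4 kg
print(round(z_score(10.2, 12.5, 1.4), 2))  # -1.64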
Instruction: Surgical adverse outcomes and patients' evaluation of quality of care: inherent risk or reduced quality of care? Abstracts: abstract_id: PUBMED:18322130 Surgical adverse outcomes and patients' evaluation of quality of care: inherent risk or reduced quality of care? Background: Previous research has shown that sicker patients are less satisfied with their healthcare, but specific effects of adverse health outcomes have not been investigated. The present study aimed to assess whether patients who experience adverse outcomes, in hospital or after discharge, differ in their evaluation of quality of care compared with patients without adverse outcomes. Method: In-hospital adverse outcomes were prospectively recorded by surgeons and surgical residents as part of routine care. Four weeks after discharge, patients were interviewed by telephone about the occurrence of post-discharge adverse outcomes, and their overall evaluation of quality of hospital care and specific suggestions for improvements in the healthcare provided. Results: Of 2145 surgical patients admitted to the Leiden University Medical Center in 2003, 1876 (88%) agreed to be interviewed. Overall evaluation was less favourable by patients who experienced post-discharge adverse outcomes only (average 19% lower). These patients were also more often dissatisfied (OR 2.02, 95% CI 1.24 to 3.31) than patients without adverse outcomes, and they more often suggested that improvements were needed in medical care (OR 2.07, 1.45 to 2.95) and that patients were discharged too early (OR 3.26, 1.72 to 6.20). The effect of in-hospital adverse outcomes alone was not statistically significant. Patients with both in-hospital and post-discharge adverse outcomes also found the quality of care to be lower (on average 33% lower) than patients without adverse outcomes. Conclusions: Post-discharge adverse outcomes negatively influence patients' overall evaluation of quality of care and are perceived as being discharged too early, suggesting that patients need better information at discharge. abstract_id: PUBMED:18055886 Surgical adverse outcomes and patients' evaluation of quality of care: inherent risk or reduced quality of care? Background: Previous research has shown that sicker patients are less satisfied with their healthcare, but specific effects of adverse health outcomes have not been investigated. The present study aimed to assess whether patients who experience adverse outcomes, in hospital or after discharge, differ in their evaluation of quality of care compared with patients without adverse outcomes. Method: In hospital adverse outcomes were prospectively recorded by surgeons and surgical residents as part of routine care. Four weeks after discharge, patients were interviewed by telephone about the occurrence of post-discharge adverse outcomes, and their overall evaluation of quality of hospital care and specific suggestions for improvements in the healthcare provided. Results: Of 2145 surgical patients admitted to the Leiden University Medical Center in 2003, 1876 (88%) agreed to be interviewed. Overall evaluation was less favourable by patients who experienced post-discharge adverse outcomes only (average 19% lower). These patients were also more often dissatisfied (OR 2.02, 95% CI 1.24 to 3.31) than patients without adverse outcomes, and they more often suggested that improvements were needed in medical care (OR 2.07, 1.45 to 2.95) and that patients were discharged too early (OR 3.26, 1.72 to 6.20). 
The effect of in hospital adverse outcomes alone was not statistically significant. Patients with both in hospital and post-discharge adverse outcomes also found the quality of care to be lower (on average 33% lower) than patients without adverse outcomes. Conclusions: Post-discharge adverse outcomes negatively influence patients' overall evaluation of quality of care and are perceived as being discharged too early, suggesting that patients need better information at discharge. abstract_id: PUBMED:17120947 The holy Grail of surgical quality Improvement: process measures or risk-adjusted outcomes? Quality of care in surgery has garnered increased attention. Focus on the structure, processes of care, and use of risk-adjusted outcomes has offered different possible solutions. Volume-outcome studies have consistently demonstrated relationships between higher surgeon and hospital volume and favorable outcomes. The policy implications for quality improvement remain unclear. Recent efforts have focused on the use of risk-adjusted outcomes, such as the American College of Surgeons National Surgical Quality Improvement Program, to drive quality improvement. Conversely, large efforts, mandated by the Center for Medicare and Medicaid Services, have focused on process measures, such as perioperative complication prevention. For the future of surgical quality improvement, the combination of process measures and risk-adjusted outcomes are essential. It is only through the combined use of improved understanding of the relationship of processes of care and outcomes that we will make surgical care safer and improve quality. abstract_id: PUBMED:26380183 Improving lung cancer outcomes by improving the quality of surgical care. Surgical resection remains the most important curative treatment modality for non-small cell lung cancer, but variations in short- and long-term surgical outcomes jeopardize the benefit of surgery for certain patients, operated on by certain types of surgeons, at certain types of institutions. We discuss current understanding of surgical quality measures, and their role in promoting understanding of the causes of outcome disparities after lung cancer surgery. We also discuss the use of minimally invasive surgical resection approaches to expand the playing field for surgery in lung cancer care, and end with a discussion of the future role of surgery in a world of alternative treatment possibilities. abstract_id: PUBMED:33504369 Improving nursing documentation for surgical patients in a referral hospital in Freetown, Sierra Leone: protocol for assessing feasibility of a pilot multifaceted quality improvement hybrid type project. Background: There is an urgent need to improve quality of care to reduce avoidable mortality and morbidity from surgical diseases in low- and middle-income countries. Currently, there is a lack of knowledge about how evidence-based health system strengthening interventions can be implemented effectively to improve quality of care in these settings. To address this gap, we have developed a multifaceted quality improvement intervention to improve nursing documentation in a low-income country hospital setting. The aim of this pilot project is to test the intervention within the surgical department of a national referral hospital in Freetown, Sierra Leone. 
Methods: This project was co-developed and co-designed by in-country stakeholders and UK-based researchers, after a multiple-methodology assessment of needs (qualitative, quantitative), guided by a participatory 'Theory of Change' process. It has a mixed-method, quasi-experimental evaluation design underpinned by implementation and improvement science theoretical approaches. It consists of three distinct phases-(1) pre-implementation(project set up and review of hospital relevant policies and forms), (2) intervention implementation (awareness drive, training package, audit and feedback), and (3) evaluation of (a) the feasibility of delivering the intervention and capturing implementation and process outcomes, (b) the impact of implementation strategies on the adoption, integration, and uptake of the intervention using implementation outcomes, (c) the intervention's effectiveness For improving nursing in this pilot setting. Discussion: We seek to test whether it is possible to deliver and assess a set of theory-driven interventions to improve the quality of nursing documentation using quality improvement and implementation science methods and frameworks in a single facility in Sierra Leone. The results of this study will inform the design of a large-scale effectiveness-implementation study for improving nursing documentation practices for patients throughout hospitals in Sierra Leone. Trial Registration: Protocol version number 6, date: 24.12.2020, recruitment is planned to begin: January 2021, recruitment will be completed: December 2021. abstract_id: PUBMED:16151664 The case against volume as a measure of quality of surgical care. Healthcare purchasers, represented by the Leapfrog Group, have attempted to set standards for "quality" of surgical care that include a minimum volume for each of five major surgical procedures, with the assumption that higher volumes in surgery bring better outcomes. The VA National Surgical Quality Improvement Program (NSQIP) is a validated, outcome-based program that prospectively collects clinical data on all major surgical operations in the VA, and builds validated risk-adjustment models that generate, for each hospital and each surgical specialty within a hospital, risk-adjusted outcomes expressed as O/E (observed to expected) ratios for 30-day mortality and morbidity. The O/E ratio has been validated as a reliable comparative measure of the quality of surgical care. Unlike retrospective studies that are based on administrative databases, NSQIP studies have failed to demonstrate a direct relationship between volume and risk-adjusted outcomes of surgery across various specialties. These studies have emphasized that the quality of systems of care was more important than volume in determining the overall quality of surgical care at an institution. High-volume hospitals could still deliver poor care in as much as low-volume hospitals could deliver good care. NSQIP studies have also underscored the major limitations of claims data and administrative databases in the provision of adequate risk-adjustment models that are crucial for volume-outcome studies. Therefore, volume should not be substituted for prospectively monitored and properly risk-adjusted outcomes as a comparative measure of the quality of surgical care. abstract_id: PUBMED:37223232 Patient-Reported Outcomes and Surgical Quality. Delivering high-quality surgical care requires knowing how best to define and measure quality in surgery. 
Patient-reported outcomes (PROs) enable surgeons, health care systems, and payers to understand meaningful health outcomes from the patient's perspective and can be measured using patient-reported outcome measures (PROMs). As a result, there is much interest in using PROMs in routine surgical care, to guide quality improvement and to inform reimbursement pay structures. This chapter defines PROs and PROMs, differentiates PROMs from other quality measures such as patient-reported experience measures, describes PROMs in the context of routine clinical care, and provides an overview of interpreting PROM data. This chapter also describes how PROMs may be applied to quality improvement and value-based reimbursement in surgery. abstract_id: PUBMED:32882172 Rural Surgical Quality: Policy and Practice. Several national studies have demonstrated that rural hospitals successfully deliver high-quality care. Data at the national, regional, institutional, and individual practitioner levels all contribute to understanding of surgical outcomes in the rural setting. Quality metrics should be interpreted within the context of the rural community and outcomes analyzed with relevant risk adjustment for patient factors. abstract_id: PUBMED:24851748 ACS national surgical quality improvement program: targeting quality improvement in Canadian pediatric surgery. Purpose: The pediatric NSQIP program is in the early stages of facilitated surgical quality improvement for children. The objective of this study is to describe the initial experience of the first Canadian Children's Hospital participant in this program. Method: Randomly sampled surgical cases from the "included" case list were abstracted into the ACS-NSQIP database. These surgical procedure-specific data incorporate patient risk factors, intraoperative details, and 30 day outcomes to generate annual reports which provide hierarchical ranking of participant hospitals according to their risk-adjusted outcomes. Results: Our first risk-adjusted report identified local improvement opportunities based on our rates of surgical site infection (SSI) and urinary tract infection (UTI). We developed and implemented an engagement strategy for our stakeholders, performed literature reviews to identify practice variation, and conducted case control studies to understand local risk factors for our SSI/UTI occurrences. We have begun quality improvement activities targeting reduction in rates of SSI and UTI with our general surgery division and ward nurses, respectively. Conclusions: The NSQIP pediatric program provides high quality outcome data that can be used in support of quality improvement. This process requires multidisciplinary teamwork, systematic stakeholder engagement, clinical research methods and process improvement through engagement and culture change. abstract_id: PUBMED:25006676 Quality of pharmaceutical care in surgical patients. Background: Surgical patients are at risk for preventable adverse drug events (ADEs) during hospitalization. Usually, preventable ADEs are measured as an outcome parameter of quality of pharmaceutical care. However, process measures such as QIs are more efficient to assess the quality of care and provide more information about potential quality improvements. Objective: To assess the quality of pharmaceutical care of medication-related processes in surgical wards with quality indicators, in order to detect targets for quality improvements. 
Methods: For this observational cohort study, quality indicators were composed, validated, tested, and applied on a surgical cohort. Three surgical wards of an academic hospital in the Netherlands (Academic Medical Centre, Amsterdam) participated. Consecutive elective surgical patients with a hospital stay longer than 48 hours were included from April until June 2009. To assess the quality of pharmaceutical care, the set of quality indicators was applied to 252 medical records of surgical patients. Results: Thirty-four quality indicators were composed and tested on acceptability and content- and face-validity. The selected 28 candidate quality indicators were tested for feasibility and 'sensitivity to change'. This resulted in a final set of 27 quality indicators, of which inter-rater agreements were calculated (kappa 0.92 for eligibility, 0.74 for pass-rate). The quality of pharmaceutical care was assessed in 252 surgical patients. Nearly half of the surgical patients passed the quality indicators for pharmaceutical care (overall pass rate 49.8%). Improvements should be predominantly targeted to medication care related processes in surgical patients with gastro-intestinal problems (domain pass rate 29.4%). Conclusions: This quality indicator set can be used to measure quality of pharmaceutical care and detect targets for quality improvements. With these results medication safety in surgical patients can be enhanced. Answer: The relationship between surgical adverse outcomes and patients' evaluation of quality of care has been investigated to determine whether such outcomes are perceived as inherent risks of surgery or as indicators of reduced quality of care. A study conducted at the Leiden University Medical Center in 2003 found that patients who experienced post-discharge adverse outcomes evaluated the quality of care less favorably, with an average 19% lower overall evaluation compared to patients without adverse outcomes. These patients were also more likely to be dissatisfied, suggest that improvements were needed in medical care, and feel that they were discharged too early. However, the effect of in-hospital adverse outcomes alone on the evaluation of care was not statistically significant. Patients with both in-hospital and post-discharge adverse outcomes rated the quality of care even lower, on average 33% lower than those without adverse outcomes (PUBMED:18322130; PUBMED:18055886). The findings suggest that post-discharge adverse outcomes negatively influence patients' overall evaluation of quality of care and are associated with perceptions of being discharged too early, indicating a need for better information at discharge (PUBMED:18322130; PUBMED:18055886). This implies that adverse outcomes are not merely seen as inherent risks but can reflect perceived deficiencies in the quality of care provided. Quality improvement in surgical care is multifaceted, involving process measures, risk-adjusted outcomes, and patient-reported outcomes (PROs). The American College of Surgeons National Surgical Quality Improvement Program and other initiatives focus on using risk-adjusted outcomes to drive quality improvement, while process measures, such as perioperative complication prevention, are also emphasized (PUBMED:17120947; PUBMED:26380183). Additionally, PROs are increasingly recognized as important for understanding health outcomes from the patient's perspective and for guiding quality improvement and reimbursement structures (PUBMED:37223232). 
In summary, surgical adverse outcomes can lead to a reduced evaluation of quality of care by patients, who may not view these outcomes as simply inherent risks. Quality improvement efforts in surgery should consider both process measures and risk-adjusted outcomes, as well as patient perspectives, to enhance the safety and quality of surgical care.
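The dissatisfaction estimates quoted above (for example, OR 2.02, 95% CI 1.24 to 3.31) are odds ratios with confidence intervals. The sketch below shows, for orientation only, how an unadjusted odds ratio and its 95% Wald interval are computed from a 2x2 table; the cell counts are invented, and the published figures come from the study's own analysis, which may include covariate adjustment.

import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    # a, b: dissatisfied / satisfied among patients with adverse outcomes
    # c, d: dissatisfied / satisfied among patients without adverse outcomes
    or_value = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_value) - z * se_log_or)
    upper = math.exp(math.log(or_value) + z * se_log_or)
    return or_value, lower, upper

# Hypothetical counts chosen to give an odds ratio of about 2.0
print(odds_ratio_ci(30, 170, 80, 920))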
Instruction: Can we apply the National Cholesterol Education Program Adult Treatment Panel definition of the metabolic syndrome to Asians? Abstracts: abstract_id: PUBMED:15111542 Can we apply the National Cholesterol Education Program Adult Treatment Panel definition of the metabolic syndrome to Asians? Objective: Limited information is available about the metabolic syndrome in Asians. Furthermore, the definition of central obesity using waist circumference may not be appropriate for Asians. The objectives of this study were to determine the optimal waist circumference for diagnosing central obesity in Asians and to estimate the prevalence of the metabolic syndrome in an Asian population. Research Design And Methods: We used data from the 1998 Singapore National Health Survey, a cross-sectional survey involving 4,723 men and women of Chinese, Malay, and Asian-Indian ethnicity aged 18-69 years. Receiver operating characteristic analysis suggested that waist circumference &gt;80 cm in women and &gt;90 cm in men was a more appropriate definition of central obesity in this population. The prevalence of the metabolic syndrome was then determined using the National Cholesterol Education Program Adult Treatment Panel III (NCEP ATP III) criteria with and without the modified waist circumference criteria. Results: In Asians, decreasing waist circumference increased the crude prevalence of the metabolic syndrome from 12.2 to 17.9%. Using the modified Asian criteria, the prevalence of the metabolic syndrome increased from 2.9% in those aged 18-30 years to 31.0% in those aged 60-69 years. It was more common in men (prevalence 20.9% in men versus 15.5% in women; P &lt; 0.001) and Asian Indians (prevalence 28.8% in Asian-Indians, 24.2% in Malays, and 14.8% in Chinese; P &lt; 0.001). Conclusions: NCEP ATP III criteria, applied to an Asian population, will underestimate the population at risk. With a lower waist circumference cutoff, the prevalence of the metabolic syndrome is comparable to that in Western populations. Ethnic differences are likely to exist between populations across Asia. abstract_id: PUBMED:11751533 Impact of the third cholesterol report from the adult treatment panel of the national cholesterol education program on the clinical laboratory. Background: The US National Cholesterol Education Program has recently released the third report of the Adult Treatment Panel on Detection, Evaluation, and Treatment of High Blood Cholesterol in Adults. Incorporating new evidence and more consistent with other international intervention programs, these more complex guidelines will considerably expand indications for treatment. The implications for clinical laboratories are summarized in this report. Content: LDL-cholesterol (LDL-C) remains the major focus for classification and treatment, whereas diabetes, the presence of multiple risk factors, including the metabolic syndrome, and increased triglycerides (TGs), will now require more intensive management. For screening, a fasting lipoprotein profile is recommended, adding LDL-C and TGs to the previous measurements of total cholesterol and HDL-cholesterol (HDL-C). Lowering the cutpoints defining optimal LDL-C [100 mg/dL (2.58 mmol/L)] and normal TGs [150 mg/dL (1.70 mmol/L)] and raising the cutpoint for low HDL-C to 40 mg/dL (1.03 mmol/L) will select more patients for treatment. A new marker, non-HDL-C, becomes a secondary target in treating high TGs. 
Conclusions: Laboratories will need to adjust reporting formats and interpretations and can expect more requests for tests to characterize secondary causes of dyslipidemia, e.g., diabetes, and for the so-called "emerging risk factors", e.g., lipoprotein(a), homocysteine, and C-reactive protein. abstract_id: PUBMED:12572622 The National Cholesterol Education Program Adult Treatment Panel III guidelines. Coronary heart disease (CHD) remains the leading cause of death in the United States with more than 40% of all deaths each year directly attributed to the disease. Current evidence suggests that early identification and aggressive modification of risk factors offer the most promising approach to reducing the burden of CHD. Dyslipidemia has been identified as the primary risk factor leading to the development of CHD. It is estimated that nearly 65 million Americans require some form of lipid-modification therapy. The National Cholesterol Education Program Adult Treatment Panel III (NCEP ATP III) set of guidelines released in May 2001 provides physicians with evidence-based recommendations on the classification, diagnosis, and treatment of lipid disorders. New features of the guidelines include a scoring system for calculating CHD risk, as well as the identification of CHD risk equivalents, lower treatment target goals, and an emphasis on conditions conferring a higher risk for CHD, such as the metabolic syndrome. The ATP III emphasis on risk assessment substantially increases the number of patients considered at risk for CHD and will expand the number eligible for lifestyle and drug interventions. This article highlights the new recommendations and reviews the impact of ATP III on osteopathic physicians. abstract_id: PUBMED:14613351 The National Cholesterol Education Program Adult Treatment Panel III guidelines. Coronary heart disease (CHD) persists as a major cause of morbidity and mortality in the United States, with more than 40% of all deaths each year directly attributed to the disease. Dyslipidemia is recognized as a major risk factor for the development and progression of CHD, with clinical trials clearly demonstrating the public health and economic benefits of favorable cholesterol modification. As a result of this evidence, the National Cholesterol Education Program (NCEP) has developed guidelines for the detection, evaluation, and treatment of high blood cholesterol in adults. The most recent of the NCEP recommendations, the Adult Treatment Panel III (ATP III) guidelines, were released in May 2001 and build on the earlier editions and reiterate the importance of low-density lipoprotein cholesterol (LDL-C) reduction to modify CHD risk. New features of the guidelines include the identification of CHD risk equivalents; lower treatment target goals; an emphasis on conditions conferring a higher risk for CHD, such as the metabolic syndrome; and a scoring system for calculating CHD risk. The ATP III emphasis on risk assessment will result in a substantial increase in the number of patients considered at risk for CHD and will expand the number eligible for lifestyle and drug intervention. abstract_id: PUBMED:12708635 New features of the National Cholesterol Education Program Adult Treatment Panel III lipid-lowering guidelines. The National Cholesterol Education Program (NCEP) Adult Treatment Panel III (ATP III) guidelines for lipid-lowering therapy to reduce coronary heart disease (CHD) risk contain a number of features that distinguish them from the previous ATP guidelines.
These new features include modifications in lipid/lipoprotein levels considered optimal, abnormal, or reflective of risk; increased focus on primary prevention through use of Framingham risk scoring to define risk in persons with multiple lipid/nonlipid risk factors; and increased focus on the association of the metabolic syndrome with CHD risk. The introduction of the category of CHD risk equivalents-including persons with atherosclerotic disease, diabetes, or 10-year CHD risk &gt; 20% based on Framingham scoring-results in an increase over previous guidelines in the proportion of patients categorized as being at high risk and therefore eligible for more intensive low-density lipoprotein cholesterol (LDL-C)-lowering therapy. Use of the new secondary therapeutic target of non-high-density lipoprotein cholesterol should improve management of lipid risk factors in patients who have elevated triglyceride levels after LDL-C goals have been met. These new features of the NCEP ATP III guidelines should improve identification and treatment of patients with dyslipidemias associated with CHD risk. abstract_id: PUBMED:12240701 New therapeutic options in the National Cholesterol Education Program Adult Treatment Panel III. Coronary heart disease (CHD) is a common, costly, and undertreated disorder in the United States, and dyslipidemia is one of its most important modifiable risk factors. Recently, the National Cholesterol Education Program (NCEP) Adult Treatment Panel III (ATP III) published updated guidelines for the treatment of lipid disorders, greatly expanding the number of patients eligible for therapy. In the new recommendations, several significant changes have been made in the identification and management of patients at risk for CHD. Although ATP III maintains that low-density lipoprotein (LDL) cholesterol should be the primary target of lipid-lowering therapy, it identifies non-high-density lipoprotein (HDL) cholesterol (total cholesterol minus HDL cholesterol) as a secondary target in patients with elevated triglycerides. Patients with &gt; or = 2 CHD risk factors should now be assessed for 10-year absolute CHD risk based on the Framingham Point Scale to identify those who require more aggressive treatment. The guidelines also designate a new category, CHD risk equivalent, which recognizes that certain patients have the same high risk as those with established CHD. Diabetes is now identified as a CHD risk equivalent, as are other forms of atherosclerotic disease and multiple risk factors comprising a CHD 10-year risk of &gt; 20%. New lipoprotein classifications are given, and increased emphasis is placed on the metabolic syndrome, a constellation of metabolic risk factors, as a marker for CHD risk. Since adherence poses a major challenge in the management of patients with or at risk for CHD, the new guidelines provide physicians with several strategies for increasing patient compliance. The new guidelines should help physicians better identify and manage patients at risk for CHD, help more patients reach their lipid goals, and thereby decrease cardiovascular morbidity and mortality. abstract_id: PUBMED:14524636 Update on the National Cholesterol Education Program Adult Treatment Panel III guidelines: getting to goal. Considerable data on the pathophysiology, epidemiology, and treatment of dyslipidemia-induced coronary heart disease (CHD) have accumulated in recent years. 
These data have been assessed and incorporated into the guidelines of the National Cholesterol Education Program Expert Panel on the Detection, Evaluation, and Treatment of High Blood Cholesterol in Adults (Adult Treatment Panel [ATP] III). A major focus of the new guidelines is the assessment of the near-term (i.e., 10-yr) risk of experiencing a CHD event and matching the intensity of treatment to this risk. Patients with diabetes and those with a greater than 20% 10-year risk of experiencing a CHD event have been elevated to the risk level of CHD equivalent. The ATP III guidelines also modify several lipid and lipoprotein classifications. A low-density lipoprotein cholesterol (LDL) level below 100 mg/dl is now considered optimum for all individuals. In addition, high-density lipoprotein cholesterol (HDL) and triglyceride cutoff points have been modified to reflect more accurately the risk associated with abnormalities in these lipoproteins. As with the previous guidelines, the primary target of therapy remains LDL. Therapeutic lifestyle changes consisting of diet, weight reduction, and increased physical activity should be included in all treatment regimens. Based on their potent LDL-lowering properties and their proven ability to decrease mortality in a variety of patient populations, statins are generally the first choice for pharmacologic therapy. A secondary target of therapy includes non-HDL goals for patients with high triglyceride levels and the metabolic syndrome, which is characterized by abdominal obesity, elevated triglyceride levels, low HDL levels, and insulin resistance. Management of these secondary targets includes weight reduction and increased physical activity, and treatment of the lipid and nonlipid risk factors. Overall, ATP III represents an aggressive approach to treating dyslipidemia, greatly extending the number of individuals who qualify for treatment. abstract_id: PUBMED:27706073 Prevalence of Metabolic Syndrome among Korean Adolescents According to the National Cholesterol Education Program, Adult Treatment Panel III and International Diabetes Federation. In both adults and children, metabolic syndrome (MetS) has been attributed to risk factors for type 2 diabetes and cardiovascular disease such as insulin resistance, abdominal obesity, hypertension, and dyslipidemia. This descriptive study aimed to compare the prevalence of MetS and diagnostic components according to the National Cholesterol Education Program, Adult Treatment Panel III (NCEP-ATP III) and International Diabetes Federation (IDF) in 2330 Korean adolescents (10-18 years), using data from the 2010-2012 Korea National Health and Nutrition Examination Survey-V. The NCEP-ATP III and IDF were used to diagnose MetS and yielded prevalence rates of 5.7% and 2.1%, respectively, with no sex-related differences. The most frequent MetS diagnostic components according to the NCEP-ATP III and IDF criteria were high triglyceride levels (21.2%) and low high-density lipoprotein cholesterol levels (13.6%), respectively; approximately 50.1% and 33.1% of adolescents had at least one MetS diagnostic component according to the respective criteria. Both overweight/obese male and female adolescents exhibited significantly increased prevalence rates of MetS and related diagnostic components, compared to normal-weight adolescents. In conclusion, the prevalence rates of MetS and diagnostic components differ according to the NCEP-ATP III and IDF criteria. 
Henceforth, efforts are needed to establish diagnostic criteria for Korean adolescents. abstract_id: PUBMED:19245513 Comparison between Turkish Cardiovascular Risk Platform and United States National Cholesterol Education Program Adult Treatment Panel III definitions of the metabolic syndrome in Turkish adults. The Turkish Cardiovascular Risk Platform (TCRP) calls for the diagnosis of the metabolic syndrome (MS) if insulin resistance, impaired fasting glucose, impaired glucose tolerance, or diabetes mellitus and ≥2 other established criteria are present. TCRP defines insulin resistance as a homeostasis model assessment >2.7. The aim of this cross-sectional study was to compare TCRP guidelines with the United States National Cholesterol Education Program Adult Treatment Panel III (NCEP) definition of MS in Turkish adults (N=1690). The age- and sex-adjusted prevalence of MS was 25% with the TCRP and 40% for the NCEP definition. Patients with MS identified by the NCEP definition but not by the TCRP definition had lower body mass index and less insulin resistance, but had a similarly adverse cardiovascular risk factor profile to those with TCRP-identified MS, with high blood pressure, waist circumference, triglycerides, and total cholesterol/high-density lipoprotein cholesterol ratio. Other national health organizations should avoid using homeostasis model assessment as a prerequisite for diagnosing MS. Modification of the NCEP definition would be more appropriate for ethnic groups with different body sizes. abstract_id: PUBMED:21845059 A pilot study on metabolic syndrome and its associated features among Qatari schoolchildren. Aim: This pilot study aimed to evaluate the individual features of the metabolic syndrome (MeS) and its frequency in Qatari schoolchildren aged 6-12 years. Background: MeS has a strong future risk for development of diabetes and cardiovascular diseases. Childhood obesity is increasing the likelihood of MeS in children. Methods: The associated features of MeS were assessed in 67 children. They were recruited from the outpatient pediatric clinic at Hamad Medical Corporation, Qatar. Height, weight, and waist circumference were measured and body mass index was calculated for each child. Fasting blood glucose, total cholesterol, high-density lipoprotein cholesterol (HDL-C), low-density lipoprotein cholesterol and triglycerides (TG) were measured. MeS was defined according to the National Cholesterol Education Program Adult Treatment Panel III (NCEP-III) which was modified by Cook with adjustment for fasting glucose to ≥5.6 mM according to recommendations from the American Diabetes Association. Results: The overall prevalence of MeS according to NCEP-III criteria was 3.0% in children aged 6-12 years. Overweight and obesity was 31.3% in children aged 6-12 years, according to the International Obesity Task Force criteria. The prevalence of MeS was 9.5% in overweight and obese subjects. Increased TG levels represented the most frequent abnormality (28.4%) in metabolic syndrome features in all subjects, followed by HDL-C (19.4%) in all subjects. Conclusion: Increased TG levels and low HDL-C were the most frequent components of this syndrome. This study showed a significant prevalence of MeS and associated features among overweight and obese children. The overall prevalence of MeS in Qatari children is in accordance with data from several other countries.
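As an aside before the answer: two of the quantities the ATP III abstracts above lean on are simple arithmetic, and a short sketch may make them concrete. The Python snippet below computes non-HDL cholesterol as total cholesterol minus HDL cholesterol and flags a CHD risk equivalent as established atherosclerotic disease, diabetes, or a 10-year Framingham risk above 20%. The function names, example values, and the assumption that the Framingham percentage is computed elsewhere are illustrative only and are not taken from the guideline text.

def non_hdl_cholesterol(total_cholesterol_mg_dl, hdl_mg_dl):
    # Non-HDL cholesterol as described in the abstracts: total cholesterol minus HDL cholesterol.
    return total_cholesterol_mg_dl - hdl_mg_dl

def is_chd_risk_equivalent(atherosclerotic_disease, diabetes, framingham_10yr_risk_pct):
    # CHD risk equivalent per the ATP III description above: established
    # atherosclerotic disease, diabetes, or a 10-year Framingham risk > 20%.
    # The Framingham percentage itself is assumed to be computed elsewhere.
    return atherosclerotic_disease or diabetes or framingham_10yr_risk_pct > 20.0

# Hypothetical values, for illustration only.
print(non_hdl_cholesterol(230, 40))             # 190 mg/dL
print(is_chd_risk_equivalent(False, True, 12))  # True, because of diabetes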
Answer: Based on the information provided in the abstracts, it appears that the National Cholesterol Education Program Adult Treatment Panel (NCEP ATP) III definition of the metabolic syndrome may not be entirely appropriate for Asian populations without modifications. The study using data from the 1998 Singapore National Health Survey suggested that the waist circumference criteria for diagnosing central obesity should be lower for Asians (>80 cm for women and >90 cm for men) compared to the original NCEP ATP III criteria (PUBMED:15111542). When these modified waist circumference criteria were used, the prevalence of the metabolic syndrome in the Asian population increased, indicating that the original NCEP ATP III criteria might underestimate the population at risk in Asians. The study concluded that with a lower waist circumference cutoff, the prevalence of the metabolic syndrome in Asians is comparable to that in Western populations, suggesting that ethnic differences exist and should be considered when applying the NCEP ATP III definition to different populations (PUBMED:15111542). Therefore, while the NCEP ATP III guidelines provide a framework for identifying and treating metabolic syndrome, it is important to consider regional and ethnic variations in body composition and risk factors. Adjustments to the criteria, such as the waist circumference cutoffs, may be necessary to accurately identify and manage metabolic syndrome in Asian populations.
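The waist-circumference point in this answer is easy to see in code. The sketch below counts metabolic syndrome components in the NCEP ATP III style (three or more of five components) with the waist cutoff left as a parameter, so the lower cutoffs cited above for Asian populations (>90 cm for men, >80 cm for women) can be swapped in. The specific thresholds for triglycerides, HDL-C, blood pressure, and fasting glucose are the commonly quoted adult ATP III values and should be read as assumptions here, since the abstracts above do not enumerate them.

from dataclasses import dataclass

@dataclass
class Measurements:
    sex: str                      # "M" or "F"
    waist_cm: float
    triglycerides_mg_dl: float
    hdl_mg_dl: float
    systolic_bp: float
    diastolic_bp: float
    fasting_glucose_mg_dl: float

def mets_component_count(m, waist_cutoff_m=102.0, waist_cutoff_f=88.0):
    # Commonly quoted adult NCEP ATP III thresholds (assumed, not taken from the
    # abstracts above); the waist cutoffs are parameters so that the lower
    # ethnicity-specific values discussed in the answer can be substituted.
    waist_cutoff = waist_cutoff_m if m.sex == "M" else waist_cutoff_f
    hdl_cutoff = 40.0 if m.sex == "M" else 50.0
    components = [
        m.waist_cm > waist_cutoff,
        m.triglycerides_mg_dl >= 150.0,
        m.hdl_mg_dl < hdl_cutoff,
        m.systolic_bp >= 130.0 or m.diastolic_bp >= 85.0,
        m.fasting_glucose_mg_dl >= 110.0,
    ]
    return sum(components)

def has_metabolic_syndrome(m, **waist_cutoffs):
    # ATP III-style rule: three or more of the five components.
    return mets_component_count(m, **waist_cutoffs) >= 3

# Hypothetical patient near the waist threshold.
patient = Measurements("M", 95.0, 160.0, 38.0, 128.0, 82.0, 105.0)
print(has_metabolic_syndrome(patient))                       # False with the 102 cm cutoff
print(has_metabolic_syndrome(patient, waist_cutoff_m=90.0))  # True with the 90 cm cutoff

Lowering the waist cutoff turns the same hypothetical patient from two components into three, which is exactly the mechanism by which the modified criteria raise the measured prevalence in Asian populations.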
Instruction: Outcomes of tracheostomy in the neonatal intensive care unit: is there an optimal time? Abstracts: abstract_id: PUBMED:30819350 Care of the Neonatal Intensive Care Unit Graduate after Discharge. Premature and critically ill term infants are often discharged from the neonatal intensive care unit (NICU) with ongoing medical problems, including respiratory problems; growth, nutrition and feeding problems; and neurologic injury. At discharge, they may also be dependent on technology such as supplemental oxygen, tracheostomy, mechanical ventilation, feeding tube, and monitors. Primary care physicians must have special knowledge and understanding of the medical complications of NICU graduates to coordinate post-discharge care. We examine the most common post-discharge medical problems in premature and critically ill term infants and inform the primary care provider about expected outcomes and possible new problems. abstract_id: PUBMED:33957547 Tracheostomy on infants born in the periviable period: Outcomes at discharge from the neonatal intensive care unit (NICU). Objectives: Advances in neonatal intensive care have allowed successful resuscitation of children born at the border of viability. However, there has been little change in the incidence of bronchopulmonary dysplasia (BPD) and anatomical upper airway obstruction which may require a tracheostomy in that group. The benefits of the procedure are accompanied by sequelae that impact outcomes. Information about these issues can assist caregivers in making decisions and planning care after discharge from the neonatal intensive care unit (NICU). The objectives of this study were to describe the clinical characteristics of neonates born in the periviable period (≤25 weeks gestation) requiring tracheotomy and to highlight their hospital course, complications and status upon NICU discharge. Methods: Retrospective analysis at four tertiary care academic children's hospitals. Medical records of neonates born ≤25 weeks gestation who required tracheotomy between January 1, 2012 and December 31, 2018 were reviewed. Demographics, medical comorbidities, and tracheostomy related complications were studied. Feeding, ventilation, and neurodevelopmental outcomes at time of transfer from NICU were evaluated. Results: Fifty-two patients were included. The mean gestational age was 24.3 (95% confidence interval, 24.1 to 24.5) weeks. The mean birth weight was 635 (95% CI: 603 to 667) grams and 50 (96.2%) children had BPD. At time of discharge from the NICU, 47 (90.4%) required mechanical ventilation, four (7.7%) required supplemental oxygen and one (1.9%) was weaned to room air. Forty-two (80.8%) were discharged with a gastrostomy tube, seven (28%) with a nasogastric tube, and three (5.8%) were on oral feeds. Two (3.8%) suffered hypoxic ischemic encephalopathy, 27 (51.9%) had neurodevelopmental delay, seven (13.5%) were diagnosed with another anomaly, and 16 (30.8%) were considered normal. Complications related to the procedure were observed in 28 (53.8%) neonates. Granulation tissue was seen in 17 (32.7%), wound break down or cellulitis in three (5.8%), one (1.9%) with tracheostomy plugging, three (5.8%) with dislodgement of the tracheostomy tube and four (7.7%) developed tracheitis. Conclusions: Tracheostomy in infants born in the periviable period is primarily performed for BPD and portends extended ventilatory dependence. It is associated with non-oral alimentation at the time of discharge from the NICU and developmental delay. 
Mortality directly related to the procedure is rare. Minor complications are common but do not require surgical intervention. These data may aid in counseling caregivers about the procedure in this vulnerable population. abstract_id: PUBMED:30396419 Neonatal Tracheostomy. Neonatal and infant tracheostomies have been valuable in the care and survival of children over the past century. With the implementation of neonatal and pediatric intensive care units, more infants are surviving conditions that were considered fatal. Neonatal tracheostomy plays a vital role in many of these conditions, with significant implications and association with overall mortality, morbidity, and developmental outcomes. Although the technique has not changed much, there have been significant evolutions in indications, survival, complications, and technological advances. Improved outcomes research to decrease the high associated morbidities is needed. abstract_id: PUBMED:3397233 Flexible fiberoptic bronchoscopy in the neonatal intensive care unit. The pediatric otolaryngologist is often asked to evaluate intubated patients in the neonatal intensive care unit (NICU) because of sudden deterioration in respiratory status unresponsive to medical management. Previously, evaluation of infants required transportation to the operating room for rigid endoscopy. At the present time, however, flexible fiberoptic bronchoscopy may be performed on such patients in the NICU as an initial diagnostic or therapeutic measure. The procedure is performed through an endotracheal tube or tracheostomy tube already in place, thereby assuring maintenance of a safe airway. Ten representative cases are presented detailing the indications, technique and potential applications in the NICU. abstract_id: PUBMED:24215607 Outcomes of tracheostomy in the neonatal intensive care unit: is there an optimal time? Objective: To compare short-term outcomes of infants who underwent early versus late tracheostomy during their initial hospitalization after birth and determine the association, if any, between tracheostomy timing and outcomes. Study Design: Retrospective chart review of infants who underwent a tracheostomy during their initial hospitalization at a single site. Results: The median (range) gestational age of our cohort (n = 127) was 28 (23-42) weeks and birth weight was 988 (390-4030) g. Tracheostomy indications included airway lesions (47%), bronchopulmonary dysplasia (25%), both (22%) and others (6%). Median postmenstrual age (PMA) at tracheostomy was 45 (35-75) weeks. Death occurred in 27 (21%) infants and 65 (51%) infants were mechanically ventilated. G-tube was present at discharge in 42 (33%) infants. Infants who underwent early tracheostomy (<45 weeks PMA) (n = 66) had significantly lower gestational ages, weights and respiratory support than the late (≥45 weeks PMA) (n = 61) group. Death (29.5% versus 14%), home ventilation (41% versus 21%) and G tube (44% versus 14%) were significantly more frequent in the late tracheostomy group. On bivariate regression, outcomes were not independently associated with tracheostomy timing, after adjustment for gestational age and respiratory support. Conclusions: Of infants who underwent tracheostomy during the initial hospitalization after birth, 21% died. On adjusted analysis, tracheostomy timing was not independently associated with outcomes. abstract_id: PUBMED:33205548 Outcomes of infants with severe bronchopulmonary dysplasia in the pediatric intensive care unit.
Background: Some infants with severe bronchopulmonary dysplasia (sBPD) are referred to higher-level centers for multidisciplinary care, including the pediatric intensive care unit (PICU). However, information regarding these infants is limited in PICUs. Methods: We investigated the characteristics and outcomes of preterm infants with sBPD referred to the PICU of a tertiary hospital. This retrospective cohort study included 14 preterm infants with sBPD who were transferred to the PICU beyond 40 weeks' postmenstrual age (PMA) because of weaning failure, from January 1, 2014, to September 30, 2018. Results: The median age at referral was 47.1 weeks (range, 43.6-55.9 weeks), and the median length of stay in the previous neonatal intensive care unit was 154 days (range, 105.8-202.3 days) after birth. After referral the following major comorbidities were found in the patients: large airway malacia, n = 7 (50.0%); significant upper airway obstruction, n = 3 (21.4%); and pulmonary arterial hypertension, n = 8 patients (57.1%). Finally, eight patients (57.1%) were successfully extubated without tracheostomy. Final respiratory support of the patients was determined at a median PMA of 56 weeks (range, 48-63 weeks). Age at referral (P = 0.023) and large airway obstruction (P = 0.028) were significantly related to a decrease in successful extubation. Conclusion: Based on a timely and individualized multidisciplinary approach, some of the prolonged ventilator-dependent infants, even those beyond term age, could be successfully extubated. abstract_id: PUBMED:35578392 Chronic lung disease in full-term infants: Characteristics and neonatal intensive care outcomes in infants referred to children's hospitals. Objective: To describe characteristics, outcomes, and risk factors for death or tracheostomy with home mechanical ventilation in full-term infants with chronic lung disease (CLD) admitted to regional neonatal intensive care units. Study Design: This was a multicenter, retrospective cohort study of infants born ≥37 weeks of gestation in the Children's Hospitals Neonatal Consortium. Results: Out of 67,367 full-term infants admitted in 2010-2016, 4886 (7%) had CLD based on receiving respiratory support at either 28 days of life or discharge. 3286 (67%) were still hospitalized at 28 days receiving respiratory support, with higher mortality risk than those without CLD (10% vs. 2%, p < 0.001). A higher proportion received tracheostomy (13% vs. 0.3% vs. 0.4%, p < 0.001) and gastrostomy (30% vs. 1.7% vs. 3.7%, p < 0.001) compared to infants with CLD discharged home before 28 days and infants without CLD, respectively. The diagnoses and surgical procedures differed significantly between the two CLD subgroups. Small for gestational age, congenital pulmonary, airway, and cardiac anomalies and bloodstream infections were more common among infants with CLD who died or required tracheostomy with home ventilation (p < 0.001). Invasive ventilation at 28 days was independently associated with death or tracheostomy and home mechanical ventilation (odds ratio 7.6, 95% confidence interval 5.9-9.6, p < 0.0001). Conclusion: Full-term infants with CLD are at increased risk for morbidity and mortality. We propose a severity-based classification for CLD in full-term infants. Future work to validate this classification and its association with early childhood outcomes is necessary.
abstract_id: PUBMED:32093927 Home Oxygen Use and 1-Year Readmission among Infants Born Preterm with Bronchopulmonary Dysplasia Discharged from Children's Hospital Neonatal Intensive Care Units. Objective: To determine associations between home oxygen use and 1-year readmissions for preterm infants with bronchopulmonary dysplasia (BPD) discharged from regional neonatal intensive care units. Study Design: We performed a secondary analysis of the Children's Hospitals Neonatal Database, with readmission data via the Pediatric Hospital Information System and demographics using ZIP-code-linked census data. We included infants born <32 weeks of gestation with BPD, excluding those with anomalies and tracheostomies. Our primary outcome was readmission by 1 year corrected age; secondary outcomes included readmission duration, mortality, and readmission diagnosis-related group codes. A staged multivariable logistic regression was adjusted for center, clinical, and social risk factors; at each stage we included variables associated at P < .1 in bivariable analysis with home oxygen use or readmission. Results: Home oxygen was used in 1906 of 3574 infants (53%) in 22 neonatal intensive care units. Readmission occurred in 34%. Earlier gestational age, male sex, gastrostomy tube, surgical necrotizing enterocolitis, lower median income, nonprivate insurance, and shorter hospital-to-home distance were associated with readmission. Home oxygen was not associated with odds of readmission (OR, 1.2; 95% CI, 0.98-1.56), readmission duration, or mortality. Readmissions for infants with home oxygen were more often coded as BPD (16% vs 4%); readmissions for infants on room air were more often gastrointestinal (29% vs 22%; P < .001). Clinical risk factors explained 72% of center variance in readmission. Conclusions: Home oxygen use is not associated with readmission for infants with BPD in regional neonatal intensive care units. Center variation in home oxygen use does not impact readmission risk. Nonrespiratory problems are important contributors to readmission risk for infants with BPD. abstract_id: PUBMED:35922194 Clinical characteristics of 14 cases of neonatal tracheotomy in neonatal intensive care unit. Objective: To analyze the clinical characteristics of neonatal tracheotomy in neonatal intensive care unit (NICU). Methods: This single-center retrospective study included 14 neonates admitted to NICU of Beijing Children's Hospital, Capital Medical University from January 2016 to August 2021, and were <28 days of age on admission, who met the criteria of tracheotomy and finally completed the procedure. The clinical characteristics including age, weight, duration of ventilation, etiology of tracheotomy, length of hospital stay and prognosis were summarized and analyzed. Wilcoxon signed-rank test was used to compare the weight gain velocity and the duration of ventilation before and after tracheotomy. Paired t-test was used to compare the hospitalization length before and after tracheotomy. Spearman correlation was used to analyze the correlation between the clinical characteristics and outcomes. Results: For the 14 neonates, the gestational age was (38±4) weeks and birth weight was (2 824±949) g. Nine of them were male. The age on transportation was 16 (6, 25) d. A total of 10 neonates were on invasive ventilation on admission, the other 4 were on nasal continuous positive airway pressure support. Bilateral vocal cord paralysis (7 cases) was the commonest cause of tracheotomy.
The age on operation was 33 (22, 44) d. There were statistically significant differences in duration of ventilation and weight gain velocity before and after operation (19.00 (10.50, 34.00) vs. 0.86 (0.06, 3.25) d, 1.66 (-0.16, 5.54) vs. 4.69 (2.30, 9.32) g/(kg·d), Z=3.01 and -1.98, both P<0.05). The total hospital stay in NICU was (37±12) d. One neonate died during hospitalization. The existence of pneumonia on admission was positively correlated to NICU stay length (r=0.57, P=0.027), and the pre-operational weight gain velocity was negatively correlated to the post-operational NICU stay length (r=-0.73, P=0.020). There were 4 neonates de-cannulated during 7-38 months after the tracheotomy, and 5 neonates still wearing the tracheal cannulation during 15-66 months after the tracheotomy. Two neonates died and 2 neonates were lost to follow-up after discharge. All neonates could not vocalize normally before de-cannulation, and the language development obviously lagged behind the normal age group after de-cannulation. Conclusions: Bilateral vocal cord paralysis is the commonest cause of neonatal tracheotomy. The benefit of tracheotomy for NICU neonates with surgical indications is obvious, especially in facilitating extubation and improving weight gain. abstract_id: PUBMED:1776881 Ultrathin flexible bronchoscopy in neonatal intensive care units. Thirty seven flexible bronchoscopies were performed in 33 infants in a neonatal intensive care unit, using a 2.2 mm flexible ultrathin bronchoscope. Twenty eight procedures were performed via an endotracheal tube or tracheostomy and nine in spontaneously breathing infants. Indications for endoscopy included persistent atelectasis and/or emphysema (n = 21), unexplained acute respiratory distress (n = 10), stridor (n = 3), assessment of congenital abnormalities of the tracheobronchial tree (n = 2), and follow up of an endobronchial granuloma during the course of corticosteroid treatment (n = 1). Abnormal airway dynamics and/or abnormal structure were seen in 23 of 37 cases. In 54% of the procedures, the results of bronchoscopy had a direct effect on further management. The procedure was well tolerated and completed in less than two minutes. Our results suggest that the ultrathin flexible bronchoscope improves airway exploration and the understanding of respiratory disorders during the first months of life, particularly in ventilated infants. Answer: The outcomes of tracheostomy in the neonatal intensive care unit (NICU) have been studied to determine if there is an optimal time for the procedure. A retrospective chart review of infants who underwent a tracheostomy during their initial hospitalization at a single site found that the timing of tracheostomy was not independently associated with outcomes after adjustment for gestational age and respiratory support. The study compared short-term outcomes of infants who underwent early versus late tracheostomy and found that death, home ventilation, and gastrostomy tube presence were more frequent in the late tracheostomy group, but these outcomes were not independently associated with tracheostomy timing (PUBMED:24215607). In a study of neonates born in the periviable period (≤25 weeks gestation) requiring tracheotomy, it was found that tracheostomy is primarily performed for bronchopulmonary dysplasia (BPD) and is associated with extended ventilatory dependence, non-oral alimentation at the time of discharge from the NICU, and developmental delay.
Mortality directly related to the procedure is rare, and minor complications are common but do not require surgical intervention (PUBMED:33957547). Another study on neonatal tracheostomy highlighted its vital role in the care and survival of children, with significant implications for overall mortality, morbidity, and developmental outcomes. The study suggested that improved outcomes research is needed to decrease the high associated morbidities (PUBMED:30396419). Overall, while the timing of tracheostomy in the NICU does not appear to be independently associated with outcomes, the procedure itself is associated with significant implications for the infant's health and development. The decision to perform a tracheostomy and its timing should be carefully considered, taking into account the individual clinical circumstances of each infant.
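For readers who want to see what the "adjusted analysis" in PUBMED:24215607 refers to in practice, the sketch below fits an unadjusted and an adjusted logistic regression of death on tracheostomy timing using simulated data. The data, variable names, and effect sizes are entirely hypothetical, chosen only to illustrate adjusting for gestational age and respiratory support; this is not a reproduction of the study's dataset or analysis code.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data; none of these numbers come from the study itself.
rng = np.random.default_rng(0)
n = 127
data = pd.DataFrame({
    "gestational_age_wk": rng.normal(28, 3, n),
    "on_ventilator": rng.integers(0, 2, n),
    "late_trach": rng.integers(0, 2, n),  # 1 = tracheostomy at or after 45 weeks PMA
})
# Simulate mortality driven by gestational age and ventilation, not by timing,
# to mirror the abstract's conclusion.
log_odds = -1.0 - 0.15 * (data["gestational_age_wk"] - 28) + 1.2 * data["on_ventilator"]
data["died"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-log_odds)))

# Unadjusted model: timing only.
unadjusted = smf.logit("died ~ late_trach", data=data).fit(disp=False)
# Adjusted model: timing plus the covariates named in the abstract.
adjusted = smf.logit("died ~ late_trach + gestational_age_wk + on_ventilator",
                     data=data).fit(disp=False)
print(unadjusted.params["late_trach"])  # crude association with timing
print(adjusted.params["late_trach"])    # association after adjustment

The point of the comparison is that the crude and adjusted coefficients for late tracheostomy can differ even when timing has no true effect, which is the kind of confounding by illness severity the study's adjustment was intended to address.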